ElevenLabs Voice AI Session & NCRB/NPM Fireside Chat


Session at a glance: Summary, keypoints, and speakers overview

Summary

The session opened with Swati Sharma highlighting that India’s 1.4 billion citizens speak diverse languages, yet most online content is available only in English, creating a major accessibility gap [1-6]. Shailendra Pal Singh introduced the Bhashni (also referred to as Pashni/Bhajani) translation plugin, already deployed on over 500 websites, designed to break this language barrier by automatically translating website content into any of India’s 22 scheduled languages [7-19]. He illustrated the need with a farmer who had to travel 40 km to fill an English form for the PM Kisan Samman Nidhi, underscoring that 800 million Indians are not fluent in English and that 95% of digital content is English-only; the plugin addresses this by providing a lightweight, one-liner code that can be copied onto any site to render multilingual pages within minutes without backend redesign [21-45]. Swati noted that more than 400 websites have already integrated the plugin, generating over 24 million translation inferences and creating 1.5 million glossary entries to improve contextual accuracy [96-99]. During a live demo she showed that the plugin instantly adds a language selector supporting all 22 languages, works across all pages, and complies with Digital Brand Identity Management (DBM) standards to ensure accessibility for visually impaired users [52-95]. The solution requires no developer expertise: anyone can copy-paste the single line of code to make a site multilingual [46-49], and it is framework-agnostic, operating on any website stack [88-89]. Advanced capabilities include translating from any source language, skipping specific elements via a CSS class, customizing language order, limiting displayed languages, handling portals without page reload, batching dynamic content to reduce API calls, voice-activated translation, and URL redirection to language-specific domains [96-180].
Glossaries allow precise control over terminology and transliteration (e.g., preserving “Vakil Saab Bridge” or rendering the home tab as “Mukhya Prash” rather than the literal “ghar”), with over 1.5 million entries created to enhance contextual relevance [190-224]. Future plans expand support to 36 Indian languages plus 35 international ones, automate glossary uploads, and add text-to-speech and screen-reader features for broader accessibility [190-197]. Audience questions addressed commercial use (private entities can adopt the solution under separate agreements) and the possibility of region-based default language selection, which is technically feasible pending further review [304-322]. Swati explained that glossaries are customized per client and ingested into their solutions, and while the team does fine-tune models for specific domains, this requires careful classification and is an ongoing effort [330-336]. The discussion concluded by emphasizing that language is an identity and that the Bhashni translation plugin, together with tailored glossaries, aims to provide a multilingual AI layer for digital inclusion across India’s diverse population [301-303].


Keypoints


Major discussion points


The language barrier in India and the need for a multilingual digital layer – the speakers stress that India’s 1.4 billion citizens speak many languages, yet most online content is only in English, creating a “language divide” that excludes 800 million non-English speakers [1-6][27-29][31-34].


Introduction and live demo of the Bhashni (also rendered Pashni/Bhajani) translation plugin – described as a lightweight, one-liner code that can be copy-pasted onto any website to instantly render it in all 22 Indian scheduled languages without backend redesign; it is DBM-compliant and framework-agnostic [9-10][36-45][52-54][67-71][79-88].


Key technical features and how the plugin handles real-world challenges – automatic multilingual support across all pages, skip-translation classes, customizable language ordering, domain-level redirection, batch processing of dynamic content, no-reload option for portals, and automatic handling of mixed-language text [79-88][90-108][111-118][119-126][129-136][141-148][152-160][162-168][170-176][179-181][184-186].


The role of glossaries in improving translation quality and contextual relevance – glossaries capture domain-specific terms, correct mistranslations, manage transliteration, and are customized per client; examples include handling “home” vs. “ghar”, proper nouns like “Vakil Saab Bridge”, and abbreviation disambiguation; the team also discusses automation of glossary ingestion and future fine-tuning of models [96-101][115-124][128-136][190-197][210-218][221-226][230-238][241-250][254-262][267-274][276-284][286-294][295-299][330-336].


Audience questions on commercial use, regional default language settings, and glossary maintenance – participants ask whether the solution can be used by private entities, how default languages could be set per region, and how glossaries are managed, customized, and potentially used for model fine-tuning [304-306][307-310][311-317][318-325][326-329][330-336].


Overall purpose / goal of the discussion


The session aims to raise awareness of India’s digital language exclusion, present the Bhashni translation plugin as a scalable, easy-to-integrate infrastructure for multilingual web content, demonstrate its capabilities and technical nuances, explain the supporting glossary system, and engage stakeholders (government, private, and developers) about adoption, customization, and future enhancements.


Overall tone and its evolution


The conversation begins with a problem-focused, urgent tone, highlighting the scale of exclusion. It then shifts to an enthusiastic, demonstrative tone as the speakers showcase the plugin’s simplicity and power. A technical, solution-oriented tone follows when discussing features, challenges, and glossaries. Finally, during the Q&A, the tone becomes responsive and collaborative, addressing audience concerns and emphasizing partnership opportunities. Throughout, the tone remains professional, optimistic, and supportive.


Speakers

Swati Sharma


– Role/Title: Presenter / Expert on Bhashini translation solutions and language accessibility initiatives


– Area of Expertise: Multilingual AI, digital inclusion, language technology


– Source: [S7]


Shailendra Pal Singh


– Role/Title: Senior General Manager, Bhashini


– Area of Expertise: Technical implementation and integration of translation solutions, language barrier mitigation


– Source: [S3]


Audience


– Role/Title: General attendees (including professionals such as professors, researchers, and industry experts)


– Area of Expertise: Varied across public administration, cybersecurity, digital identity, etc.


– Sources: [S4], [S5], [S6]


Additional speakers:


– None identified beyond the listed speakers.


Full session report: Comprehensive analysis and detailed insights

The session opened with Swati Sharma foregrounding India’s massive linguistic divide: a nation of 1.4 billion people – and therefore 1.4 billion distinct voices – nonetheless finds almost all online content presented in a single language, predominantly English [1-6][27-29]. She argued that this “language divide” excludes roughly 800 million citizens who are not fluent in English, limiting their ability to access digital services and undermining inclusive development [31-34][27-28].


Shailendra Pal Singh then framed the problem as a national imperative to “break the language barrier that exists in our country” and introduced the Bhashni (also known as Pashni/Bhajani) translation plugin as a concrete solution [7-13]. He noted that the plugin is already deployed on more than 500 websites, providing automatic multilingual rendering for users who cannot understand English or Hindi content on state-level portals [9-10][19]. The technology leverages over 350 language models from the team’s platform to deliver translations in any of India’s 22 scheduled languages [19-20].


To illustrate the human impact, Swati recounted a farmer who had to travel 40 km merely to find someone capable of completing an English-only PM Kisan Samman Nidhi form, underscoring how the language gap translates into real-world hardship [21-28]. She positioned the Bhashni plugin as the remedy, describing it as a “unified multilingual layer for India’s digital ecosystem” that treats language not as a mere feature but as foundational infrastructure for digital inclusion [31-36].


The core of the solution is a framework-agnostic, DBM-compliant JavaScript one-liner that can be copied and pasted onto any website, instantly rendering the entire site in all 22 Indian scheduled languages without any backend redesign or repeated code insertion [36-45][46-49][52-55][67-71][79-86][88-95]. Once embedded on the home page, it automatically applies to every subsequent page, so developers do not need to insert the code repeatedly.
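The session does not reproduce the exact snippet, but a one-line embed of this kind typically looks like the following; the `src` URL and placement are illustrative placeholders, not the real Bhashini endpoint:

```html
<!-- Hypothetical sketch: a single script tag pasted once, e.g. before </body>.
     The URL below is a placeholder, not the actual plugin's address. -->
<script src="https://example.invalid/translation-plugin.js" defer></script>
```

Because the script is loaded client-side, it can attach the language selector to whatever markup the page already has, which is what makes the approach framework-agnostic.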


Advanced capabilities follow a logical progression:


* Direct source-to-target translation (e.g., Marathi → Hindi) without an English intermediate [104-108];


* A CSS “skip-translation” class that lets developers exclude elements such as calendars, email addresses, or other non-translatable sections [111-118];


* Customizable language ordering and the option to limit the dropdown to a subset of languages [119-126][141-148];


* Portal-mode operation that prevents full-page reloads and preserves any data the user has entered [149-151];


* Batch processing of dynamic content to reduce API calls and stabilise response times, illustrated with the State Bank of India and MyBharat Hotel examples [162-168];


* Automatic detection and skipping of already-translated mixed-language segments [152-160];


* Voice-activated language selection demonstrated on the Rail Madad site, where speaking a language name triggers instant localisation [171-174]; and


* URL redirection that maps a language selection to a dedicated domain (e.g., a Hindi-specific domain for the MSD website) [176-180].
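Two of the behaviours above, the skip-translation class and batched handling of dynamic content, can be sketched in plain JavaScript. The class name, function names, and batch size here are illustrative assumptions, not the plugin's actual API:

```javascript
// Assumed marker class for elements that must not be translated
// (calendars, email addresses, etc.), as described in the session.
const SKIP_CLASS = "skip-translation";

// Gather translatable items, dropping any node marked with the skip class.
// Nodes are modelled here as plain objects for illustration.
function collectTranslatable(nodes) {
  return nodes.filter((n) => !n.classes.includes(SKIP_CLASS));
}

// Batch queued strings into one translation call instead of one call per
// string, which is how the speakers describe reducing API calls and
// stabilising response times for rapidly changing dynamic content.
function makeBatcher(translateBatch, maxSize = 50) {
  let queue = [];
  return {
    push(text) {
      queue.push(text);
      if (queue.length >= maxSize) this.flush();
    },
    flush() {
      if (queue.length === 0) return [];
      const batch = queue;
      queue = []; // reset before the (single) batched call
      return translateBatch(batch);
    },
  };
}
```

A real implementation would also watch the DOM for new content (e.g. with a `MutationObserver`) and feed it through the same batcher, but the queue-and-flush shape above is the core idea.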


Impact metrics substantiate the rollout: more than 400 websites have integrated the plugin, generating over 24 million translation inferences and creating upwards of 1.5 million glossary entries to improve contextual accuracy [96-99]. Glossary management is client-specific: each customer’s domain-specific glossary is ingested only into that client’s solution, ensuring correct terminology and preventing cross-site contamination [330-336]. Representative glossary entries illustrate the breadth of the effort: correcting mistranslations such as the home tab rendered as the literal “ghar” instead of the intended “Mukhya Prash” [186-190]; preserving proper nouns like “Vakil Saab Bridge” through transliteration [221-226]; resolving abbreviation ambiguities such as “BN” meaning “battalion” for the BSF rather than “billion” [276-284]; skipping abbreviation translation for the animal-husbandry department [260-262]; handling the nuance between “authorized officer” and “newt adhikari” [268-270]; and ensuring the term “Maanenya” appears only when explicitly provided in the source text for the “PS to Minister” use-case [276-284]. Additional practical issues uncovered during deployments, such as hyphen mismatches, singular-plural inconsistencies, and punctuation-induced sentence breaks, were remedied through targeted glossary entries [241-250][254-262][267-274][295-299].
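The two glossary use-cases described in the session, post-translation overrides and transliteration-only terms, can be sketched as a small lookup step applied after the model's output. The entry format and function below are assumptions for illustration, reusing the session's own examples ("Mukhya Prasht" is the transcript's rendering of the Hindi for the home tab):

```javascript
// Post-translation overrides: force a preferred rendering for a term
// even when the model's literal translation is technically correct.
const postTranslation = new Map([
  ["home", "Mukhya Prasht"], // home tab should not become the word for a house
]);

// Transliteration-only terms: proper nouns that must keep their identity
// and never be translated (the session's "Vakil Saab Bridge" example).
const transliterateOnly = new Set(["Vakil Saab Bridge"]);

// Apply the glossary to one term: keep protected names as-is, use an
// override if one exists, otherwise trust the model's output.
function applyGlossary(sourceTerm, modelOutput) {
  if (transliterateOnly.has(sourceTerm)) return sourceTerm;
  return postTranslation.get(sourceTerm.toLowerCase()) ?? modelOutput;
}
```

Since glossaries are ingested per client, each deployment would carry its own `postTranslation` and `transliterateOnly` tables rather than a shared global set.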


Looking ahead, the roadmap includes support for 36 Indian languages (up from the current 22) and the addition of 35 international languages, automation of glossary uploads via an onboarding portal, and an accessibility bar that integrates text-to-speech and screen-reader functionality [189-197].


Audience Q&A – Private or commercial entities can adopt the plugin under separate collaboration agreements, with details available at the Bhashni Pavilion [304-310]; region-based default language selection (e.g., Hindi for Delhi users, Marathi for Maharashtra users) is technically feasible but requires a feasibility assessment before implementation [311-324]; glossaries are customised per client and model fine-tuning is undertaken after careful domain classification, a lengthy but ongoing process [330-336].
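The region-default idea raised in the Q&A and the demoed language-to-domain redirection could both be modelled as simple lookup tables; all region names, language codes, and domains below are hypothetical, since the session only discusses feasibility:

```javascript
// Hypothetical region-to-default-language map (the Q&A examples:
// Hindi for Delhi users, Marathi for Maharashtra users).
const regionDefaults = new Map([
  ["Delhi", "hi"],
  ["Maharashtra", "mr"],
]);

// Hypothetical language-to-domain map for URL redirection, as in the
// MSD example where selecting Hindi leads to a Hindi-specific domain.
const languageDomains = new Map([
  ["hi", "https://hi.example.invalid"],
]);

// Pick a default UI language for a user's region, falling back to English.
function defaultLanguageFor(region, fallback = "en") {
  return regionDefaults.get(region) ?? fallback;
}

// Resolve where a language selection should send the user; languages
// without a dedicated domain stay on the current site and are translated
// in place instead.
function redirectTargetFor(langCode, currentUrl) {
  return languageDomains.get(langCode) ?? currentUrl;
}
```

Keeping both mappings as data rather than code is what would make the "pending feasibility review" part tractable: enabling a new region or domain becomes a configuration change.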


Swati concluded by emphasizing that language is not merely a set of words but an expression of identity, urging stakeholders to prepare India’s linguistic landscape for the future of AI by creating glossaries, embedding multilingual layers, and championing digital dignity, accessibility and inclusivity for all citizens [301-303].


Session transcript: Complete transcript of the session
Swati Sharma

accessibility, language accessibility and language inclusivity. We are a country of 1.4 billion people. More importantly, a country of 1.4 billion voices. We all think differently, we all speak differently, and we all dream differently. But whenever we go online, everything is available only in one language. Majorly English.

Shailendra Pal Singh

To break the language barrier that exists in our country. And we have different solutions and different integrations that we have. One of them is Pashni translation plugin, which is already sitting on top of more than 500 websites, if I’m not wrong, the exact number. And we are enabling people, we are enabling citizens of India who are essentially not being able to understand in English and Hindi because most of the digital content that you see, primarily the website, maximum you’ll see is a website which is sitting in a state. The default language would be there or English primarily. But then what about rest of the languages? Imagine a scenario that I’m someone from north and I’m living there in Maharashtra.

Mostly you will see the content in Marathi or English. But then what about having the same content? I don’t know English. But I really want to understand what is there. And I want to convert it, the different policies at the state level, different guidelines, different content, maybe creative content, etc. You need to know in my language. So, Bhajani Translation Plugin is one of the engineered solution using all the models that you might already be aware of. 350 plus models from our platform. We have this solution as Peksa Swati.

Swati Sharma

So, as Shailendra Pal mentioned, last year a farmer wanted to apply for the PM Kisan Samman Nidhi. It’s basically a very simple form that the farmer has to fill. But the form was in English. The farmer literally had to travel 40 kilometers only to find somebody who can actually help him out filling the form. This is the language divide. This is the barrier that we are trying to avoid. Eliminate. 800 plus million people are not fluent in English. And 95% of the content which is available, it is in English. This is where Bhashni comes into picture. The National Language Translation Mission of India. We are trying to transcend the language barrier. We are creating a unified multilingual layer for India’s digital ecosystem.

We are not just providing language as a feature. We are providing language as an infrastructure. We are encouraging language as the foundation for digital inclusion. Next slide, please. So, like sir introduced, the Bhashni Translation Plugin. It’s a powerful product through which you can have any website being translated into multiple languages, being accessible to all the people in the last mile. And this happens in matter of minutes. Not days. Or months. Or just minutes. This is the power of the product that we are talking about. And you don’t have to rebuild the entire website. You don’t have to redesign it. There is no back-end overhaul. Just a one-liner, very lightweight, simple code that you can copy and paste onto the website and you will have your website speaking multiple Indian languages.

This is how accessibility is made effortless, inclusion is made scalable, and the last mile reach is made real. So I just want anybody to see. Anybody who can copy and paste. Like we don’t need a developer or a person who knows JavaScript or the entire back-end, just somebody who knows copy and paste, and we’ll see how with the help of that you can have the entire website multilingual. So anybody who would like to do that? Yes, sir, please.

Shailendra Pal Singh

Maybe, you want to open a website first and show what exactly VashuCast is.

Swati Sharma

So this is the Vashni’s website and here is the plugin that has been integrated on the website. This plugin will help us have the entire website available in all 22 Indian languages. All right, so while we just give a quick glimpse of what Bhajani translation plugin is, it is basically a very lightweight utility. Though you find it very simple, the content that we have on this website primarily is in English, and there are other challenges that we would like to discuss later on, as to how this translation plugin brings in, though it looks very easy, just you clicked on a button and then you do a translation altogether. But then we’ll discuss more about what are the different challenges we come across, not from the engineering side of it but on the language side of it, how we cater and have this challenge taken care of. So this is just a plugin, we just wanted to tell you this is how it works. But you know, if you go back to English, then, you know, we will just talk about what you wanted to, we’ll continue with that. So I just wanted to have a quick demo of how you can integrate this plug-in onto the website. So, if yes, you can come, we’ll just see how, with the help of just the knowledge of copy and paste, we can have the entire code implemented and you’ll have the entire website translated into multiple different languages.

For the purpose of this demo we had created this dummy website and the code for this website is here. So this is the code that most of us would not understand. And I would like to request sir to just copy and paste the plug-in code that we have. So we want to tell that this website content is only in English and you want to add multilingual flavor to it using Bhashini. You can integrate the solution that we have on the top of

Shailendra Pal Singh

That’s what you’ve meant.

Swati Sharma

Yes. So if you can sir just copy and paste this code. The code which is written here. Yes.

Shailendra Pal Singh

Anywhere here.

Swati Sharma

If you can just add a hyphen between translation and plug-in.

Shailendra Pal Singh

Yes.

Swati Sharma

Can you go back to the website? Refresh it. So you can see that the plug-in is added. And we can now have this website available in all 22 Indian scheduled languages. So that’s the power of this code. We’ve taken care of everything that is happening at the back-end, and you just have to copy and paste the code that we’ve created for you. It’s as simple as that.

Shailendra Pal Singh

So Swati, so let’s say I’ve embedded this particular thing on this particular website. Now it is available. There is the icon. What about if I go to next pages, right? Will the system understand that there’s a link in the I chose and I go to any page? It will reflect Hindi or I have to select every time I go to any page as my language, which I chosen as Hindi.

Swati Sharma

So you don’t have to apply this code on every page. The pages of the website will automatically understand that the multilingual feature has to be embedded on all the pages. So if you move on to any other page of the website. So this was just a dummy website that we had created. Let me go to Bajni translation plugin in Bhajani’s website. So if you go to any of the pages, the plugin will remain there. And you will have the multilingual feature added on all the pages of the website, not just the home page. So let’s go back to the slides now. So like we just demonstrated, the code that we have for the plugin that we are talking about is a one-liner, very lightweight, simply integrated code, which you can use to have your website available in all 22 Indian scheduled languages.

It is DBM compliant and framework agnostic. So if you have your website, in different, made in different languages, it’s irrespective of that, the code will be applied to your website and you can use the same code.

Shailendra Pal Singh

So Swati, can you just give some light on what is DBM compliant as how the website is DBM compliant? If I, let’s say I have a government website and I want to include the Bhasini translation plugin onto it, what is this DBM compliant that you talked about?

Swati Sharma

So these are the compliances mentioned in the digital brand identity management compliance book that is available. So for everybody to have an accessible website, the DBM compliances have to be followed. And we have the DBM compliant code with us wherein all the accessibility features like, you know, that happens in the backend, you know, for any person who is a visually special person who wants to access the website, is able to do that with the help of the technical integrities that we’ve incorporated into the plug-in code that we have. So this is a glimpse of the impact that we’ve already created. We have approximately more than 400 plus websites that are already integrated with Pashni translation plug-in.

From those websites, we get approximately 24 million plus inferences. And we’ve created 1.5 million plus glossaries. So glossary is something that I will take at a later section during the session only. But just for a short description, glossary enhances the translation in such a manner that the end citizen who is actually consuming the content from the website is able to understand the content. And also, these are the 22 Indian scheduled languages in which the plug-in is available. Next slide, please. So while we were creating the plugin, we had to create something that is, you know, a one-size-fits-all product, which is something very difficult to create because everybody has different requirements, and to cater to all those requirements, we had to make one product that can simply be accessed by everybody.

So these are some of the use cases that I will be discussing that our plugin has the capability to resolve to. The first one is that generally what happens in, you know, a product like this, you translate, you know, from English to the target language. But here in our plugin, what we’ve done is that even if your website is, let’s say, created in a language other than English, let’s say Marathi. That can also be translated to the targeted language directly. So you don’t have to first translate the website to English and then move on to the targeted language. You can have the source and target language as per your requirement. So that’s how we’ve not, you know, you don’t have to get into the bridge of creating English as an intermediary to move from one language to another.

Next slide, please. Okay, so when I talk about a website, there are different sections of the website. And not all these sections would you want to translate. For example, the calendar, if there’s a calendar, you would not want it to be translated into, you know, the target language. Including email IDs and, you know, there are certain sections that a lot of people didn’t want to be translated. So there is one class that you can embed that is the skip translation class. Embedding that will help you. Navigate to the, navigate the sections that you don’t want to be translated. So, that’s also one feature that we have with our plugin. Next slide, please. Okay, so, you know, you saw the plugin, right?

There were languages listed in a certain manner in the plugin. So, what happens is at, you know, many regional places, we want the plugin to have the regional languages on top. So, for example, after English, people don’t want to go alphabetically like Assamese, Bengali. They would want their regional language. In this case, they wanted Hindi to come in the, you know, to change the order of the languages that are appearing. And that is also possible. So, if you want your regional language to come on top, you can have that with our plugin. So, you know, majorly what we say is that we would want to… We want to display our website in a certain language.

So for example, if you created the website in let’s say English, but you would want all the users to have the language to be displayed as Hindi first. And probably then they can navigate to their own targeted language. So even if your website, the source language of your website is English, you can, there is a possibility of adding the parameter which can have the source language as Hindi or Marathi or Punjabi as the user requires for all your websites. Next slide please. Okay, so what if your… Your website has the, you know, has been created in two languages. So for example, you’ve created your website in English and Marathi also. So that was the use case that we had with finance department Maharashtra.

So they did not want translation to happen in the Marathi language and the English language, though their source language of, you know, so basically the source language of the website was English and Marathi. So if you want to skip translation for different languages also, you can do that. So in this case, what happens is that the user selects a language. If the language of the source is selected, let’s say, you know, English or Marathi, it will go redirected to the English or the Marathi page of the website. And if the user has selected any other language, it will move on to the normal process of translating it into the target language. Next slide, please. So, you know, sometimes we have portals also.

Yes. So, you know, because we would want to have websites available in all 22 Indian scheduled languages so that we try and reach out to the maximum people. But if that is your use case wherein you would want just three or four languages to be displayed for every user to be seen, you can have that also. So the drop-down will only display four languages in that case? Yes. But it’s always encouraged to have all the languages so that everybody, you know, who’s accessing your website can have the website available. Thank you, then. So, talking about this use case, what happens is that in most of the cases, we also have portals. And in portals, we have forms or, you know, we basically ask input from the user who is using the portal.

So if they apply Bhajani translation plugin and they, you know, move on from one language to another, it will reload the entire page. If it reloads the entire page, whatever the user has filled in, like their details, their name, their email IDs, all that information was lost. So what we did to capture this was that now plugin can also have the portals without the reload feature. So if you don’t want the plugin to reload every time a user selects a language from the drop-down, you can have that. Next slide, please. So this was a very interesting use case. You know, you can see this is how the website was displayed. So the source language of the website is English.

But like we can see, after every English, below every English word. there is a different language. So Haryana written in English, then Haryana written in Hindi. Puducherry written in English and some other language. So here this was use case of handling mixed languages. So what we did here was that whenever the plugin sees that the source language of the plugin is different from what characters it is getting, like here in Haryana, it is getting Hindi characters also, it will skip this translation automatically. So you would not have to skip it at your end. We’ve done it and we’ve created it, we’ve designed the plugin in such a manner that if the source language of the website is, you know, if the contents going for the translation are different from the source language of the website, it will automatically skip the translation.

Next slide, please. So… With certain use cases, what happened was… that there was a lot of dynamic content on the website. So, static content can easily be translated. Like, it is also difficult, but it’s not as difficult as handling the dynamic content. But for certain, like for State Bank of India and for MyBharat Hotel, the dynamic content was changing so rapidly that it was making too many API calls and the response time was getting delayed. So, what we did there was that we intelligently had the code running in such a manner that the dynamic content was, the translation of dynamic content was handled in batches. And that’s how the, you know, API calls, the increased API calls reduced and the response time was stabilized.

Next slide, please. Okay, so now… We all can, you know, navigate to the website, select the target language on the website and have the website available in the target language. But what if somebody cannot navigate, cannot select a language from the drop-down? We also, with Rail Madad, you know, if you go to the Rail Madad’s website, there is a mic button. So you just say out your language. So for example, if you say out Gujarati, the entire website will turn into Gujarati. So that’s the capability of it. Next slide, please. Okay, so this is a very recent use case that we’ve handled. So like you can see here, there is the MSD website. And there is also another domain name, which is Hindi, which is in Hindi.

So what the client wanted was that, you know, once the user selects Hindi from the drop-down, the translation happens, but it also redirects to the… Hindi domain of the website. So that mapping of which language to which domain, that is also something that we have done at our end and you can have URL redirection also. Next slide please. Okay, so what happens, so let me just ask, I hope everybody here understands Hindi, right? What is the translation of home in Hindi? Ghar, Ghray, that’s right, right? But the home tab on the website, if it is getting translated to Ghar, it’s not the correct translation. It should be translated to Mukhya Prash. So these kind of use cases wherein the translation which is being given by the model is correct but you would want a specific different translation for a specific word or phrases, that can also be handled through glossary.

So, the way that we have done this is that we have website. So, we have a lot of information in the information in the Just now, after we complete this, next slide please. So, this is the future roadmap for plugin that we have. We have expanded it to 36 languages, 36 more Indian languages. So, you can go to Vashni Pavilion which is right here in this hall only. We have a demo of the plugin which is available in 36 languages. We are also incorporating the 35 international languages. We have done that for certain use cases which are displayed here today at Bhaat Mandapam. Secondly, we will be talking about glossary but the glossary in, you know, traditionally the glossaries were sent to us through emails and there was a process to, you know, process the glossaries and then ingest it.

But now, we are also planning to get it automated wherein you can just simply upload the glossary from your onboarding portal. And third, we are also adding the accessibility bar to the plugin. So if you want to have text-to-speech also integrated or screen reader also integrated with the plugin that we just showed, that is also something that we are going to do in some time. So technology for dignity, Bajni Translation plugin would help. It is a powerful tool that will empower you to actually disseminate whatever information you want to, to actually reach the last mile. Moving on to the next segment, which is the glossary. So, you know, we all of us here, we would have some application, some website developed for…

the ease of the user. We would want a person, a student who is registering for a form, to be able to do it in their own preferred language. We would want a farmer to listen to the schemes that are available for him in his preferred language. We would want an Anganwadi worker to have the schemes that are available for her told to her in her own language. So that is all what we are working for. We are working for inclusivity and we are working for accessibility. Next slide please. So while we do that we also add Bhashni’s layer to all our solutions or websites to have the actual information reach the last mile.

But generally what happens is that we get a remark that the translation is not correct, that it is wrong. After analysing this with most of our customers, we realised that the users of these products are not looking for literal accuracy; they are looking to understand the content, the intent of the content on the website or solution. So we do not have to focus only on the accuracy of the translation; we have to focus on the context of the translation, the use case of the translation, the domain of the translation.

When we realised that, we understood the concept of a glossary, and that is how the glossary was formed. Next slide, please. Now you would all be waiting to understand what the glossary is about. The glossary involves two kinds of use cases. The first is post-translation, which I mentioned before: ‘home’ being translated to ‘ghar’ in Hindi is absolutely right, but the Home tab being translated to ‘ghar’ is probably not correct. In post-translation you want ‘Home’ to appear as ‘Mukhya Prishth’ on the home tab, and that is something we cater for through the glossary. The second use case is like this example: there is a bridge in Gujarat called Vakil Saab Bridge.

Vakil Saab Bridge, if translated to English, would become something like ‘Lawyer Bridge’. We would not want that. Vakil Saab Bridge is our coined terminology, and we would want it to retain its identity: it should be written as Vakil Saab Bridge in English as well. This is the use case of transliteration. So these two kinds of use cases are solved through the glossary. What we do is create these glossaries with our customers and ingest them into the customer’s specific API. Next slide, please. So, like I told you about the meaning of glossary, all of us here have different glossaries. The science domain glossaries are different.
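The two use cases just described can be sketched roughly as follows; the entry format and lookup order here are illustrative assumptions, not Bhashini’s actual glossary API.

```python
# Post-translation entries override the model's output for a specific term
# in a specific target language (romanised Hindi used for readability).
post_translation = {
    ("Home", "hi"): "Mukhya Prishth",  # the nav tab, not "ghar" (a house)
}

# Transliteration entries keep a coined proper noun unchanged in every language.
transliteration = {"Vakil Saab Bridge"}

def apply_glossary(term, target_lang, model_output):
    if term in transliteration:
        return term  # retain identity, never translate
    return post_translation.get((term, target_lang), model_output)

print(apply_glossary("Home", "hi", "ghar"))                # glossary override
print(apply_glossary("Vakil Saab Bridge", "en", "Lawyer Bridge"))  # identity kept
print(apply_glossary("Farm", "hi", "khet"))                # model output passes through
```

Any term without a glossary entry simply falls through to the model’s own translation.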

Gen Z has a different glossary altogether, and any region will have its own kind of glossary. So all of these glossaries have to be created with us. The customers who created those glossaries have got translations that are accepted and understood by the end user. For example, the Ministry of Panchayati Raj gave us 15 lakh words from the Panchayat domain, and the Survey of India has given us 16 lakh words. If we create glossaries together, we can eliminate the translation barrier completely. I will now walk you through certain use cases where we faced problems with our customers that were not actually translation issues; they were issues that could easily have been resolved through the glossary.

So if you can read this sentence here. This problem was reported to us by the Ministry of Home Affairs, where the Honourable Home Minister’s profile was not reflecting correctly. This was the English sentence, and this was the translation we were getting. Can anybody tell me what the problem is here? Because of the full stop in the honorific ‘Smt.’, the model thought the sentence had ended there, and that is why the formation of the sentence is entirely incorrect. But the solution was very simple: just add ‘Smt.’ to the glossary, or remove the dot from ‘Smt’.
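The full-stop problem can be sketched with a naive sentence splitter; the example name and the abbreviation list are hypothetical, not Bhashini’s actual pipeline.

```python
import re

# Naive splitter: every full stop followed by whitespace ends a sentence.
def naive_split(text):
    return re.split(r"(?<=\.)\s+", text)

# The period in the honorific "Smt." is mistaken for a sentence boundary,
# so the sentence is cut in two and each half is translated on its own.
print(naive_split("Smt. Devi inaugurated the scheme."))  # two fragments

# The fix mirrors the talk: protect known abbreviations before splitting
# (equivalently, register "Smt." in the glossary or drop its dot).
ABBREVIATIONS = ["Smt.", "Shri.", "Dr."]

def protected_split(text):
    for abbr in ABBREVIATIONS:
        text = text.replace(abbr, abbr.rstrip("."))
    return re.split(r"(?<=\.)\s+", text)

print(protected_split("Smt. Devi inaugurated the scheme."))  # one sentence
```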

Then we get the correct output. It is as simple as that: it is not a translation problem, it is a glossary-understanding problem. Next slide, please. Okay, can anybody tell me the difference between these two puzzle pieces? Yes: one of them has a hyphen and the other does not. When we received the glossary from MSME, there was a hyphen between ‘PMS’ and ‘Dashboard’, but on the website it was displayed without the hyphen. The glossary is that sensitive. If you give me ‘PMS-Dashboard’, it will only recognise and translate that exact string; without the hyphen it will not be recognised, and you will not get the translated output that you gave us in the glossary.

And again, here there was a singular/plural problem. ‘Street vendors’ was mentioned in the glossary sheet that we received, but ‘street vendor’ was what actually appeared on the website. If there is a singular/plural difference between the glossary sheet you give us and what is actually shown on your website or solution, it will create a mismatch. Here is another thing you can do through the glossary: we received a requirement from the Animal Husbandry Department that an entire sentence should not be translated, that the abbreviations should be skipped from translation. If you give us this sentence and this sentence as glossary pairs in English and Hindi, that can easily be achieved.
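The sensitivity described here, both the hyphen and the singular/plural cases, can be sketched as a plain dictionary lookup; the entries and romanised Hindi values are illustrative assumptions, not Bhashini’s real matcher.

```python
# Glossary entries exactly as submitted by the customer.
glossary = {
    "PMS-Dashboard": "PMS Dashboard",  # transliterated; hyphen as submitted
    "street vendors": "path vikreta",  # romanised Hindi; plural as submitted
}

def lookup(term):
    return glossary.get(term)  # exact string match, nothing fuzzier

print(lookup("PMS-Dashboard"))  # hit
print(lookup("PMS Dashboard"))  # miss: the website shows it without a hyphen
print(lookup("street vendor"))  # miss: the website uses the singular
```

Because the match is exact, the glossary text and the on-site text must agree character for character.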

Next slide, please. Okay, so in one of the glossaries we received, for ‘authorized officer’ they wanted us to write ‘niyukt adhikari’. But ‘niyukt adhikari’ actually means ‘appointed officer’. This is also something we have to be careful about, because in this case there are two kinds of users: English users and Hindi users. The English user reads it as ‘authorized officer’, but since we added the glossary entry ‘niyukt adhikari’, the Hindi user understands it as ‘appointed officer’. So we have to be very careful while drafting glossaries.

Next slide, please. Okay, so can I just ask: what is the full form of ‘BN’? Normally, what do we consider the full form? Billion, right? We would not read ‘BN’ as battalion. But in BSF’s case this was a huge problem: for the BSF, ‘BN’ means battalion, not billion, and the entire context changes. So for the BSF we have created glossaries for all their abbreviations. It is always suggested that whatever abbreviations you display on your website or solution, give them to us as glossary entries so that the correct expansion can be displayed. Now, can you tell me, if ‘PS to Minister’ is translated as ‘Maananiya Vastra Mantri ji ke Niji Sachiv’, is there a difference? What would the problem be here?

Fine, let me tell you. Both are correct in themselves, but this is not the actual translation of ‘PS to Minister’. If we want ‘Maananiya’ (Honourable) in the Hindi translation, we should also have it in the English version. Glossary pairs are supposed to be equally weighted; you cannot expect the model to add or delete words on its own. So what we did here was go back to the customer and say: if you want ‘Maananiya’ in the output, please add ‘respected’ or ‘honourable’ in the input. Only then will it be balanced.

Next slide, please. This one is a request from our end. We receive a lot of glossaries that are redundant for us. By that I mean, for example, we received ‘employment and skill development’ as a glossary term together with its Hindi translation, and that translation was already the model’s own output. If you give us a glossary entry that is exactly the output of the model, you are only creating redundancy. So please avoid that, and instead give us the post-translations or transliterations that the model does not get right; that would be helpful. Next slide, please.
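A pre-submission check along these lines could filter out redundant pairs before a glossary is sent in; the helper and the stand-in model below are hypothetical, not part of Bhashini.

```python
# Drop glossary pairs whose target is exactly what the model already
# produces, since such entries only add redundancy.
def dedupe_glossary(pairs, model_translate):
    return {src: tgt for src, tgt in pairs.items() if model_translate(src) != tgt}

# Stand-in for the translation model (assumed output, romanised Hindi).
model_output = {"employment and skill development": "rojgar aur kaushal vikas"}

pairs = {
    "employment and skill development": "rojgar aur kaushal vikas",  # redundant
    "Vakil Saab Bridge": "Vakil Saab Bridge",  # genuinely needed transliteration
}

kept = dedupe_glossary(pairs, lambda s: model_output.get(s))
print(kept)  # only the non-redundant entry survives
```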

In the end, I would just like to say that language is not just words; it is identity. Let us prepare India’s languages for the future of AI, let us create glossaries, and let us have a multilingual layer in all your solutions, so that the end user actually benefits and there is real digital inclusion, accessibility, and inclusivity. Thank you. Any questions?

Audience

Can you hear me? So the translation plugin you were showing: I know it has been sponsored by the government, but can it also be used for commercial purposes, by private or public entities? Can they also use it on their websites?

Swati Sharma

So, you know, we have different kinds of collaborations, and for that kind of collaboration a different agreement altogether is created. If you want to know more about it, just go to the Bhashini Pavilion. We have stakeholders there who handle startups and private organisations, and they can help you.

Audience

And one more thing I wanted to know. You showed that for websites we can choose the default language, right? Can that be extended by region? In some use cases, someone logging in from Delhi would want to see it in Hindi, and someone coming from Maharashtra would want Marathi. Can the default language change from the region perspective?

Swati Sharma

So that’s an interesting use case. From what I have understood, you want different regions to have the website open in different default languages. As per my knowledge, I don’t see a technical challenge to it, but we will have to look at the use case at our end and see whether it can be delivered. It is a very interesting use case; we’ll look at it. Thank you.

Audience

Hi. So we are all aware that we have multilingual models, and they have been trained on a lot of words according to their domain knowledge. If we have glossaries, how do we ensure that each glossary is maintained per domain and then used for training or fine-tuning?

Swati Sharma

So glossaries are customised. For example, somebody from the Ministry of Home Affairs would not want the glossary of, let’s say, CSI. The domains are different, the contexts are different, so glossaries are customised and ingested for the client itself. We do have general glossaries that can be applied to all, but since there is no one-glossary-fits-all solution, we customise each glossary for a client and then ingest it into that client’s solution only, not into other clients’ environments.

Audience

Thanks. The glossaries you have: do you use them to fine-tune your models, or are they just available as documents to refer to during inference?

Swati Sharma

So we do that; we try to fine-tune the models as well, but there are a lot of things we have to look at while doing it, because we have to classify the glossaries into different domains and then apply fine-tuning for each domain. So it’s a long process, but we do it. Okay, thank you. If there are any other questions, I will be available at the Bhashini Pavilion here as well. I would request everybody to please come visit us, explore our solutions, explore our services. Thank you so much for being a lovely audience. Thank you.

Related Resources: Knowledge base sources related to the discussion topics (9)
Factual Notes: Claims verified against the Diplo knowledge base (5)
Confirmed (high)

“India has 1.4 billion people, but most online content is presented only in English.”

The knowledge base states that India has 1.4 billion people with diverse languages, yet most online content is only available in English, confirming the claim.

Confirmed (high)

“Bhashni (also known as Pashni/Bhajani) is described as a unified multilingual layer for India’s digital ecosystem, not just a feature but core infrastructure.”

Source S8 describes Bhashini as ‘The National Language Translation Mission of India… creating a unified multilingual layer for India’s digital ecosystem… not just providing language as a feature, but as infrastructure,’ matching the report’s description.

Confirmed (medium)

“India has 22 scheduled languages that the solution targets.”

Speaker Pradeep Kumar Verma references India’s 22 scheduled languages in the knowledge base, confirming the number targeted by the plugin.

Additional Context (medium)

“Roughly 800 million citizens are excluded because they are not fluent in English.”

The knowledge base notes that a large portion of the population is digitally illiterate (S40), providing context about language‑related exclusion but does not specify the 800 million figure.

Additional Context (low)

“The Bhashni plugin is already deployed on more than 500 websites, including state‑level portals.”

S42 mentions case studies such as the Bhasanet portal, indicating that Bhashini‑based solutions are being used on government sites, which adds context to the claim of wide deployment.

External Sources (45)
S1
ElevenLabs Voice AI Session & NCRB/NPMFireside Chat — -Shailendra Pal Singh: Role/title not explicitly mentioned, but appears to be a co-presenter/expert on Bhashini translat…
S2
Digital Democracy Leveraging the Bhashini Stack in the Parliamen — mostly from my understanding and experience with the English that has happened, in the past. Yeah. interesting points, P…
S3
Digital Democracy Leveraging the Bhashini Stack in the Parliamen — -Shailendra Pal Singh- Senior General Manager, Bhashani
S4
WS #280 the DNS Trust Horizon Safeguarding Digital Identity — – **Audience** – Individual from Senegal named Yuv (role/title not specified)
S5
Building the Workforce_ AI for Viksit Bharat 2047 — -Audience- Role/Title: Professor Charu from Indian Institute of Public Administration (one identified audience member), …
S6
Nri Collaborative Session Navigating Global Cyber Threats Via Local Practices — – **Audience** – Dr. Nazar (specific role/title not clearly mentioned)
S7
ElevenLabs Voice AI Session & NCRB/NPMFireside Chat — -Swati Sharma: Role/title not explicitly mentioned, but appears to be a key presenter/expert on Bhashini translation sol…
S8
ElevenLabs Voice AI Session & NCRB/NPMFireside Chat — Speakers:Swati Sharma Speakers:Swati Sharma, Shailendra Pal Singh
S9
Digital divides & Inclusion — However, the cost of internet access remains a significant barrier in some parts of Africa, notably in The Gambia where …
S10
Safe and Responsible AI at Scale Practical Pathways — On contextualisation, Srivastava noted that while large language models are improving at general tasks, they consistentl…
S11
https://dig.watch/event/india-ai-impact-summit-2026/elevenlabs-voice-ai-session-ncrb-npmfireside-chat — But like we can see, after every English, below every English word. there is a different language. So Haryana written in…
S12
https://app.faicon.ai/ai-impact-summit-2026/elevenlabs-voice-ai-session-ncrbnpmfireside-chat — But like we can see, after every English, below every English word. there is a different language. So Haryana written in…
S13
Digital Inclusion Through a Multilingual Internet | IGF 2023 WS #297 — While technical challenges have been largely resolved, policy coordination and incentives are vital to driving progress …
S14
WS #150 Language and inclusion – multilingual names — The experts agreed that while progress has been made, significant work remains to be done in areas like improving user e…
S15
Driving Social Good with AI_ Evaluation and Open Source at Scale — Government and standard-setting institutions face challenges in establishing proper benchmarking standards and maintaini…
S16
SETTING PRIORITIES FOR A ‘WORLD LANGUAGE’ INITIATIVE — – The term ‘discourse’ refers to extended communication, as in a story, an essay, or a dialogue. The string of sentences…
S17
WS #288 An AI Policy Research Roadmap for Evidence-Based AI Policy — Virginia Dignam: Thank you very much, Isadora. No pressure, I see. You want me to say all kinds of things. I hope that i…
S18
Government notices · GoewermentskennisGewinGs — –  Unclear institutional framework: There are many stakeholders involved in the infrastructure deployment proc…
S19
Table of contents — + Separate working environments which are incompatible or do not meet the state’s expectations are used within the publi…
S20
Table of Contents — Tutorial: The introduction of new technology to replace traditional systems can result in new systems being deployed wit…
S21
ElevenLabs Voice AI Session & NCRB/NPMFireside Chat — During the presentation, the speakers provided a live demonstration using the Bhashini website itself, showing how the p…
S22
ElevenLabs Voice AI Session & NCRB/NPMFireside Chat — Evidence:Demonstration and explanation: ‘you don’t have to apply this code on every page. The pages of the website will …
S23
Leveraging AI to Support Gender Inclusivity | IGF 2023 WS #235 — Emma Higham:Yeah, absolutely. I mean, one of the things I’m really excited about is how AI is improving our ability to d…
S24
https://app.faicon.ai/ai-impact-summit-2026/elevenlabs-voice-ai-session-ncrbnpmfireside-chat — So you don’t have to apply this code on every page. The pages of the website will automatically understand that the mult…
S25
Policy & Research — [/av_one_third][av_two_third min_height=” vertical_alignment=’av-align-top’ space=” row_boxshadow_color=” row_boxshad…
S26
Research Publication No. 2014-6 March 17, 2014 — – (1) Policy objectives : Our cases studies illustrate that the public sector can develop and implement cloud-relevant …
S27
HANANE BOUJEMI — This research will endeavor to demonstrate the importance of tackling the policy aspects of blockchain deployments in …
S28
Digital divides & Inclusion — Collaboration could involve sharing best practices, providing technical assistance, and advocating for policies that pro…
S29
Digital Inclusion Through a Multilingual Internet | IGF 2023 WS #297 — Dawit Bekele:Thanks, Susan. Unfortunately, many barriers contribute to the challenges people face in using the internet …
S30
Digital democracy and future realities | IGF 2023 WS #476 — A significant issue identified in the analysis is the digital divide, particularly affecting young men and women in gras…
S31
ElevenLabs Voice AI Session & NCRB/NPMFireside Chat — India has 1.4 billion people with diverse languages, but most online content is only available in English
S32
Digital divides & Inclusion — Another argument emphasizes the importance of promoting multilingual and locally relevant content to achieve universal a…
S33
ElevenLabs Voice AI Session & NCRB/NPMFireside Chat — Evidence:India is described as ‘a country of 1.4 billion people. More importantly, a country of 1.4 billion voices. We a…
S34
WS #144 Bridging the Digital Divide Language Inclusion As a Pillar — Audience: Thank you. This is Mohammed Abdul Haq Onu, or everybody call Onu. I’m a Secretary General of Bangladesh Intern…
S35
Digital Inclusion Through a Multilingual Internet | IGF 2023 WS #297 — Dawit Bekele:Thanks, Susan. Unfortunately, many barriers contribute to the challenges people face in using the internet …
S36
Safe and Responsible AI at Scale Practical Pathways — He notes that LLMs stumble on domain‑specific terms and suggests combining a glossary (or knowledge graph) with the mode…
S37
Safe and Responsible AI at Scale Practical Pathways — On contextualisation, Srivastava noted that while large language models are improving at general tasks, they consistentl…
S38
The rise of large language models and the question of ownership — What are large language models? Large language models (LLMs) are advanced AI systems that can understand and generate va…
S39
Large Language Models on the Web: Anticipating the challenge | IGF 2023 WS #217 — Emily Bender:Yeah, so on a slightly different topic, I want to say that all of these discussions become clearer if we st…
S40
Advocacy to Action: Engaging Policymakers on Digital Rights | IGF 2023 — A large portion of population is digitally illiterate
S41
Building the AI-Ready Future From Infrastructure to Skills — This discussion focused on building AI readiness and capabilities, featuring speakers from AMD and the Indian government…
S42
Open Forum #36 Challenges & Opportunities for a Multilingual Internet — Pradeep Kumar Verma: I think I’m audible. So I will be presenting two case studies from India. So one is on the Bhasa…
S43
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Vivek Raghavan Sarvam AI — And it’s a core technology that a country like India must understand. from the foundational level. Otherwise, we will be…
S44
Science AI & Innovation_ India–Japan Collaboration Showcase — Yeah, I think I think sort of agree to what everybody has talked about. I think with AI and the smartphone and we are on…
S45
Open Forum #82 Catalyzing Equitable AI Impact the Role of International Cooperation — Abhishek Agarwal: Thank you, Minister. Abhishek? Yeah, I kind of echo the views of Her Excellency, like the three key in…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Swati Sharma
10 arguments · 135 words per minute · 4990 words · 2203 seconds
Argument 1
Language divide hampers access to services (Swati Sharma)
EXPLANATION
Swati highlights that a large portion of India’s population cannot access digital services because most online content is only available in English. This linguistic exclusion forces citizens, such as farmers, to travel long distances to obtain assistance.
EVIDENCE
She cites the example of a farmer who had to travel 40 kilometres to find someone to help fill a simple English form for the PM Kisan Samman Nidhi scheme, illustrating the practical barrier caused by language divide [21-28]. She also notes India’s 1.4 billion voices and the predominance of English online [1-6].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Swati illustrates the real-world impact of the language divide with a farmer traveling 40 km to get help with an English form, highlighting how linguistic exclusion blocks access to government schemes [S8].
MAJOR DISCUSSION POINT
Language divide
AGREED WITH
Shailendra Pal Singh
Argument 2
One‑liner lightweight integration enabling multilingual sites (Swati Sharma)
EXPLANATION
Swati describes the Bhashni translation plugin as a single line of code that can be copied and pasted into any website to instantly provide multilingual support. The solution requires no backend overhaul or developer expertise.
EVIDENCE
She explains that the plugin works with a simple one-liner code that makes a website multilingual in minutes, without rebuilding or redesigning the site, and that anyone can implement it by copy-paste [41-45]. She reinforces this by stating that no developer or JavaScript knowledge is needed [46-50].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The plugin’s implementation is described as a single line of code that can multilingualise any website instantly, removing technical barriers for developers [S8] and [S1].
MAJOR DISCUSSION POINT
Lightweight integration
AGREED WITH
Shailendra Pal Singh
Argument 3
DBM compliance and built‑in accessibility features (Swati Sharma)
EXPLANATION
Swati states that the plugin complies with Digital Brand Management (DBM) standards, ensuring accessibility for users with visual impairments and other special needs. The compliance is embedded in the backend of the plugin.
EVIDENCE
She references the DBM compliance book and explains that the plugin includes backend accessibility features for visually impaired users, making the site accessible as per DBM guidelines [90-95].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The solution is noted to be DBM-compliant and to include accessibility features such as support for visually impaired users, meeting government digital standards [S8] and [S1].
MAJOR DISCUSSION POINT
Accessibility compliance
Argument 4
Supports source‑target language flexibility, skip‑translation class, language ordering, portal handling, dynamic‑content batching, URL redirection, speech button (Swati Sharma)
EXPLANATION
Swati outlines a suite of advanced features: the ability to translate from any source language, exclude specific sections from translation, prioritize regional languages, handle portals without page reloads, batch dynamic content to reduce API calls, redirect URLs based on language, and provide a voice input button for speech‑to‑text translation.
EVIDENCE
She describes source-target flexibility that avoids English as an intermediate language [104-110]; a skip-translation class for elements like calendars and email IDs [111-117]; language ordering to place regional languages first [119-125]; portal handling that prevents form data loss on language change [146-151]; dynamic-content batching to limit API calls and improve response time [162-168]; URL redirection to language-specific domains [176-181]; and a microphone button on the Rail Madad site that activates voice-based translation [171-174].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Technical versatility is highlighted, including source-target flexibility, skip-translation classes, regional language ordering, portal handling without data loss, dynamic-content batching, language-specific URL redirection, and a voice-input button for speech-to-text translation [S8].
MAJOR DISCUSSION POINT
Advanced technical features
Argument 5
Over 400 websites, 24 M+ inferences, 1.5 M glossaries created (Swati Sharma)
EXPLANATION
Swati provides quantitative evidence of the plugin’s adoption, noting that more than 400 websites have integrated the solution, generating over 24 million translation inferences and creating over 1.5 million glossary entries.
EVIDENCE
She reports the numbers: >400 websites integrated, ~24 million inferences, and >1.5 million glossaries produced [96-99].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Deployment metrics are cited: the plugin is integrated on more than 400 websites, has generated over 24 million translation inferences and created more than 1.5 million glossary entries [S1].
MAJOR DISCUSSION POINT
Adoption metrics
AGREED WITH
Shailendra Pal Singh
DISAGREED WITH
Shailendra Pal Singh
Argument 6
Glossaries tailor translations to domain/context and prevent errors (Swati Sharma)
EXPLANATION
Swati explains that glossaries are used to customize translations for specific domains, ensuring correct terminology and avoiding mistranslations. They address issues such as incorrect transliteration, singular‑plural mismatches, and context‑specific meanings.
EVIDENCE
She notes that glossaries enhance translation quality by preserving intended meaning, giving examples of domain-specific terms and how glossaries correct errors like wrong translations of ‘home’ or abbreviations such as ‘BN’ [100-101] and later detailed case studies of glossary-related errors and fixes [215-299].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Glossaries are described as domain-specific tools that correct mistranslations, preserve terminology, and handle context-dependent meanings, reinforcing the need for customized glossaries [S8] and [S10].
MAJOR DISCUSSION POINT
Glossary customization
Argument 7
Custom glossaries are ingested per client; models are fine‑tuned per domain (Swati Sharma)
EXPLANATION
Swati clarifies that each client receives a customized glossary that is uploaded to their specific instance of the plugin, and that the underlying AI models are fine‑tuned for different domains to improve translation accuracy.
EVIDENCE
She describes the process of customizing glossaries for each client and ingesting them into the client’s solution, and mentions that they also fine-tune models for domain-specific use cases [330-336].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Each client receives a bespoke glossary uploaded to their instance, and underlying AI models are fine-tuned for particular domains to improve accuracy [S8] and [S10].
MAJOR DISCUSSION POINT
Client‑specific glossary and model tuning
Argument 8
Expand to 36 Indian + 35 international languages, automated glossary upload, accessibility bar with TTS/screen reader (Swati Sharma)
EXPLANATION
Swati outlines the future roadmap, which includes extending language support to 36 Indian languages and 35 international languages, automating glossary uploads through an onboarding portal, and adding an accessibility bar that offers text‑to‑speech and screen‑reader capabilities.
EVIDENCE
She mentions the planned expansion to 36 Indian languages and 35 international languages, the upcoming automated glossary upload feature, and the addition of an accessibility bar with TTS/screen-reader integration [190-197].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Future roadmap includes extending support to 36 Indian and 35 international languages, automating glossary uploads via an onboarding portal, and adding an accessibility bar with text-to-speech and screen-reader capabilities [S8].
MAJOR DISCUSSION POINT
Future enhancements
Argument 9
Private entities can use the plugin via separate agreements and pavilion support (Swati Sharma)
EXPLANATION
Swati confirms that private and commercial organizations can adopt the Bhashni plugin, but they must do so under a distinct collaboration agreement and can obtain assistance at the Bhashni Pavilion.
EVIDENCE
She states that collaborations with startups and private organizations are handled through separate agreements and directs interested parties to the Bhashni Pavilion for further information [307-310].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The plugin is available to private and commercial organisations under distinct collaboration agreements, with support provided through a dedicated Bhashni Pavilion [S1] and [S8].
MAJOR DISCUSSION POINT
Commercial usage
Argument 10
Possible to set default language based on user region; requires feasibility assessment (Swati Sharma)
EXPLANATION
Swati acknowledges the request to automatically set a website’s default language based on the visitor’s region, indicating that while technically feasible, it needs further evaluation before implementation.
EVIDENCE
She responds that there is no technical barrier but the use case will be reviewed for feasibility [318-324].
MAJOR DISCUSSION POINT
Regional default language
DISAGREED WITH
Audience
Shailendra Pal Singh
3 arguments · 127 words per minute · 360 words · 169 seconds
Argument 1
Need to break language barrier across India (Shailendra Pal Singh)
EXPLANATION
Shailendra emphasizes the necessity of eliminating linguistic obstacles that prevent Indian citizens from accessing digital content in their native languages. He positions the translation plugin as a key solution to this nationwide challenge.
EVIDENCE
He opens with the statement “To break the language barrier that exists in our country” and outlines the problem of most digital content being in English or a default state language, leaving many users excluded [7-13].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The discussion emphasizes that language is tied to identity and that eliminating linguistic obstacles is essential for inclusive digital access [S8].
MAJOR DISCUSSION POINT
Breaking language barrier
AGREED WITH
Swati Sharma
Argument 2
Plugin already deployed on 500+ sites, addressing accessibility (Shailendra Pal Singh)
EXPLANATION
Shailendra cites the scale of deployment of the Pashni (Bhashni) translation plugin, indicating that it is already active on more than 500 websites, thereby improving accessibility for a large user base.
EVIDENCE
He mentions that the plugin is “already sitting on top of more than 500 websites” [9].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Deployment figures show the plugin active on over 400 websites, indicating wide adoption and improved accessibility across a large user base [S1].
MAJOR DISCUSSION POINT
Wide deployment
AGREED WITH
Swati Sharma
DISAGREED WITH
Swati Sharma
Argument 3
Plugin works across all pages without re‑adding code (Shailendra Pal Singh)
EXPLANATION
Shailendra raises a concern about whether the translation functionality persists when navigating to other pages, implying that the plugin should automatically apply across the entire site without needing repeated code insertion.
EVIDENCE
He asks whether the system will remember the selected language on subsequent pages or require re-selection each time [73-78]; Swati later confirms that the plugin automatically works on all pages without additional code [79-86].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The solution is described as framework-agnostic and capable of operating site-wide without the need for repeated code insertion on each page [S8] and [S1].
MAJOR DISCUSSION POINT
Site‑wide functionality
AGREED WITH
Swati Sharma
Audience
1 argument · 144 words per minute · 220 words · 91 seconds
Argument 1
Need for domain‑specific glossary maintenance and fine‑tuning (Audience)
EXPLANATION
The audience member asks how glossaries can be kept up‑to‑date for each domain and whether they are used to fine‑tune the translation models, highlighting concerns about ongoing maintenance and model improvement.
EVIDENCE
The question is raised about ensuring each domain’s glossary is maintained and used for model fine-tuning [326-329]; Swati replies that glossaries are customized per client and that they do perform model fine-tuning for specific domains [330-336].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Glossary maintenance and domain-specific model fine-tuning are highlighted as essential for accurate translations, with detailed discussion of these practices in the context of AI-driven translation solutions [S10] and [S8].
MAJOR DISCUSSION POINT
Glossary maintenance and model fine‑tuning
DISAGREED WITH
Swati Sharma
Agreements
Agreement Points
The language divide hampers access to services and must be broken to enable digital inclusion.
Speakers: Swati Sharma, Shailendra Pal Singh
Language divide hampers access to services (Swati Sharma); Need to break language barrier across India (Shailendra Pal Singh)
Both speakers stress that a large portion of India’s population cannot use digital services because most content is only in English, creating a practical barrier exemplified by a farmer traveling 40 km to fill an English form [21-28] and the need to eliminate this barrier [7-13].
POLICY CONTEXT (KNOWLEDGE BASE)
The language barrier is repeatedly cited as a core component of the digital divide, with IGF 2023 workshops emphasizing multilingual access as essential for inclusive digital services and policy briefs urging governments to adopt multilingual content strategies to bridge this gap [S28][S29][S30].
The translation plugin has already been deployed on a large number of websites, demonstrating wide-scale impact.
Speakers: Swati Sharma, Shailendra Pal Singh
Over 400 websites, 24 M+ inferences, 1.5 M glossaries created (Swati Sharma); Plugin already deployed on 500+ sites, addressing accessibility (Shailendra Pal Singh)
Swati cites integration on more than 400 sites with millions of inferences and glossaries [96-99], while Shailendra mentions the plugin is already sitting on top of more than 500 websites [9]. Both indicate extensive adoption.
The plugin works across all pages of a website without needing to embed code on each page.
Speakers: Shailendra Pal Singh, Swati Sharma
Plugin works across all pages without re‑adding code (Shailendra Pal Singh); One‑liner lightweight integration enabling multilingual sites (Swati Sharma)
Shailendra asks whether the language selection persists when navigating to other pages [73-78]; Swati confirms that the code automatically applies to every page, so no repeated embedding is required [79-86].
POLICY CONTEXT (KNOWLEDGE BASE)
Demonstrations during the ElevenLabs Voice AI Session confirmed that the translation plugin automatically applies site-wide without per-page code insertion, illustrating a best-practice model for seamless multilingual integration on web platforms [S21][S22].
Similar Viewpoints
Both emphasize that linguistic barriers prevent citizens from accessing digital services and that eliminating these barriers is essential for inclusive digital transformation [21-28][7-13].
Speakers: Swati Sharma, Shailendra Pal Singh
Language divide hampers access to services (Swati Sharma); Need to break language barrier across India (Shailendra Pal Singh)
Both highlight the extensive deployment of the translation plugin across hundreds of websites, indicating a broad reach and impact [96-99][9].
Speakers: Swati Sharma, Shailendra Pal Singh
Over 400 websites, 24 M+ inferences, 1.5 M glossaries created (Swati Sharma); Plugin already deployed on 500+ sites, addressing accessibility (Shailendra Pal Singh)
Both convey that the solution is technically simple to integrate and operates site‑wide without requiring developers to modify each page or backend architecture [41-45][73-78][79-86].
Speakers: Swati Sharma, Shailendra Pal Singh
One‑liner lightweight integration enabling multilingual sites (Swati Sharma); Plugin works across all pages without re‑adding code (Shailendra Pal Singh)
Both discuss the importance of maintaining domain‑specific glossaries and fine‑tuning AI models to ensure accurate, context‑aware translations [330-336][326-329].
Speakers: Swati Sharma, Audience
Custom glossaries are ingested per client; models are fine‑tuned per domain (Swati Sharma); Need for domain‑specific glossary maintenance and fine‑tuning (Audience)
Unexpected Consensus
Setting the default language of a website based on the visitor’s region.
Speakers: Audience, Swati Sharma
Need for domain‑specific glossary maintenance and fine‑tuning (Audience); Possible to set default language based on user region; requires feasibility assessment (Swati Sharma)
The audience asked whether the plugin could automatically display different default languages for users from different regions [312-317]; Swati responded that it is technically feasible though it needs further evaluation [318-324]. This alignment on a feature not previously highlighted was unexpected.
Overall Assessment

The speakers show strong consensus on the need to eliminate linguistic barriers, the wide adoption and technical simplicity of the Bhashni translation plugin, and its ability to function site‑wide without extensive development effort. Additional agreement emerges around advanced customization such as glossaries and potential regional default language settings.

High consensus on core objectives (language inclusion, scalability, ease of integration) which reinforces the viability of multilingual digital infrastructure as a public good. The limited but notable unexpected consensus on regional default language indicates emerging interest in further personalization.

Differences
Different Viewpoints
Scale of plugin deployment
Speakers: Shailendra Pal Singh, Swati Sharma
Plugin already deployed on 500+ sites, addressing accessibility (Shailendra Pal Singh); Over 400 websites, 24 M+ inferences, 1.5 M glossaries created (Swati Sharma)
Shailendra states the plugin is already sitting on more than 500 websites [9], while Swati later reports that it has been integrated on more than 400 websites [96-99], indicating a discrepancy in reported deployment numbers.
Feasibility and implementation of region‑based default language selection
Speakers: Audience, Swati Sharma
Need for domain‑specific glossary maintenance and fine‑tuning (Audience); Possible to set default language based on user region; requires feasibility assessment (Swati Sharma)
An audience member asks whether the default language can automatically change based on the visitor’s region [312-317]. Swati replies that there is no technical barrier but the use case needs to be examined before implementation [318-324], showing a difference between the expectation of immediate capability and the need for further assessment.
Unexpected Differences
Numerical inconsistency in reported deployment figures
Speakers: Shailendra Pal Singh, Swati Sharma
Plugin already deployed on 500+ sites, addressing accessibility (Shailendra Pal Singh); Over 400 websites, 24 M+ inferences, 1.5 M glossaries created (Swati Sharma)
The difference between “more than 500” [9] and “more than 400” [96] is larger than a simple rounding variance and was not addressed in the discussion, making it an unexpected point of disagreement regarding the scale of impact.
Overall Assessment

The discussion shows strong consensus on the need to eliminate language barriers and on the technical promise of the Bhashni/Pashni translation plugin. Disagreements are limited to quantitative reporting of deployment scale and the readiness of advanced features such as region‑based default language selection, which require further feasibility work.

Low – most participants align on goals and core solution; the few disagreements are technical or factual rather than ideological, implying smooth collaborative progress toward multilingual digital inclusion.

Partial Agreements
Both speakers agree that linguistic barriers prevent large segments of the Indian population from accessing digital services and that the translation plugin is a key solution to this problem [7-13][21-28].
Speakers: Swati Sharma, Shailendra Pal Singh
Need to break language barrier across India (Shailendra Pal Singh); Language divide hampers access to services (Swati Sharma)
Shailendra raises a concern about persistence of translation across pages [73-78]; Swati confirms that the plugin automatically applies to all pages without additional code [79-86], showing agreement on the site‑wide functionality while initially questioning it.
Speakers: Shailendra Pal Singh, Swati Sharma
Plugin works across all pages without re‑adding code (Shailendra Pal Singh); One‑liner lightweight integration enabling multilingual sites (Swati Sharma)
The audience asks how glossaries are maintained and used for model fine‑tuning [326-329]; Swati explains that glossaries are customized per client and that models are fine‑tuned for specific domains [330-336], indicating shared understanding of the need and the approach.
Speakers: Audience, Swati Sharma
Need for domain‑specific glossary maintenance and fine‑tuning (Audience); Custom glossaries are ingested per client; models are fine‑tuned per domain (Swati Sharma)
Takeaways
Key takeaways
India faces a massive language barrier; most digital content is only in English, limiting access for millions.
The Bhashini (Bhashni) Translation Plugin offers a lightweight, one‑line integration that can render any website in all 22 Indian scheduled languages (expanding to 36 Indian + 35 international languages).
The plugin is already widely deployed (speakers cited figures of 500+ and 400+ live websites), generating 24 M+ translation inferences and creating 1.5 M+ domain‑specific glossaries.
Technical capabilities include DBM compliance, automatic multilingual support across all pages, source‑target language flexibility, a skip‑translation class, custom language ordering, portal handling without page reload, dynamic‑content batching, URL redirection, speech‑button activation, and a planned accessibility bar with TTS/screen‑reader support.
Glossaries are essential for domain‑specific accuracy, handling abbreviations, transliterations, and contextual nuances; they are customized per client and can be used to fine‑tune models.
Future roadmap: expand language coverage, automate glossary uploads, and add an accessibility bar with text‑to‑speech and screen‑reader features.
Private and commercial entities can use the plugin through separate agreements; support is available at the Bhashni Pavilion.
Regional default‑language customization is conceptually feasible but requires further feasibility assessment.
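As a rough illustration of the "one‑line integration" and skip‑translation class described in the session, an embed might look like the sketch below. The script URL and the class name are assumptions for illustration only, not the actual Bhashini plugin snippet.

```html
<!-- Hypothetical embed: a single script tag added to any page,
     with no backend changes. The URL is a placeholder. -->
<script src="https://example.org/bhashini-translate-plugin.js" defer></script>

<!-- Elements carrying the (assumed) skip class would be left
     untranslated, e.g. brand names or legal text. -->
<span class="skip-translation">Acme Pvt. Ltd.</span>
```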
Resolutions and action items
Swati Sharma will direct interested private/commercial stakeholders to the Bhashni Pavilion for partnership agreements.
The team will evaluate the feasibility of setting default languages based on user region and report back.
Roadmap items will be pursued: expand language support, implement automated glossary upload, and develop the accessibility bar with TTS/screen‑reader.
Continue fine‑tuning models per domain using client‑specific glossaries.
Unresolved issues
Exact technical implementation and timeline for region‑based default language selection remain undecided.
Details of licensing, cost structure, and contractual terms for commercial/private use were not clarified.
The ongoing process for maintaining and updating domain‑specific glossaries over time was not fully defined.
No concrete decision on how quickly the proposed future enhancements (language expansion, automated glossary, accessibility bar) will be delivered.
Suggested compromises
Providing a one‑liner, lightweight plugin avoids the need for full website redevelopment while delivering multilingual capability.
Inclusion of a "skip‑translation" CSS class allows selective translation, balancing full multilingual coverage with content that must remain unchanged.
Allowing portals to display a limited set of languages (e.g., 3‑4) while encouraging the full 22‑language set accommodates UI simplicity and inclusivity.
Supporting direct source‑to‑target translation eliminates the need for English as an intermediate language, simplifying workflows for sites originally built in regional languages.
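The session also mentioned batching dynamic content to reduce API calls. A minimal sketch of that idea is below; the class and function names are illustrative assumptions, not the plugin's actual API.

```javascript
// Hypothetical sketch: collect translation requests arriving within a
// short window and send them as one batched API call instead of many.
class TranslationBatcher {
  constructor(translateBatch, delayMs = 50) {
    this.translateBatch = translateBatch; // assumed: (texts) => Promise<string[]>
    this.delayMs = delayMs;
    this.pending = []; // queued { text, resolve } entries
    this.timer = null;
  }

  // Queue one text; all texts queued in the same window share one call.
  translate(text) {
    return new Promise((resolve) => {
      this.pending.push({ text, resolve });
      if (!this.timer) {
        this.timer = setTimeout(() => this.flush(), this.delayMs);
      }
    });
  }

  // Send the accumulated batch and resolve each caller's promise.
  async flush() {
    const batch = this.pending;
    this.pending = [];
    this.timer = null;
    const results = await this.translateBatch(batch.map((p) => p.text));
    batch.forEach((p, i) => p.resolve(results[i]));
  }
}
```

On a portal with rapidly changing content (the banking and hotel examples from the session), this kind of debounced batching trades a small, bounded delay for far fewer round trips to the translation API.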
Thought Provoking Comments
We are a country of 1.4 billion people… but whenever we go online, everything is available only in one language, mainly English.
Frames the fundamental problem of linguistic exclusion in India, turning a statistical fact into a compelling narrative that underpins the entire discussion.
Sets the agenda for the session, prompting all subsequent speakers to position their solutions as responses to this systemic barrier.
Speaker: Swati Sharma
A farmer had to travel 40 km just to find someone who could help him fill the PM Kisan Samman Nidhi form because the form was only in English.
Provides a concrete, relatable example that illustrates the real‑world consequences of the language divide, moving the conversation from abstract statistics to human impact.
Deepens audience empathy and validates the urgency of the proposed technology, leading to more detailed questions about implementation and reach.
Speaker: Swati Sharma
Imagine a scenario: I’m from the north, living in Maharashtra. I don’t know English or Marathi, yet I need to understand state policies and guidelines in my own language.
Expands the problem space by highlighting intra‑national mobility and multilingual needs, showing that the issue isn’t just rural‑urban but also cross‑regional.
Broadens the discussion to cover multilingual support for migrants and travelers, prompting the demo of the plugin’s ability to translate any website instantly.
Speaker: Shailendra Pal Singh
We are not just providing language as a feature. We are providing language as an infrastructure, the foundation for digital inclusion.
Elevates the solution from a product add‑on to a systemic layer, reframing the conversation around policy and long‑term ecosystem change.
Shifts the tone from a technical demo to a strategic vision, encouraging participants to think about scalability, standards, and government adoption.
Speaker: Swati Sharma
Users are not looking for literal accuracy; they need to understand the intent and context. That’s why we built glossaries to control domain‑specific terminology.
Introduces the nuanced idea that translation quality is context‑driven, not merely word‑for‑word, and that glossaries are essential for cultural and domain fidelity.
Triggers a deeper technical discussion about glossaries, leading to audience questions on how they are managed, fine‑tuned, and applied across domains.
Speaker: Swati Sharma
Can the plugin be used by private or commercial entities, not just government bodies?
Challenges the implicit assumption that the solution is solely a public‑sector tool, opening the conversation to market adoption and sustainability.
Prompted Swati to explain the separate collaboration agreements, signaling openness to commercial use and expanding the potential user base.
Speaker: Audience (question)
Can the default language change automatically based on the visitor’s region, e.g., Hindi for Delhi and Marathi for Maharashtra?
Raises a sophisticated use‑case about dynamic, location‑aware language selection, pushing the product’s capabilities beyond static settings.
Swati’s affirmative yet cautious response highlighted future development possibilities and encouraged participants to envision more personalized multilingual experiences.
Speaker: Audience (question)
We try to fine‑tune models with domain‑specific glossaries, but it requires careful classification and a long process.
Acknowledges the technical complexity behind customizing AI models for diverse Indian languages, revealing the depth of work beyond the plug‑and‑play demo.
Adds credibility to the solution, informs the audience about realistic timelines, and sets expectations for collaborative effort in model refinement.
Speaker: Swati Sharma
Overall Assessment

The discussion was anchored by Swati’s framing of India’s massive linguistic divide, which established a compelling problem statement. Shailendra’s vivid scenario and the farmer anecdote turned abstract statistics into human stories, driving urgency. By positioning language as infrastructure and emphasizing context‑driven translation through glossaries, Swati shifted the conversation from a simple plugin demo to a strategic, ecosystem‑level solution. Audience questions about commercial applicability and region‑based defaults introduced new dimensions—market viability and personalization—prompting the speakers to acknowledge future work and broader adoption pathways. Collectively, these pivotal comments steered the dialogue from problem identification to technical depth, strategic vision, and practical expansion, shaping a nuanced, forward‑looking conversation.

Follow-up Questions
Can the Bhashini translation plugin be used for commercial or private sector websites, not just government-sponsored projects?
Clarifies licensing and applicability for private entities, expanding the tool’s reach beyond public sector.
Speaker: Audience
Can the default language of a website be automatically set based on the visitor’s region or location (e.g., Hindi for Delhi users, Marathi for Maharashtra users)?
Important for personalized user experience and regional relevance, requiring geolocation or user‑profile integration.
Speaker: Audience
How can domain‑specific glossaries be consistently maintained, updated, and incorporated into model training or fine‑tuning?
Ensures accurate terminology across varied sectors (e.g., health, agriculture) and improves translation quality.
Speaker: Audience
Are glossaries currently used to fine‑tune the translation models, or are they only applied as post‑processing lookup tables during inference?
Understanding this impacts model improvement strategies and resource allocation for customization.
Speaker: Audience
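The question above distinguishes fine-tuning from a post-processing lookup. A minimal sketch of the lookup-table mode, under the assumption that glossary entries are exact term-to-term mappings applied after machine translation, could look like this (function name and entries are illustrative):

```javascript
// Hypothetical glossary post-processing: replace domain terms in the
// machine-translated text with client-approved renderings.
function applyGlossary(translated, glossary) {
  // Apply longer terms first so multi-word entries take priority over
  // any shorter terms they contain.
  const terms = Object.keys(glossary).sort((a, b) => b.length - a.length);
  let out = translated;
  for (const term of terms) {
    out = out.split(term).join(glossary[term]);
  }
  return out;
}
```

A real system would additionally need word-boundary awareness and per-language morphology; this sketch only shows why a glossary can improve contextual accuracy without retraining the model.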
Research needed on scaling the plugin to support 36 additional Indian languages and 35 international languages while maintaining translation quality and performance.
Expanding language coverage raises challenges in data collection, model training, and evaluation.
Speaker: Swati Sharma
Develop and evaluate an automated glossary ingestion workflow via an onboarding portal to replace manual email‑based processes.
Automation could speed up deployments and reduce errors, but requires robust validation and version control.
Speaker: Swati Sharma
Integrate an accessibility bar offering text‑to‑speech and screen‑reader support into the plugin; assess usability and compliance impact.
Adds a layer of accessibility, aligning with inclusive design standards and potentially reaching users with visual impairments.
Speaker: Swati Sharma
Optimize translation of dynamic content (e.g., rapidly changing data on banking or hotel sites) by batching API calls; study performance trade‑offs.
Dynamic content can cause latency; research is needed to balance real‑time translation with system load.
Speaker: Swati Sharma
Improve automatic detection and skipping of mixed‑language segments to avoid incorrect translations; investigate detection algorithms.
Ensures that already‑translated or multilingual text isn’t re‑translated, preserving meaning and reducing errors.
Speaker: Swati Sharma
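For the mixed-language detection question above, one simple heuristic is to skip segments that already contain characters of the target script. The sketch below assumes a Devanagari target (Hindi/Marathi); a production detector would need per-language ranges and smarter segment-level logic.

```javascript
// Hypothetical heuristic: treat text containing any Devanagari
// character (U+0900–U+097F) as already translated, so the plugin
// can skip it instead of re-translating and distorting it.
function alreadyInTargetScript(text, range = /[\u0900-\u097F]/) {
  return range.test(text);
}
```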

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Cristiano Amon


Session at a glance: Summary, keypoints, and speakers overview

Summary

The session opened with Speaker 1 highlighting Cisco’s view that AI will be agentic and physical, but that the future will be built by humans who “confidently put AI to use” [1-2]. He then introduced Qualcomm CEO Cristiano Amon as a leader shaping wireless technology and edge AI [4-8].


Amon described the “next chapter of AI” as a shift from chat-based interfaces to pervasive agents that understand vision, speech and intent, fundamentally changing the human-computer interface [15-18][24]. He argued that smartphones, currently the central device, will be superseded by agents that can operate across phones, glasses, wearables and other form factors, making the agent the new platform core [25-28][35-38]. This transition creates a new value chain because agents can act autonomously on the internet, bypassing traditional OS and app constraints [29-34].


Amon emphasized that AI workloads will be distributed between cloud, near-edge and on-device, with each location handling tasks that require either instant response or broader context, rendering the cloud-vs-edge debate moot [65-71][74-77]. He illustrated the need for agents to be fast, relevant and friction-free, using smart-glass scenarios where visual, translation and payment requests must be answered instantly [78-90][92-95].


Looking ahead, Amon linked the rise of agents to the upcoming 6G era, where AI will be embedded in the telecom network itself, providing large-scale sensing and context for services such as autonomous driving, drone detection and industrial automation [127-134][136-143]. He noted that this AI-enabled network will generate massive private data streams that far exceed publicly available internet data, further enriching personalized models [96-99]. Amon highlighted India’s unique position, citing its high mobile data consumption and manufacturing capabilities as a catalyst for adopting AI-driven devices and services across sectors like smart manufacturing, health, education and agriculture [149-166][168-170].


Qualcomm positions itself as a “unique semiconductor company” capable of delivering chips from sub-2 mW wearables to 2 kW data-center processors, thereby supporting the entire AI ecosystem [103-105]. The company stresses its role is to enable partners and industries rather than own all innovation, aiming to democratize AI for global welfare [172-174]. In sum, the discussion presented a vision where agentic AI, distributed across devices and a 6G-powered network, will transform every industry, with Qualcomm and India poised to drive that transformation [106-108][150-151].


Keypoints


AI agents will become the primary human-computer interface, supplanting traditional OSs and apps and spanning many form-factors (phones, glasses, wearables). Amon explains that the “smartphone today is at the center… but now that’s going to get replaced by an agent” and that “the agent is going to be at the very center” with access from any device [24-27][30-36][37-40][106-108].


The future of AI computation is a seamless blend of cloud, edge, and on-device processing, rendering the cloud-vs-edge debate moot. He notes the “big debate… about cloud and edge… actually it does not matter,” and describes how “intelligence is going to be incredibly distributed across the cloud, across the near edge, the network… and on-device” to meet latency and context needs [65-69][70-78][79-84][90-94].


6G networks will embed AI at scale, turning the telecom infrastructure into a sensing and decision-making platform that feeds agents with contextual data. The speaker outlines that “6G is going to provide… faster speed, lower latency… but the biggest part… is AI… the network will sense everything around you” and will support services such as autonomous driving, drone detection, and industry-wide AI-enabled applications [127-133][134-141][145-146].


Qualcomm positions itself as the hardware and software catalyst for this AI-driven transformation, leveraging its ability to produce chips from sub-2 mW wearables to 2 kW data-center processors. He highlights that Qualcomm is “a very unique semiconductor company… working on chips from sub-2 milliwatts to… 2,000 watts” and that “the agents are going to be at the center… replacing a lot of the OSs and applications” [103-106][104-105].


India is presented as a key market and innovation hub where AI-enabled devices, 6G, and new industry use-cases (smart manufacturing, cities, health, education, agriculture) can drive massive economic and social impact. Amon points to “the incredible opportunity for India” citing its high mobile data consumption and linking AI to “smart manufacturing… smart cities… healthcare… education… agriculture” and the broader goal of democratizing technology [148-166][167-171].


Overall purpose:


The discussion serves to articulate Qualcomm’s strategic vision for the “next chapter of AI,” emphasizing the rise of agentic AI, the convergence of edge and cloud computing, the role of upcoming 6G networks, and the company’s unique capability to supply the required hardware. It also aims to rally stakeholders-especially in India-around the economic and societal opportunities that this AI-centric future will unlock.


Tone:


The speaker maintains an upbeat, confident, and forward-looking tone throughout, repeatedly using words like “excited,” “incredible,” and “opportunity.” The tone remains consistently optimistic and visionary from the opening remarks through the technical exposition and concluding call to action, without any noticeable shift to a more cautionary or critical stance.


Speakers

Speaker 1


– Role/Title: Event moderator/host (introduces the keynote speaker) [S1][S3]


– Area of Expertise:


Cristiano Amon


– Role/Title: President and Chief Executive Officer, Qualcomm [S5][S6]


– Area of Expertise: Wireless technology, intelligent computing, artificial intelligence, semiconductor industry


Additional speakers:


(none)


Full session report: Comprehensive analysis and detailed insights

Speaker 1 opened the session by framing Cisco’s view that artificial intelligence is evolving into both “agentic” and “physical” forms, but emphasized that the future will be built by people who can “confidently put AI to use” rather than by AI itself [1-2][2-3]. He then introduced Qualcomm chief executive Cristiano Amon, describing him as a leader who has been “at the forefront of shaping the future of wireless technology and intelligent computing” and noting Qualcomm’s role in delivering AI that runs not only in the cloud but also “in your pocket, in your car, in the factory floors” [4-8].


Amon declared that the industry is entering “the next chapter of AI”, a phase in which artificial intelligence moves beyond isolated chat-box interactions to become a pervasive, context-aware agent that can interpret vision, speech and intent [12-24]. He linked this shift to a fundamental change in the human-computer interface: instead of learning keyboards or touch gestures, users will interact with systems that “understand what we see, what we hear, what we say, what we write” [24-36]. Amon also highlighted the rise of “physical AI”, where models are trained on sensor-level data (e.g., radar, inertial, environmental sensors) and embedded across all classes of devices, from wearables to data-center chips [70-73].


Central to this vision is the claim that the smartphone, today the “centre of everything we do”, will be superseded by an “agent” that serves as the new platform for user intent [25-38]. Because the agent can operate across phones, smart glasses, wearables or any pendant, the traditional value chain built around operating systems and app stores will be displaced; the agent will “go to the internet and do things… you’re no longer bound by constructs of your hardware or your apps” [29-34][106-108]. This re-architecting creates a fresh ecosystem in which developers target the agent rather than a specific OS.


Amon further argued that the long-standing cloud-versus-edge debate is misplaced, insisting that “it does not matter” whether a task runs in the cloud or at the edge because intelligence will be “incredibly distributed across the cloud, across the near edge, the network… and on-device” to satisfy latency and contextual needs [65-84]. He illustrated the model with a smart-glass scenario: a user might ask the glasses to identify a person, translate speech or complete a payment, and the system must respond “fast… relevant… with no friction”, with some processing happening locally and the rest transparently in the cloud [78-95].


Looking ahead, Amon warned that the data generated by such pervasive sensing-continuous visual streams from smart glasses, radar feeds, etc.-will dwarf the publicly available internet data that currently trains models, providing “an incredible amount of data” for personalised AI [96-99]. He also noted that the same AI-driven shift is already reshaping robotics and industrial automation, mirroring the earlier industrial revolution [95-98].


Qualcomm positions itself as uniquely equipped to power this transformation, boasting a semiconductor portfolio that spans “sub-2 milliwatts… to a smart earbud… to 2,000 watts per chip on the data centre” [103-106]. The company stresses its role as an enabler rather than a sole innovator, stating that “it is not the job of one company to be responsible for all the innovation” and that its aim is to “democratise” AI for the benefit of global welfare [170-174].


While discussing the broader context, Amon connected the rise of agents to the forthcoming 6G era. Although 6G will deliver higher speeds, lower latency and broader coverage, its defining characteristic will be the integration of AI directly into the telecom network, turning it into a large-scale sensing platform that can map environments, support autonomous-driving, drone detection and other industrial services [127-146].


India was highlighted as a particularly fertile market for this AI-driven shift. Amon noted the country’s “incredible opportunity” given its status as “one of the largest data consumption per user in mobile devices in the world” and its emerging position as a global manufacturing hub [148-164]. He linked the AI vision to the Summit’s goals of large-scale industrialisation, citing smart manufacturing, smart-city infrastructure, AI-enhanced healthcare, personalised education and precision agriculture as flagship use-cases [150-158].


Both speakers stress a human-centric view of AI. Speaker 1 emphasizes that people will build the future, and Amon describes agents as powerful tools that serve human intent, reinforcing the principle that AI must amplify capability rather than replace humanity [2][170-174].


The presentation concluded with forward-looking questions about how the agent-centric model will be operationalised for developers and users; the technical challenges of heterogeneous AI workload distribution; privacy and security safeguards for continuous personal data collection; standards needed for an AI-enabled 6G network; and concrete road-maps for India’s sectoral adoption of AI agents. These queries underscore the need for collaborative standards-setting, robust governance frameworks and clear commercial pathways to realise the vision articulated throughout the session.


Session transcript
Complete transcript of the session
Speaker 1

That was really an interesting session by the CEO of Cisco, highlighting the role of agentic AI, as well as physical AI and the current scenario. And the last line was really an assuring line, saying that the future will not be built by AI, but by humans who can confidently put AI to use. Well, ladies and gentlemen, moving on. Now it’s my honor to introduce a leader who’s been at the forefront of shaping the future of wireless technology and intelligent computing. Mr. Cristiano Amon is the president and chief executive officer of Qualcomm, a company that has defined and continues to redefine the global compute, connectivity and AI landscape. And well, AI doesn’t just live in the cloud; it runs in your pocket, in your car, on the factory floors.

And Mr. Amon is leading Qualcomm’s push to bring powerful AI processing to the edge, enabling billions of devices to think locally and act intelligently. Ladies and gentlemen, it’s my pleasure to invite Mr. Amon, President and CEO of Qualcomm, to the stage. Please give a round of applause.

Cristiano Amon

Good afternoon, everyone. Very, very happy and privileged to be here. I’m incredibly excited and energized about what’s happening here in India with AI, and I think what’s happening with AI in general. What I’d like to talk to you about today is the next chapter of AI. And this is something that’s very near and dear to Qualcomm. We’ve been talking about this because I think we’re really entering now the next phase of AI. As AI gets developed, it’s going to be part of everything that we do. And especially… the interaction that we have with computers and with digital… So intelligence is now shifting from something that we kind of started and we all experienced, going to, you know, a chatbot and asking questions, into something that is going to be all around us and everywhere all the time, especially with the devices.

I actually loved the presentation right before, from my friend Jitu from Cisco, when he talked about the traffic change from chatbots to agents. And this is important. You know, I’ve been often talking about this, how we should be thinking about AI in a much broader sense. And it’s easier for a company like Qualcomm to talk about this, because we build a lot of the chips that go into devices where the humans are. So as you create AI in the data center, and you train and create those models on all this data, and you deploy this, you’re starting to see that this gets utilized in different ways. One fundamental thing that AI is doing for us:

It is changing the human-computer interface, because we don’t have to now learn how to use a computer. You know, I’ve been often talking about this in different presentations: we learn how to use an S2 keyboard, and we still use that on a laptop; then we learned to touch a screen. But now the AI understands what we see, what we hear, what we say, what we write. So in itself it’s changing computers, it’s changing the devices we interact with, and it’s becoming a pervasive technology that is going to be everywhere. And I think that’s the mission of Qualcomm: when I think about what we’re going to do, it is the same as what we did with mobile communications and the creation of the computer that fits in the palm of your hand; it is the ability to take that intelligence everywhere. So we’re going to be creating a number of important shifts in the industry, and I want to start talking about the mobile industry. We have had the privilege as a company to be part of every single transition of wireless technologies, and today I’m going to talk about the next one that is coming as well. What we saw with the transition of wireless technology is that, fundamentally, at every generation of wireless you saw big shifts, not only in devices but in companies, because of the transition. For example, when you went from the ability to have a phone that you carry with you, all the way to connecting the phone to the internet, all of a sudden that phone became a computer, and it started to drive the future of the internet. A country like India leapfrogged, I think, the internet and went straight to the mobile internet. And that’s going to be true again when you think about AI: in the mobile ecosystem, for example, AI is going to fundamentally change how we think about the mobile device. All of you today, and me included, I think we look at our smartphone as our inseparable device, where most of our digital life is.

And the smartphone today is at the center of everything that we do. But now that’s going to get replaced by an agent. Now, when you think about the entire value chain that got created, for example, for the mobile industry, there’s an enormous amount of value in things like OSs and application stores. And that becomes like the platform: when you’re going to develop an application, you’re going to do different things on the platform. An agent now understands human intentions because, you know, you just need to tell him what you want. Or he’s going to see what you see and make a decision for you, assuming you will authorize it. When that happens, that’s where the value is, because then the agent is free.

It can go to the internet and do things. It can go to your phone and do things. And you’re no longer bound by the constructs of your hardware or your apps. So as a result, we expect that AI is going to drive a fundamental shift in the mobile industry, where the agent is going to be at the very center. And as the agent is at the very center, everything surrounds the agent. You can access the agent from your mobile phone, but you can also access the agent from your glasses, or from a pendant, or from anything that you wear. And I think we’re going to look at the mobile ecosystem, not only as a single-device experience; you’re going to connect to agents across multiple types of devices.

And I think that’s incredibly exciting. And that’s not only unique to what you’re going to see in consumers. That’s going to happen also with things, because you can also create AI that’s going to get trained on different things: on physical signals, like physical AI, on sensor data, and you’re going to deploy that in every computer. So what’s exciting about AI is that it’s going to very quickly evolve from something where you go to a browser and you ask a question. And I think, as my colleague from Cisco said, it got trained on all the publicly available data on the Internet. You’re now going to go to a different type of AI experience that’s going to be the fundamental software that is going to run in all the devices around us, and change how you’re going to have interaction with the devices.

So, as we think about this future, I just want to give you an example. What we saw across the industry is that workloads or use cases have shifted. Devices didn’t go anywhere, but their workloads shifted. We used to do a lot of things in the early days of the Internet on your laptop. Take, for example, e-commerce: you would do it on your laptop. Now, most of the e-commerce in the world is done on a phone. Tomorrow, or it could be as early as, you know, the end of this year, as you start to see the proliferation of glasses. If you have glasses that have agents, are connected to the Internet, and have a camera, those smart glasses see what you see.

You can just look at something and say, I’d like to buy this. What is, you know, can you check this? For example, check this on Flipkart. Just buy it for me. I’d like to buy this. Integration of payment systems. You got a bill? Say, pay this, notify me when I’m done, and so forth. So I think we’re going to see this fundamental change of devices. But that’s also going to be true about the revolution that’s happening in robotics, and the revolution that’s happening in industrials. So that’s an incredible opportunity. And we have been incredibly focused as a company to basically drive that future of computing. There’s also a big debate, which I believe is the wrong way to look into that, which is about cloud and edge.

There’s a lot of debate about, oh, this is going to be running on the cloud, this is going to be running on the edge. And actually, it does not matter. Think about your device today. Your smartphone today has an incredible amount of processing power, and there’s a number of different things that run in your smartphone. If you put it on airplane mode, you probably don’t use it. You just put it back and wait until you get connectivity again. It’s the most cloud-connected device, because those things work as one. And you’re going to have now intelligence that’s going to be incredibly distributed across the cloud, across the near edge, the network in itself, and on the device.

And it’s all going to work seamlessly. There are going to be things that you’re going to be able to do on the device because they require an instant response, or require unique context, unique information that is relevant to you. Some things are going to be done on the cloud, and they’re both going to be growing, and it’s going to be transforming how we think about computers. So I’d like to provide a simple description, I think. Let’s say we are all using agents, and you’re going to pick the agents that you like. For the agents to be useful, they need to be fast, they need to be relevant for you. Let’s go back to the example I provided on the glasses.

And you have those smart glasses, and you’re walking around, and you have a camera. Then all of a sudden you see somebody, and you ask the glasses, like it’s your friend next to you: who is this person? And you want to get a response: this is so-and-so. Or you’re going to say, can you translate this for me? What is this? Can you pay this for me? You want this thing to be seamless. Seamless is no friction. So certain things are going to be done on your device, and other things are going to be done on the cloud. It’s going to be completely transparent to you. But the interesting thing is, for those agents to be very useful, they need to be contextually aware of what is relevant to you.

So over time, the agent I’m going to be using, the agent you’re going to be using, they need to be relevant to me. So you’re going to have a lot of things that are going to be processed and understood about you. So much so that I believe that, in the end game, and I think it was said in the prior presentation from Cisco, all this data that is publicly available on the Internet that you train models on is a fraction of the data that is going to be generated. If you have, for example, glasses with a camera that see everything that you see, try to annotate the image, get information about the image and the context, read what you read.

And so forth, that is an incredible amount of data, and that’s going to be providing a lot of important context for those models that are going to be relevant to you. That is the future, and it’s an incredible transformation. It’s going to transform every industry. No industry is immune to this. And I think what we’re doing at Qualcomm is really creating the future hardware and software that will help enable this future across all the devices. We’re a very unique semiconductor company. I think we’re probably one of the few companies that can be working on chips from sub-2 milliwatts, to a smart earbud that you’re going to wear, all the way to now 2,000 watts per chip on the data center.

But I think that’s the incredible future that AI is going to transform every single computer. And the agents are going to be at the center of the experience. It’s going to replace a lot of the OSs and applications. And that is the new future of technology, including the future of mobility. And that’s why we’re incredibly excited about this. And with that, I want to talk about something that is happening, which is about the next generation of wireless technologies. I would like to provide an example from the past. When you think about telecom networks, and I think we’re probably one of the, you know, American telecom companies that really focus on the evolution of cellular technology.

When you think about the evolution of this sector, when this all started, it was about providing a telephone, which I think for all of us was an incredible thing. You have a twisted copper pair to get to your home. You pick up, you get a dial tone, you dial, and eventually you could dial anybody in the world with a telephone. Even how cellular started was about making sure all of us had the ability to carry a telephone. That was 2G: that you can call everyone. That’s different today. Now you have a very high performance broadband network for data. Voice is just one application among the many applications that you do with the network. It fundamentally changed the nature of the infrastructure.

The equipment was different. The use case is different. We’re heading to the next big transformation of the telecom sector. So 6G is going to provide an evolution of connectivity: faster speed, lower latency, higher coverage. But that’s not the whole story. That’s just a piece of the story; it just continues to improve the connectivity. The biggest part of 6G is that AI, like I said before, is now going to come to the telecom network. And that becomes a large-scale 6G AI network that is processing and getting trained on all of the signals that happen on the network, and providing new capabilities. One of the biggest features of 6G is that the network is a sensing network at scale.

I’m going to give an example. The network not only will provide connectivity between your device and the Internet, but will sense everything that’s around you. It will use techniques that you see today in autonomous driving cars, like radars, as an example, to detect your environment. It’s going to provide a map of everything that is happening, at scale. And you’re going to have completely different types of services for different industries. It will provide context for your agents. Very important. And the network will have that role. It will provide traffic management systems and some of the use cases that are going to be part of full self-driving cars. It will do drone detection and manage the traffic control.

Off the economy, there’s going to be an aerial in the wide area network, and much more, because AI is also going to the network. It’s going to be one of the biggest transitions I think we have, as big as going from voice to data, and it’s all going to be part of this future of AI. And I just want to now make another parallel, I think, to the presentation from my colleague from Cisco. It puts a fine point on the network that needs to be built, the capability of the infrastructure, the security and trust, but that is an incredible future with technology. And as I get to the end of the presentation, I want to highlight that India has an incredible opportunity with this transformation.

We have seen that those big shifts in technology create opportunity and change players. They changed, I think, the role of different countries in what they provide globally. It’s a global scale for the technology, and that’s an incredible opportunity for India. I look at what happened in mobile in India: one of the largest data consumptions per user in mobile devices in the world is in India. The whole Internet is mobile. When you think about the potential and all of the things that I just discussed, about how AI is going to change everything, create new devices, new experiences, new services, that becomes a massive opportunity. And when I look at the ambitions that were set by the AI Summit, I’m going to provide just some examples.

Those are just examples. It can be much broader, but I just want to connect with some of the ambitions of the Summit. There is a process of jumping into a large-scale industrialization. India is becoming a global manufacturing hub as well. And with AI, you go, from the very beginning, with smart manufacturing and automation, with incredible change that is happening in this sector, enabled by those technologies. Same thing with smart cities, the ability to continue to evolve the infrastructure, the ability to use AI to increase the scale, the reach, the access for healthcare. How you change education. Those are incredibly powerful learning tools. The ability to actually use some of those technologies to empower people with information, and you’re going to have an ongoing learning experience.

Think about those agents with you all the time, answering questions, telling you how to do things, especially when you think of the context, for example, of those new devices such as smart glasses. And it can fundamentally change industries such as, for example, agriculture. Right. Just a few examples of the potential of connecting this technology with everything, I think, that is going on in India. It’s an incredible and exciting future enabled by AI. And really, it’s about meeting the ambition of democratizing this technology for everyone and actually having an important role in increasing the global welfare. And, you know, as a company that has always been focused on enabling our partners and other industries to innovate, I think in the history of Qualcomm, we never believed it is the job of one company to be responsible for all the innovation.

It’s really to enable many industries and partners. We’re incredibly excited to play a very small part in this mission. Thank you very much for the opportunity to talk with all of you and

Related Resources
Knowledge base sources related to the discussion topics (11)
Factual Notes
Claims verified against the Diplo knowledge base (6)
Confirmed (high)

“Cisco’s view that AI is evolving into “agentic” and “physical” forms and that the future will be built by humans who can confidently put AI to use, not by AI itself.”

The knowledge base notes that the Cisco session highlighted agentic and physical AI and concluded with the line that the future will not be built by AI but by humans who can confidently put AI to use [S6].

Additional Context (medium)

“Qualcomm’s role is to deliver AI that runs in the cloud, in your pocket, in your car, and on factory floors.”

A source describes Qualcomm as an enabler that empowers partners and industries to innovate with AI across many domains, supporting the claim of broad AI deployment [S4].

Confirmed (high)

“The industry is entering a “next chapter of AI” where AI moves beyond isolated chat‑box interactions to become a pervasive, context‑aware agent that can interpret vision, speech and intent.”

Amon’s prediction that AI will fundamentally change how we interact with computers, enabling new interfaces and applications, aligns with this description of a next-chapter, context-aware AI [S5].

Confirmed (medium)

“The rise of “physical AI”, with models trained on sensor‑level data (radar, inertial, environmental) and embedded across wearables to data‑center chips.”

The session explicitly highlighted “physical AI” as a key trend, matching the report’s description of sensor-level model training and widespread embedding [S6].

Additional Context (medium)

“Future user interfaces will shift from learning keyboards or touch gestures to systems that understand what we see, hear, say, and write.”

A workshop note describes a paradigm shift toward dynamically created interfaces tailored to user needs and personas, providing nuance to the claim about new multimodal interaction models [S34].

Additional Context (low)

“The transition toward an “agentic web” where AI agents become the primary platform for user intent, reducing reliance on traditional OS and app stores.”

Discussion of the “agentic web” notes that AI will increasingly serve as the core mechanism for delivering services, complementing the report’s vision of agents superseding conventional platforms [S44].

External Sources (45)
S1
Keynote-Martin Schroeter — -Speaker 1: Role/Title: Not specified, Area of expertise: Not specified (appears to be an event moderator or host introd…
S2
Responsible AI for Children Safe Playful and Empowering Learning — -Speaker 1: Role/title not specified – appears to be a student or child participant in educational videos/demonstrations…
S3
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Vijay Shekar Sharma Paytm — -Speaker 1: Role/Title: Not mentioned, Area of expertise: Not mentioned (appears to be an event host or moderator introd…
S4
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Cristiano Amon — -Announcer: Role/Title: Event announcer/moderator; Areas of expertise: Not mentioned And Mr. Amon is leading Qualcomm’s…
S5
Lift-off for Tech Interdependence? / DAVOS 2025 — – Cristiano Amon: President and CEO at Qualcomm Cristiano Amon: What I’ll say is, technology is moving very, very fast…
S6
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Cristiano Amon — This discussion features Cristiano Amon, President and CEO of Qualcomm, presenting his vision for the next chapter of ar…
S7
Day 0 Event #173 Building Ethical AI: Policy Tool for Human Centric and Responsible AI Governance — Alaa Abdulaal: So hello, everyone. I think I was honored to join the session. And I have seen a lot of amazing conver…
S8
AI Governance Dialogue: Presidential address — ### Human-Centered Development H.E. Mr. Alar Karis: Honourable leaders, excellencies, distinguished delegates. It is tr…
S9
WS #110 AI Innovation Responsible Development Ethical Imperatives — Dr Zhang Xiao: Thank you everyone. I’m glad to be involved in this interesting discussion and I have three points to sha…
S10
HETEROGENEOUS COMPUTE FOR DEMOCRATIZING ACCESS TO AI — Distribute compute requirements across devices, edge cloud, and data centers rather than concentrating everything in cen…
S11
Telecommunications infrastructure — Network operators increasingly rely on AI for a wide range of tasks, fromnetwork planning(e.g. using algorithms to ident…
S12
The Global Power Shift India’s Rise in AI & Semiconductors — Joining us is Professor Vivek Kumar Singh, Senior… advisor on science and technology at NITI IO. Professor Singh plays…
S13
From KW to GW Scaling the Infrastructure of the Global AI Economy — This honest assessment of India’s position provides crucial context for understanding the scale of opportunity – if Indi…
S14
Inclusive AI Starts with People Not Just Algorithms — Combine human intelligence with artificial intelligence in a coexistence model rather than viewing them as competing for…
S15
Welcome Address — “How to make AI machine -centric and human -centric?”[33]. “Friends, the future of work will be inclusive, trusted, and …
S16
Turbocharging Digital Transformation in Emerging Markets: Unleashing the Power of AI in Agritech (ITC) — Moreover, while AI and new technologies have significant potential in agriculture, it is crucial to understand that they…
S17
AI-Driven Enforcement_ Better Governance through Effective Compliance & Services — Explanation:It was unexpected to see both regulatory leaders emphasizing that AI development should not be confined to I…
S18
Comprehensive Report: Preventing Jobless Growth in the Age of AI — You know, first and foremost, this is a technology which is very probabilistic in nature. It is unlike traditional softw…
S19
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Cristiano Amon — “An agent that now understands human intentions because, you know, you just need to tell him what you want.”[32]. “You c…
S20
DiploNews – Issue 329 – 1 August 2017 — ​The field of artificial intelligence (AI) has seen significant advances over the past few years, in areas such as smart…
S21
Multistakeholder Partnerships for Thriving AI Ecosystems — Both speakers emphasize that technology must be made accessible and available to all, not concentrated in the hands of a…
S22
Advancing Scientific AI with Safety Ethics and Responsibility — While both speakers support context-appropriate approaches, there’s an implicit tension between Speaker 1’s emphasis on …
S23
Advancing Scientific AI with Safety Ethics and Responsibility — Explanation:While both speakers support context-appropriate approaches, there’s an implicit tension between Speaker 1’s …
S24
Global AI Policy Framework: International Cooperation and Historical Perspectives — The discussion revealed both shared concerns and different approaches to addressing them. Speakers generally agreed on t…
S25
Global Data Partnership Against Forced Labour: A Comprehensive Discussion Summary — However, notable differences in emphasis emerged between speakers. The primary tension was between technology-focused an…
S26
WSIS Action Line C10: Ethics in AI: Shaping a Human-Centred Future in the Digital Age — Low level of fundamental disagreement with moderate differences in implementation strategies. The speakers largely agree…
S27
Agentic AI in Focus Opportunities Risks and Governance — -Enterprise Guardrails and Risk Management: Panelists emphasized the critical importance of implementing robust safety m…
S28
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Cristiano Amon — “An agent that now understands human intentions because, you know, you just need to tell him what you want.”[32]. “You c…
S29
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Cristiano Amon — Evidence:But now that’s going to get replaced by an agent. there’s an enormous amount of value on things like OSs and ap…
S30
HETEROGENEOUS COMPUTE FOR DEMOCRATIZING ACCESS TO AI — Distribute compute requirements across devices, edge cloud, and data centers rather than concentrating everything in cen…
S31
From Human Potential to Global Impact_ Qualcomm’s AI for All Workshop — All right, I’m just going to click through this. This is good. This is probably a good indication of why the edge matter…
S32
Telecommunications infrastructure — Network operators increasingly rely on AI for a wide range of tasks, fromnetwork planning(e.g. using algorithms to ident…
S33
Omnipresent Smart Wireless: Deploying Future Networks at Scale — H.E. Kyriacos Kokkinos:All right. I believe that we need to see these through the lenses of AI. One key difference… Yo…
S34
From Human Potential to Global Impact_ Qualcomm’s AI for All Workshop — The moderator introduces Durga Malladi’s presentation by emphasizing how Qualcomm’s comprehensive approach spans from ed…
S35
AI for Good Technology That Empowers People — Thank you, Fred. And let me start by saying it’s an absolute pleasure to be sitting with fellow panelists and speakers w…
S36
From KW to GW Scaling the Infrastructure of the Global AI Economy — This honest assessment of India’s position provides crucial context for understanding the scale of opportunity – if Indi…
S37
AI 2.0 The Future of Learning in India — Discussion point:India as a global technology innovation hub
S38
Open Internet Inclusive AI Unlocking Innovation for All — Anandan highlights India’s strength in consumer AI applications, driven by its massive internet user base and specific m…
S39
AI for Safer Workplaces & Smarter Industries_ Transforming Risk into Real-Time Intelligence — This discussion focused on the intersection of artificial intelligence and human capabilities, particularly emphasizing …
S40
High Level Session 3: AI & the Future of Work — Joseph Gordon-Levitt: I get to go next. Cool. Thank you. Thanks for having me. Well, I’ll talk about, you asked, what ar…
S41
From brainwaves to breakthroughs: The future with brain-machine interfaces — – **Technical Capabilities**: All speakers agreed that brain-computer interface technology can successfully translate br…
S42
AI, smart cities, and the surveillance trade-off — The danger isn’t the technology itself, but the assumption that AI-driven solutions are politically neutral, that algori…
S43
Climate change and Technology implementation | IGF 2023 WS #570 — The artificial intelligence can highlight improved sensors that collect real-time environmental data, such as deforestat…
S44
The Future of the Internet: Navigating the Transition to an Agentic Web — And they are able to do that better and better today because of technology. And if AI is going to reach its true purpose…
S45
Challenging the status quo of AI security — Debora Comparin: Good afternoon, everyone. It’s really a pleasure to be here with you today. I will share with you some …
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
S
Speaker 1
1 argument · 154 words per minute · 185 words · 71 seconds
Argument 1
AI as a Human‑Centric Tool – the future will be built by humans who confidently use AI
EXPLANATION
Speaker 1 emphasized that AI should be viewed as a tool that amplifies human capability, stressing that the ultimate builders of the future are people who can harness AI responsibly. The message positions humans, not machines, at the core of technological progress.
EVIDENCE
Speaker 1 highlighted that the concluding remark of the Cisco session emphasized that the future will be built by humans who can confidently use AI, not by AI itself [2].
MAJOR DISCUSSION POINT
AI as a Human‑Centric Tool – the future will be built by humans who confidently use AI
AGREED WITH
Cristiano Amon
DISAGREED WITH
Cristiano Amon
C
Cristiano Amon
11 arguments · 163 words per minute · 3022 words · 1111 seconds
Argument 1
AI will become pervasive, running on edge devices and transforming how we interact with computers
EXPLANATION
Amon described AI moving beyond centralized clouds to operate directly on smartphones, cars, and factory equipment, making intelligence ubiquitous. This shift changes the human‑computer interface by allowing AI to interpret visual, auditory, and textual cues in real time.
EVIDENCE
Cristiano Amon described AI as moving beyond the cloud to run on devices such as smartphones, cars, and factory floors, noting that AI is becoming pervasive and reshaping human-computer interaction by understanding visual, auditory, and textual inputs, effectively changing the devices we use [24-25][41].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Amon’s keynote notes AI moving beyond the cloud to run on smartphones, cars and factory floors, emphasizing pervasive edge AI [S4] and [S6].
MAJOR DISCUSSION POINT
AI will become pervasive, running on edge devices and transforming how we interact with computers
Argument 2
Agents will replace conventional OSs and app stores, becoming the central platform for user intent
EXPLANATION
He argued that the traditional stack of operating systems and app marketplaces will be superseded by intelligent agents that directly interpret human intentions. These agents become the new platform, creating value by acting autonomously across the internet and devices.
EVIDENCE
He explained that the traditional value chain of operating systems and app stores will be superseded by agents that understand human intent, acting as the new platform for applications, and that this shift will create new sources of value as agents become autonomous and can act on the internet and devices without being limited by hardware or app constraints [27-35].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The shift from OS/app stores to intent-understanding agents is described, positioning agents as the new platform [S6] and [S4].
MAJOR DISCUSSION POINT
Agents will replace conventional OSs and app stores, becoming the central platform for user intent
DISAGREED WITH
Speaker 1
Argument 3
Multi‑device agents (phones, glasses, wearables) will provide seamless, context‑aware experiences
EXPLANATION
Amon highlighted that agents will be reachable from a variety of form factors—phones, smart glasses, pendants—delivering consistent, context‑aware services wherever the user is. He gave a concrete scenario where smart glasses recognize a product, initiate a purchase, and handle payment instantly.
EVIDENCE
Amon illustrated that agents will be accessible not only from phones but also from glasses, pendants, and other wearables, providing a seamless, context-aware experience across multiple form factors, and gave a concrete scenario where smart glasses recognize a product, initiate a purchase, and handle payment through integrated services [36-39][51-58].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Multi-device agent access via phones, glasses, pendants and other wearables is detailed in the presentation [S4] and [S6].
MAJOR DISCUSSION POINT
Multi‑device agents (phones, glasses, wearables) will provide seamless, context‑aware experiences
Argument 4
The cloud/edge distinction is less relevant; intelligence will be distributed across device, edge, and cloud
EXPLANATION
He contended that the ongoing debate over cloud versus edge is misplaced because future AI workloads will be spread across the entire continuum—from on‑device processing to near‑edge and cloud—making the binary distinction obsolete.
EVIDENCE
He argued that the ongoing debate about cloud versus edge is misplaced because intelligence will be distributed across the cloud, near-edge, network, and on-device, making the distinction less relevant to future computing architectures [65-69].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The cloud versus edge debate is called misguided, with distributed intelligence across cloud, edge, network and devices [S4] and [S6].
MAJOR DISCUSSION POINT
The cloud/edge distinction is less relevant; intelligence will be distributed across device, edge, and cloud
Argument 5
Real‑time, context‑specific tasks stay on‑device while broader processing moves to the cloud, transparently to users
EXPLANATION
Amon explained that latency‑sensitive or highly personalized functions will remain on the device, whereas heavy, non‑real‑time processing will be offloaded to the cloud. This split will be invisible to users, ensuring fast and relevant interactions.
EVIDENCE
Amon further clarified that tasks requiring instant response or personal context will remain on the device, while larger-scale processing will occur in the cloud, and this split will be transparent to users, ensuring fast and relevant agent interactions [76-78][90-93].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
On-device instant response combined with cloud-backed processing is highlighted as transparent to users [S6].
MAJOR DISCUSSION POINT
Real‑time, context‑specific tasks stay on‑device while broader processing moves to the cloud, transparently to users
Argument 6
6G will go beyond speed, embedding AI into the network to provide sensing, context, and new services
EXPLANATION
He outlined that the next generation of wireless (6G) will not only deliver higher throughput and lower latency but will also integrate AI directly into the telecom fabric, turning the network itself into a large‑scale sensing and decision‑making platform.
EVIDENCE
He outlined that 6G will not only deliver higher speed, lower latency, and broader coverage but will also embed AI directly into the telecom network, turning it into a large-scale AI-enabled sensing platform that can process network signals for new capabilities [127-133].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
6G is portrayed as embedding AI into the telecom fabric, turning the network into a large-scale sensing platform [S4] and [S6].
MAJOR DISCUSSION POINT
6G will go beyond speed, embedding AI into the network to provide sensing, context, and new services
Argument 7
The AI‑powered network will support use cases like autonomous driving, drone detection, and industry‑wide analytics
EXPLANATION
Amon gave concrete examples of how an AI‑infused 6G network can leverage radar‑like sensing to map environments, enabling services such as traffic management, self‑driving car support, drone detection, and aerial wide‑area networking.
EVIDENCE
Amon gave examples of the AI-powered 6G network using radar-like techniques from autonomous vehicles to sense the environment, providing services such as traffic management, self-driving car support, drone detection, and aerial wide-area networking, thereby creating industry-wide analytics [136-144].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
AI-enabled 6G network use cases such as traffic management for autonomous vehicles and drone detection are cited [S6].
MAJOR DISCUSSION POINT
The AI‑powered network will support use cases like autonomous driving, drone detection, and industry‑wide analytics
Argument 8
India’s high mobile data usage positions it to lead AI‑driven transformation across manufacturing, smart cities, health, education, and agriculture
EXPLANATION
He pointed out that India’s massive per‑user mobile data consumption makes it uniquely positioned to spearhead AI‑enabled innovations across multiple sectors, building on its earlier leapfrogging from voice to mobile internet.
EVIDENCE
He pointed out that India’s massive mobile data consumption per user makes it a prime candidate to lead AI-driven transformation, citing the country’s early leapfrogging to mobile internet and its potential to generate new devices, experiences, and services across sectors [149-154].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
India’s mobile-first data consumption and leapfrogging to mobile internet are presented as foundations for AI transformation [S6] and [S4].
MAJOR DISCUSSION POINT
India’s high mobile data usage positions it to lead AI‑driven transformation across manufacturing, smart cities, health, education, and agriculture
Argument 9
Democratizing AI technology can boost global welfare and reinforce India’s role as a manufacturing hub
EXPLANATION
Amon argued that making AI accessible to all will enhance worldwide well‑being and highlighted India’s emerging status as a global manufacturing centre, linking AI‑driven smart manufacturing, cities, health, education, and agriculture to broader socioeconomic gains.
EVIDENCE
Amon emphasized that democratizing AI will increase global welfare and highlighted India’s emerging role as a global manufacturing hub, linking AI-enabled smart manufacturing, smart cities, health, education, and agriculture to broader socioeconomic benefits [158-166][170-172].
MAJOR DISCUSSION POINT
Democratizing AI technology can boost global welfare and reinforce India’s role as a manufacturing hub
AGREED WITH
Speaker 1
Argument 10
Qualcomm’s unique semiconductor portfolio spans ultra‑low‑power chips to high‑performance data‑center processors, enabling AI everywhere
EXPLANATION
He highlighted Qualcomm’s breadth of semiconductor capabilities, from sub‑2 mW chips for earbuds to 2,000 W processors for data centers, positioning the company to power AI across the full spectrum of devices.
EVIDENCE
He noted Qualcomm’s unique position as a semiconductor company that designs chips ranging from sub-2 mW ultra-low-power solutions for earbuds to 2,000 W data-center processors, enabling AI capabilities across the entire device spectrum [103-106].
MAJOR DISCUSSION POINT
Qualcomm’s unique semiconductor portfolio spans ultra‑low‑power chips to high‑performance data‑center processors, enabling AI everywhere
Argument 11
The company focuses on enabling partners and industries rather than owning all innovation, positioning itself as an enabler of the AI ecosystem
EXPLANATION
Amon stated that Qualcomm’s strategy is to act as an enabler for partners and various industries, rather than claiming sole ownership of innovation, thereby fostering a broader AI ecosystem.
EVIDENCE
Amon stated that Qualcomm’s strategy is to enable partners and industries rather than claim sole ownership of innovation, positioning the firm as a catalyst for the broader AI ecosystem [172-174].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Qualcomm’s role as an enabler rather than sole innovator is emphasized in the keynote [S4] and [S6].
MAJOR DISCUSSION POINT
The company focuses on enabling partners and industries rather than owning all innovation, positioning itself as an enabler of the AI ecosystem
Agreements
Agreement Points
AI should be viewed as a human‑centric tool that empowers people and partners rather than an autonomous force that builds the future on its own
Speakers: Speaker 1, Cristiano Amon
AI as a Human‑Centric Tool – the future will be built by humans who confidently use AI
The company focuses on enabling partners and industries rather than owning all innovation
Democratizing AI technology can boost global welfare and reinforce India’s role as a manufacturing hub
Both speakers stress that AI is a technology that amplifies human capability and that the real drivers of future progress are people, partners and societies that adopt and steer AI, not the AI itself [2][172-174][170-172]
POLICY CONTEXT (KNOWLEDGE BASE)
The view aligns with human-centric AI policies such as the WSIS Action Line C10 on ethics and the inclusive AI guidelines that stress AI as a tool to augment human decision-making rather than replace it [S14][S15][S16][S26].
Similar Viewpoints
Both emphasize a human‑first approach to AI, positioning companies and societies as enablers rather than AI being an independent creator of value [2][172-174]
Speakers: Speaker 1, Cristiano Amon
AI as a Human‑Centric Tool – the future will be built by humans who confidently use AI
The company focuses on enabling partners and industries rather than owning all innovation
Unexpected Consensus
Both speakers implicitly endorse the idea that AI’s greatest impact will be achieved through widespread, inclusive deployment rather than exclusive, proprietary control
Speakers: Speaker 1, Cristiano Amon
AI as a Human‑Centric Tool – the future will be built by humans who confidently use AI
Democratizing AI technology can boost global welfare and reinforce India’s role as a manufacturing hub
While Speaker 1 focuses on the human-centric nature of AI, Amon extends this to a broader societal level, calling for democratization of AI to raise global welfare – a convergence that was not explicitly anticipated from the opening remarks [2][170-172]
POLICY CONTEXT (KNOWLEDGE BASE)
Both speakers’ emphasis on inclusive, widespread deployment mirrors multistakeholder AI ecosystem recommendations and the Global AI Policy Framework’s call for democratized access rather than concentration in the hands of a few [S21][S24].
Overall Assessment

The discussion shows a clear alignment between the opening remarks and the keynote on the principle that AI must serve humanity, with both speakers highlighting the role of people, partners and inclusive access as the engine of future innovation.

High consensus on the human‑centric, enabling view of AI; limited consensus on technical specifics (edge vs cloud, 6G, agents) as those were addressed only by Amon. The shared stance reinforces policy messages around responsible AI deployment, capacity building and inclusive digital development.

Differences
Different Viewpoints
Human‑centred AI versus AI‑driven agents as the primary platform
Speakers: Speaker 1, Cristiano Amon
AI as a Human‑Centric Tool – the future will be built by humans who confidently use AI
Agents will replace conventional OSs and app stores, becoming the central platform for user intent
Speaker 1 stresses that AI should be viewed as a tool that amplifies human capability and that the future will be built by people who can confidently use AI, not by AI itself [2]. Amon, in contrast, envisions a future where intelligent agents supersede operating systems and app stores, acting autonomously on behalf of users and becoming the core platform of computing [27-35]. This reflects a divergence between a human-centric view of AI and a vision of AI taking a central, quasi-autonomous role.
POLICY CONTEXT (KNOWLEDGE BASE)
The tension between a human-centred platform and agentic AI reflects ongoing debates in AI governance, highlighted in discussions on agentic AI safety and the need for human oversight [S19][S25][S27].
Unexpected Differences
Extent of AI autonomy in replacing traditional software stacks
Speakers: Speaker 1, Cristiano Amon
AI as a Human‑Centric Tool – the future will be built by humans who confidently use AI
Agents will replace conventional OSs and app stores, becoming the central platform for user intent
The introductory remarks by Speaker 1 focus on AI as a supportive tool, without anticipating a shift where AI agents would supplant operating systems and app marketplaces. Amon’s claim that agents will become the new platform is a more radical view of AI autonomy that was not hinted at in the opening, making this an unexpected point of divergence [2][27-35].
POLICY CONTEXT (KNOWLEDGE BASE)
Questions about AI autonomy versus traditional deterministic software stacks echo analyses of AI’s probabilistic nature and calls for human-in-the-loop safeguards in high-risk environments [S18][S27].
Overall Assessment

The primary disagreement centers on whether AI remains a human‑centric tool or evolves into autonomous agents that replace core software layers. Apart from this, the speakers largely concur on AI’s pervasiveness and the diminishing relevance of the cloud/edge debate. The disagreement is substantive but limited to the vision of AI’s role in the computing stack.

Moderate – the clash over AI’s autonomy could influence policy and industry strategies regarding governance, accountability, and the design of future digital ecosystems.

Partial Agreements
Both speakers acknowledge that AI is moving out of the cloud and into devices such as smartphones, cars, and factory equipment, making intelligence ubiquitous and reshaping human‑computer interaction [6][24-25][41].
Speakers: Speaker 1, Cristiano Amon
AI will become pervasive, running on edge devices and transforming how we interact with computers
While Speaker 1 emphasizes human agency, both agree that AI will be distributed across many devices and not confined to a single infrastructure, implying that humans will still interact with AI across the ecosystem [2][65-69].
Speakers: Speaker 1, Cristiano Amon
AI as a Human‑Centric Tool – the future will be built by humans who confidently use AI
The cloud/edge distinction is less relevant; intelligence will be distributed across device, edge, and cloud
Takeaways
Key takeaways
AI will become a pervasive, human‑centric tool that runs on edge devices and transforms the human‑computer interface.
Agentic AI will replace traditional OSs and app stores, acting as the central platform for user intent across multiple devices (phones, glasses, wearables, etc.).
The distinction between cloud and edge is being reframed; intelligence will be distributed transparently, with real‑time, context‑specific tasks on‑device and broader processing in the cloud.
6G will go beyond higher speeds, embedding AI into the network to provide large‑scale sensing, context awareness, and new services such as autonomous‑driving support, drone detection, and industry analytics.
India’s massive mobile data usage positions it to lead AI‑driven transformation in manufacturing, smart cities, healthcare, education, and agriculture, and to reinforce its role as a global manufacturing hub.
Qualcomm’s broad semiconductor portfolio (from sub‑2 mW chips to 2 kW data‑center processors) enables AI everywhere, and the company’s strategy is to act as an enabler for partners rather than to own all innovation.
Resolutions and action items
None identified
Unresolved issues
Specific timelines and roadmaps for deploying agentic AI platforms across consumer devices.
Details on how security, privacy, and trust will be ensured in a highly distributed AI ecosystem.
Standards and interoperability frameworks for AI‑enabled 6G networks and multi‑device agents.
Economic models and incentives for partners to adopt Qualcomm’s AI‑edge solutions.
Regulatory considerations for large‑scale data collection from devices such as smart glasses.
Suggested compromises
Reframe the cloud vs. edge debate as a complementary distribution of intelligence rather than an either/or choice, allowing both on‑device and cloud processing to coexist transparently.
Thought Provoking Comments
AI will replace the traditional OS and applications – the agent becomes the central interface that can be accessed from phones, glasses, wearables, and even the network itself.
This reframes the entire software stack, suggesting a paradigm shift from app‑centric computing to agent‑centric computing, which challenges the entrenched model of operating systems and app stores.
It pivoted the discussion from incremental AI improvements to a wholesale re‑imagining of device ecosystems. Listeners were prompted to consider how business models, developer platforms, and user experiences would need to evolve, opening the floor to talk about cross‑device agents and new value creation.
Speaker: Cristiano Amon
The debate about cloud vs. edge is the wrong way to look at it – intelligence will be distributed across cloud, near‑edge, network, and on‑device, working together transparently.
By dismissing a binary cloud/edge framing, Amon introduced a more nuanced, systems‑level view of AI deployment, emphasizing seamless integration rather than competition between infrastructures.
This comment shifted the tone from a technical tug‑of‑war to a collaborative vision, leading to subsequent explanations of how latency‑sensitive tasks stay on‑device while large‑scale learning runs in the cloud, and setting up the later discussion of 6G as an AI‑enabled network.
Speaker: Cristiano Amon
6G’s biggest story isn’t just faster speeds; it will be an AI‑powered sensing network that maps the environment at scale, providing context for agents and new services across industries.
It expands the conversation about next‑generation wireless beyond bandwidth, positioning AI as an intrinsic layer of the telecom fabric and linking connectivity to real‑world perception.
This introduced a new topic—AI‑embedded networks—and connected it to earlier points about agents needing context. It also set up a forward‑looking narrative about industry transformation, prompting listeners to envision applications in autonomous driving, drone detection, and smart cities.
Speaker: Cristiano Amon
AI is changing the human‑computer interface: we will no longer need to learn keyboards or touch screens because AI will understand what we see, hear, say, and write.
The comment broadens the impact of AI from a tool to a fundamental redesign of interaction, challenging the assumption that UI design will continue to dominate user experience.
It deepened the analysis by moving from device‑level capabilities to the experiential layer, encouraging the audience to think about accessibility, ergonomics, and the societal implications of a more intuitive interface.
Speaker: Cristiano Amon
India can once again leapfrog the world—just as it did with mobile internet—by becoming a global hub for AI‑driven manufacturing, smart cities, healthcare, education, and agriculture.
This ties the technical vision to a concrete geopolitical and economic narrative, highlighting how regional ecosystems can shape and benefit from the AI transition.
It shifted the conversation toward real‑world opportunity and policy, prompting listeners to consider how local talent, manufacturing capacity, and regulatory frameworks could accelerate AI adoption, and reinforcing the summit’s ambition of democratizing technology.
Speaker: Cristiano Amon
Overall Assessment

The identified comments acted as catalytic moments that transformed the presentation from a series of technical updates into a forward‑looking, ecosystem‑wide narrative. By redefining the software stack around agents, reframing cloud/edge debates, positioning AI as the core of 6G networks, and linking these trends to human interaction and regional opportunity, Cristiano Amon steered the audience toward a holistic view of AI’s pervasive role. Each insight opened new thematic avenues—business models, infrastructure design, societal impact, and economic strategy—thereby deepening the discussion and setting a strategic tone for the remainder of the summit.

Follow-up Questions
How will the shift from traditional OS and app ecosystems to an agent‑centric model be realized, and what are the implications for developers and users?
Understanding this transition is crucial because agents could become the primary interface for devices, potentially replacing existing platforms and reshaping the software ecosystem.
Speaker: Cristiano Amon
What are the technical and architectural challenges of distributing AI workloads across cloud, near‑edge, and on‑device environments while maintaining seamless performance?
Clarifying these challenges is important to ensure low‑latency, context‑aware responses and to guide the design of future AI‑enabled hardware and networks.
Speaker: Cristiano Amon
How can privacy and security be ensured when agents continuously collect and process massive personal data from devices such as smart glasses, earbuds, and wearables?
Addressing privacy concerns is essential for user trust and regulatory compliance as agents rely on extensive personal context to function effectively.
Speaker: Cristiano Amon
What standards, protocols, and security frameworks are needed to support a 6G AI‑enabled sensing network that can map environments at scale?
Defining these standards will be key to building interoperable, reliable, and secure infrastructure for the next generation of connectivity.
Speaker: Cristiano Amon
What specific opportunities and strategies should India pursue to leverage AI‑driven transformation in manufacturing, smart cities, healthcare, education, and agriculture?
Identifying actionable pathways will help India capitalize on its large mobile data consumption and position itself as a global AI hub.
Speaker: Cristiano Amon
What are the performance, power, and thermal requirements for AI chips that span from sub‑2 mW wearables to 2 kW data‑center processors, and how can a single company address this spectrum?
Understanding these requirements is vital for designing scalable semiconductor solutions that can power agents across all device categories.
Speaker: Cristiano Amon
What are the viable business models and use cases for AI agents deployed on diverse form factors such as smart glasses, pendants, earbuds, and other wearables?
Exploring these models will guide product development and ecosystem partnerships, ensuring agents deliver value across multiple device types.
Speaker: Cristiano Amon
What timeline and adoption roadmap can be expected for consumer‑grade smart glasses and other agent‑enabled devices, and what barriers must be overcome?
Projecting adoption rates helps stakeholders plan investments, address technical hurdles, and align market expectations.
Speaker: Cristiano Amon
How should latency‑sensitive tasks be allocated between on‑device processing and cloud/edge resources to optimize user experience?
Optimizing task placement is critical for delivering instant, context‑aware responses while managing network load and device capabilities.
Speaker: Cristiano Amon

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Giordano Albertazzi


Session at a glance: summary, keypoints, and speakers overview

Summary

The session featured Giordano Albertazzi, CEO of Vertiv, a company that supplies digital infrastructure for data centers and communication networks [1-3]. Albertazzi opened by noting that most AI discussions focus on capabilities, but he intended to address the often-overlooked physical infrastructure that makes AI possible [9-16]. He highlighted the rapid densification of compute, explaining that modern GPU-driven racks are moving from 10-20 kW to 30-150 kW and potentially up to one megawatt per rack, fundamentally changing data-center design [25-33][34-35].


To support this shift, Vertiv advocates treating power, cooling, and power-train components as an integrated “body” rather than separate systems, requiring full orchestration and interoperability [45-48]. The company is developing 800-volt DC power architectures and advanced thermal chains to move heat from chips to extraction and reuse, reflecting evolving power-density demands [64-66][73-75]. Albertazzi emphasized Vertiv’s modular, prefabricated solutions, such as VertiSmart Run, which can cut deployment time by roughly 85 % compared with traditional on-site builds [91-97].


He also stressed the strategic importance of India, citing abundant power, favorable demographics, and Vertiv’s long-standing presence as reasons to expand capacity and position the country as a global AI hub [99-112]. Vertiv’s partnership with NVIDIA is presented as a joint effort to create reference designs that target AI workloads and accelerate market leadership [54-58]. Speaker 3 later contrasted conventional sequential construction with prefabricated modular approaches, describing Vertiv OneCore as a system of factory-built power and thermal blocks installed in a steel shell or existing building [118-124]. This OneCore concept combines the speed of modular construction with the reliability of tested components, addressing the growing IT loads driven by AI [120-124].


Across the remarks, both presenters agreed that the speed, scale, and density of AI-driven data centers demand new, resilient infrastructure that can be deployed rapidly and operate at gigawatt levels [87-92]. The discussion concluded that integrating power, cooling, and modular construction is essential for meeting future AI infrastructure needs worldwide [78-82][96-98]. Overall, the speakers conveyed that Vertiv’s integrated, prefabricated solutions and strategic focus on markets like India are positioned to support the accelerating evolution of AI data centers [52-57][99-107].


Keypoints


Major discussion points


The physical infrastructure (power, cooling, and density) is the foundation that makes AI possible.


Albertazzi stresses that AI’s “physical part” – power delivery, thermal management and extreme rack densification – is often overlooked but essential for the AI stack to function [13-18][25-33][64-66][70-76].


Vertiv is moving toward fully integrated, modular and prefabricated data-center solutions to accelerate deployment.


He describes the need for “orchestrated” and interoperable systems, the dramatic reduction in build time through prefabrication (up to 85 % faster), and introduces the Vertiv OneCore approach that delivers pre-engineered power-thermal blocks in a factory-built shell [46-49][91-97][119-125].


India is positioned as a strategic hub for AI infrastructure, and Vertiv is expanding its presence there.


The speaker highlights India’s abundant power, favorable demographics, and the company’s long-standing operations and plans to increase capacity and investment in the market [52][98-108][109-113].


Collaboration with NVIDIA and the use of reference designs are key to optimizing AI data-center performance.


Albertazzi notes the close partnership with NVIDIA, leveraging joint reference designs that target AI workloads and help Vertiv lead the market [54-58].


Overall purpose / goal


The discussion aims to showcase Vertiv’s expertise and innovations in the physical infrastructure that underpins AI workloads, promote its modular and prefabricated solutions (especially Vertiv OneCore), and underline the company’s strategic commitment to scaling AI-ready data centers, particularly in India, through partnerships such as the one with NVIDIA.


Overall tone


The tone is consistently upbeat, confident, and promotional. Albertazzi begins with a technical, informative style, then shifts to an enthusiastic, optimistic narrative, especially when describing Vertiv’s capabilities, the speed of deployment, and the growth prospects in India, culminating in a forward-looking closing. No major tonal shift to negativity or criticism occurs; the optimism builds toward the end.


Speakers

Giordano Albertazzi – Chief Executive Officer, Vertiv; expertise in digital infrastructure, AI, and data-center solutions [S4][S5][S6]


Speaker 1 – Event host / moderator (introduced speakers and provided closing remarks) [S7][S9]


Speaker 3 – Panel participant; role and area of expertise not specified


Additional speakers:


Mr. El-Battazi – Referenced by Speaker 1; role and expertise not specified


Full session report: comprehensive analysis and detailed insights

The session opened with the moderator introducing Giordano Albertazzi, chief executive officer of Vertiv, a global supplier of digital-infrastructure solutions for data centres and communication networks, and highlighting Vertiv’s role in powering critical applications worldwide [1-3]. Albertazzi thanked the audience, noted the prevalence of AI discussions centred on software capabilities, and announced that he would focus on the often-overlooked “physical part” of AI – the power, cooling and density challenges that make the AI stack possible [9-16]. Albertazzi reminded the audience that Vertiv originated as part of Emerson Electric and has been an independent, NYSE-listed company for nearly a decade [15-18].


Physical challenges of AI workloads


Albertazzi described how the rapid adoption of GPU-driven AI workloads is driving extreme rack densification: racks that once consumed 10-20 kW are now reaching 30-150 kW and may eventually approach 1 MW per rack, fundamentally reshaping data-centre architecture [25-33][34-35]. To accommodate this evolution Vertiv is concentrating on three inter-related infrastructure layers – the power-train that carries energy from the utility to the chip, the thermal chain that extracts and potentially re-uses the heat generated by high-density compute, and the integration of these subsystems into a single, orchestrated “body” rather than a collection of disparate components [64-66][70-76][45-48]. Using a biological analogy, Albertazzi compared the AI “brain” (the IT stack) to a human brain that cannot function without a body, emphasizing Vertiv’s mission to provide that supporting power-and-cooling “body” [38-44][46-48].


He noted that industry architectures are gradually migrating toward an 800-volt DC power-train to better handle rising power densities [66-67], and that the thermal chain now extends from the chip through advanced cooling mechanisms to heat-extraction and, crucially, to heat-reuse systems, thereby improving overall energy efficiency [73-75][76].


Integrated, modular solutions


To meet the speed and scale demanded by AI, Vertiv is championing modular, prefabricated data-centre solutions. He highlighted the VertiSmart Run approach, which delivers factory-built, pre-tested power and cooling modules that can slash on-site construction time by roughly 85 % – a dramatic improvement over traditional builds [91-97].


Speaker 3 then introduced Vertiv OneCore, a solution that inserts pre-engineered power-thermal blocks into a steel-shell building (new or existing), combining the reliability of factory testing with the rapid deployment of modular construction [118-125][60-62]. This system is designed for higher efficiency and supports a modular building-block capacity range from 12.5 MW up to gigawatt-scale operation [87-89][90-92].


Albertazzi also reframed the notion of the compute unit, arguing that the server is no longer the fundamental building block; instead, AI workloads are organised into “pods”, and ultimately the entire data-centre functions as a single, massive computer capable of gigawatt-scale operation [78-84]. This shift underscores the need for converged, scalable infrastructure that can be deployed quickly and operated as a unified system [87-89].


India was presented as a strategic hub for this emerging AI infrastructure. Albertazzi cited the country’s abundant power supply, favourable demographics and Vertiv’s long-standing presence as key enablers for large-scale AI data-centres, announcing plans to expand capacity and investment in India and positioning the nation as a global AI hub [52][98-108][109-113].


Vertiv highlighted its collaboration with NVIDIA, which yields joint reference designs tailored to AI workloads and helps Vertiv maintain a leadership position in the AI-ready data-centre segment [44-58].


Across the three speakers there was strong consensus that robust, integrated power-train and thermal infrastructure is indispensable for AI, and that modular, prefabricated construction is the most effective way to achieve rapid, large-scale roll-outs. Both the moderator’s introductory framing of digital infrastructure and the technical elaborations by Albertazzi and Speaker 3 converge on the need for interoperable, pre-tested components that reduce build times while supporting extreme power densities [1][13-18][46-48][119-125].


Albertazzi closed by expressing strong optimism for AI’s future and gratitude to the audience [115-117]. The discussion positioned Vertiv as a key enabler of the AI revolution, offering integrated power-train and thermal solutions, high-voltage DC architectures, and modular construction methods such as VertiSmart Run and OneCore. The company’s strategic focus on India and its collaboration with NVIDIA were highlighted as drivers of future growth, while acknowledging ongoing challenges around the detailed road-map for 800-V DC adoption, large-scale heat-reuse implementation, and supply-chain or regulatory hurdles associated with ultra-rapid deployments.


Session transcript: Complete transcript of the session
Speaker 1

Well, ladies and gentlemen, now it’s my pleasure to invite our next speaker, Mr. Giordano Albertazzi, who is the chief executive officer of Vertiv, a global company that provides digital infrastructure solutions for data centers and communication networks. Under his leadership, Vertiv is advancing its role as a global industry leader by accelerating innovation, strengthening technology leadership, and enabling the digital infrastructure that powers critical applications worldwide. Ladies and gentlemen, please welcome Mr. Giordano Albertazzi.

Giordano Albertazzi

Thank you very much. The clicker? Oh, yeah, here. Better with the clicker. Good afternoon, everyone. And it’s absolutely a pleasure and an honor being on this stage with so many distinguished presenters. In the last two days, I’ve had the opportunity to talk about AI. An astonishing thing to me is that the majority of the AI conversations, as it should be, are about what AI can do. A very interesting presentation, just finished, tells about all the beautiful things that AI can do and particularly what AI can do here in India. But when we talk AI, we also talk about data centers. So let me go then to the physical part of AI, not just what AI can do for us.

There is an important, very important physical part of AI that sometimes is overlooked. And it shouldn’t, because it’s that physical part that makes AI actually possible. So I’ll talk about the physical part today. I will talk about the power. The cooling, the data center infrastructure. Vertiv and myself, with Vertiv, have been in the industry for decades. Well, Vertiv longer than me. It used to be part of Emerson, Emerson Electric, and we are almost 10 years as an independent company now publicly traded in New York. But what we do is really make sure that that physical part is provided with the best technology that supports the continuous evolution of the AI IT stack as those rapidly, almost exponentially, and I’m talking almost exponentially from a mathematical standpoint, evolve.

And it’s no easy task, a task that we do very well because we know the space a lot. We have a lot of innovation. But there are several dimensions to this. One is the extreme densification. Now, we all know what GPUs are. Probably two years ago, the majority of people didn’t have any clue about what a GPU is. But now, GPU, NVIDIA is absolutely central to everything, all the conversation about AI. Well, that phenomenal evolution from a technology standpoint is changing the DNA of a data center. What used to be a rack with IT inside, 10, 15, 20 kilowatts per rack, is rapidly becoming more dense, with more power and heat to dissipate in it. This is going to 30, 50, 150 kilowatts per rack, all the way, in the possible future.

One megawatt per rack. That’s a lot of power in a single rack. The design of a data center is changing dramatically. As this design changes, of course, also the technology that supports it needs to change. But let me go back to AI, artificial intelligence. Let me go back to, and let me draw, a parallel: human intelligence. Human intelligence happens in the brain. But the brain doesn’t survive without a body. What we are, what we do at Vertiv, is make that body, provide that technology for that body so that the brain can function, and that brain is the IT stack. But not only can that brain function, it can also produce intelligence. And that’s what an AI does.

That’s what an AI factory, an AI data center, is doing. But just like the body, historically, data centers and data center engineering were viewed as disparate systems coming together. Now, we cannot think about a human body, or any body, as individual parts: a chiller, a liquid cooling unit, an uninterruptible power supply, or whatever else in the powertrain or thermal chain you can think of. Everything needs to be orchestrated. Everything needs to be interoperable. Everything must be thought of as one thing. And that’s what we do, in a world that is extraordinarily challenging, but it’s a challenge that we, of course, respond to very successfully, challenging in terms of time of deployment and in terms of scale of deployment.

Okay. Data centers need to be developed faster and faster and are becoming bigger and bigger. You heard that. It is about data centers. India is certainly privileged from an AI standpoint also because there is a lot of power available that can be harnessed for more and more powerful and larger data centers. Now, as that happens, again, if you think, go back to my analogy of the body, you think about a system, you think about everything that is the body of artificial intelligence, then it is about changing the way we build that body: from one piece at a time, with a lot of activity happening on site, laborious, hard from a quality standpoint, to mostly integrated at factory level and deployment.

NVIDIA continues to lead the world in terms of technology, in terms of IT stack, but also in terms of thought process for the infrastructure. And it’s something that we do a lot together. Well, then it is not just about the infrastructure and the speed and the size and the scale. It’s also about optimizing the infrastructure with reference designs that exactly target that type of application. So we, of course, are thrilled and always honored to partner with NVIDIA in this adventure and venture and lead the market in this respect. So here you have an example of what we call Vertiv OneCore. This is an example of a fully pre-engineered, defined data center. But when we talk about the body, the body of AI, the data center, then let’s talk very simply.

We talk about three fundamental, fundamental elements of that body. One is the powertrain. So everything that goes from the grid, if you will, from your utility, takes that power all the way to the chip. That power infrastructure is changing, is evolving as the power density changes. And the current architectures are migrating towards, over time, what is an 800-volt DC power infrastructure. I’m going technical on you. Some of you I know are very technical, so I’m not afraid about that. But I will not go deep. So everything you see on the left side of this is exactly a representation of that powertrain. So bring the energy, take that energy to the chip. Then the chip and all the electronic components in a server generate heat.

And that heat can be very dense, and requires very, very advanced cooling mechanisms. And that’s the beginning of what we like to call a thermal chain that starts, and it’s what you see on the right side of this chart, from the chip all the way to the heat extraction, then the heat rejection, or even more importantly, and more extensively so, the heat reuse. So this is the system, the fundamental systems of this body. But again, it’s not just about the components of the system, it’s how the entire system works. And more and more, we see that when we think about the AI IT infrastructure, what used to be thought of as a server at a time is becoming really an AI pod, an AI unit at a time. The unit of compute is no more the server. It is the pod. The unit of compute is not even the pod. It’s the entire data center operating as one single computer. A unit of compute that can go all the way to gigawatts. So it is about making sure, and I believe we do it very well, that I say uniquely well, but of course I root for ourselves, it is about making that infrastructure available at scale and in a very easy, modular-to-deploy fashion. And that’s what we do: a repeatable, converged infrastructure. So we have a lot of building blocks that can go from 12.5 megawatts all the way to gigawatts.

So clearly it’s not just about building that infrastructure, but that infrastructure over time needs to be, like we like to say, future resilient. Some people, like myself, have been in the industry of data centers for quite some time, and it’s fascinating the speed at which things happen. And this speed is also enabled by new solutions that make prefabricated and very fast to deploy parts of data centers that used to be very, very laborious. Take a data center. It’s empty when the building is new. You have to fill it with power, with cooling, with cables. You have to put in the racks. Very laborious and time consuming. Time to token is of the essence. Prefabrication, for example, with what we do with Verti Smart Run, reduces the time to deploy by almost 85%, almost an order of magnitude.

So the industry is changing, not only in scale and in density, but also in the way things are done and deployed. Let me take a different angle now and focus on India. India is clearly central to the AI evolution and revolution, and central certainly in terms of the infrastructure that is being built and the infrastructure that will be built in the future and in the coming years. This infrastructure, and the speed at which this infrastructure will be built, of course, will depend, as I was saying, on the ability of the likes of Vertiv, but certainly Vertiv, given our prominent position also in India, to really enable this at scale and at speed in the ways that I explained.

So Vertiv in India has a long tradition. We’ve been here for decades. We have what I believe is an awesome team and awesome partnerships. And now this forum, these sessions, these few days convinced me even more of the importance of India as a place. A place to invest. And invest we will. We are expanding our capacity and will continue to expand capacity. We see India certainly as an extremely promising market, as a hub for AI, not only for India, but globally. So it has got the power availability. Certainly India has got the right demographics. So I couldn’t be more excited about the business in India. I couldn’t be more excited about what we’re doing in India and what our partners are doing in India.

So with that, I’m extremely optimistic. I’m a big optimist about what AI will bring, as we heard. And with that, thank you very much. Thank you.

Speaker 1

Thank you so much, Mr. Albertazzi, for your impactful address and also for…

Speaker 3

Data centers have, up until now, been usually constructed in one of two ways. Traditional data center build follows a sequential process, materials and equipment arriving individually on site, with the build progressing from the ground up. Alternatively, prefabricated modular construction can offer many advantages, such as quicker deployments and risk reduction. Vertiv offers many solutions in this space. However, in the age of increasing IT loads powered by artificial intelligence, there’s another option that combines the advantages of both: Vertiv OneCore. Vertiv power and thermal infrastructure building blocks are inserted into a brand new Vertiv-supplied steel building shell, or an existing building. Infrastructure building blocks are made in controlled factory environments and tested before construction. The system is also equipped with a new, more efficient…

Related Resources: Knowledge base sources related to the discussion topics (10)
Factual Notes: Claims verified against the Diplo knowledge base (4)
Confirmed (high)

“Giordano Albertazzi is the chief executive officer of Vertiv.”

The knowledge base identifies Albertazzi as the CEO of Vertiv, confirming his role [S5] and [S6].

Confirmed (high)

“Albertazzi emphasized that the physical component—encompassing power systems, cooling mechanisms, and data‑centre infrastructure—is the foundational layer that makes AI possible.”

Both S6 and S5 state that Albertazzi highlighted the physical infrastructure (power, cooling, data‑centre) as the essential layer for AI, corroborating the claim.

Confirmed (medium)

“He described three inter‑related infrastructure layers – the power‑train, the thermal chain, and their integration into a single orchestrated “body”.”

The knowledge base notes that Albertazzi stresses the importance of power systems, cooling mechanisms, and overall data-centre infrastructure as a unified foundation, aligning with the three-layer description [S6] and [S5].

Confirmed (medium)

“Albertazzi highlighted the need for speed at scale, stating that faster deployment of GPU structures accelerates AI benefits.”

S41 discusses Albertazzi’s focus on speed at scale and the importance of rapid GPU deployment, confirming this point.

External Sources (45)
S1
Building the Workforce_ AI for Viksit Bharat 2047 — -Speaker 1- Role/Title: Not specified, Area of expertise: Not specified -Speaker 3- Role/Title: Not specified, Area of …
S2
S4
S5
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Giordano Albertazzi — 1705 words | 123 words per minute | Duration: 830 seconds Human intelligence happen in the brain. But the brain doesn’…
S7
Keynote-Martin Schroeter — -Speaker 1: Role/Title: Not specified, Area of expertise: Not specified (appears to be an event moderator or host introd…
S8
Responsible AI for Children Safe Playful and Empowering Learning — -Speaker 1: Role/title not specified – appears to be a student or child participant in educational videos/demonstrations…
S9
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Vijay Shekar Sharma Paytm — -Speaker 1: Role/Title: Not mentioned, Area of expertise: Not mentioned (appears to be an event host or moderator introd…
S10
AI as critical infrastructure for continuity in public services — Speakers:Atsuko Okuda, Edyta Gorzon Speakers:Atsuko Okuda, J.J. Singh, Mariusz Kura, Lidia Speakers:Chengetai Masango,…
S11
Multistakeholder Partnerships for Thriving AI Ecosystems — Speaker 1 argues that the fundamental challenge is not just data fragmentation but the lack of adequate sensing infrastr…
S12
Leaders’ Plenary | Global Vision for AI Impact and Governance- Afternoon Session — Albertazzi commits to increasing Vertiv’s operations in India across multiple dimensions including manufacturing capabil…
S13
From KW to GW Scaling the Infrastructure of the Global AI Economy — The speakers stressed the importance of reference designs and standardised approaches to achieve this transformation eff…
S14
Keynote-Jeet Adani — Distinguished global leaders, innovators and friends, good afternoon and namaste. We gather here today at a decisive inf…
S15
https://app.faicon.ai/ai-impact-summit-2026/inclusive-ai-starts-with-people-not-just-algorithms — So we’re going to give like 30 seconds to each of the panelists as they close. I mean, I think on learning you just star…
S16
Next-Gen Industrial Infrastructure / Davos 2025 — Huang Shan: Can we start? OK. So before we start today’s discussion, I think it’s always hard to compete with Presid…
S17
The Innovation Beneath AI: The US-India Partnership powering the AI Era — Evidence:He notes that data centers are essentially giant boxes providing power and cooling that can adapt to different …
S18
Building Climate-Resilient Systems with AI — How do you execute on it? How do you start delivering the outcome that I think we all are looking for? So that’s the kin…
S19
The Global Power Shift India’s Rise in AI & Semiconductors — The panelists emphasized that true AI leadership requires alignment across four key pillars: silicon, software, systems,…
S20
Keynote-Jeet Adani — As we all know, under peak load, advanced processors generate extraordinary heat. Systems throttle when power falters an…
S21
Hyperscale data centres planned under Meta and NVIDIA deal — Metaannounced a multiyear partnership with NVIDIAto build large-scale AI infrastructure across on-premises and cloud sys…
S22
Phaidra’s AI solution aims to optimise data centre energy consumption — Phaidra, a technology company, hasunveileda newartificial intelligence(AI) platform designed to enhance energy managemen…
S23
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Giordano Albertazzi — Summary:Both speakers agree that prefabricated modular construction offers substantial benefits over traditional sequent…
S24
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Giordano Albertazzi — Both speakers agree that prefabricated modular construction offers substantial benefits over traditional sequential buil…
S25
Smaller Footprint Bigger Impact Building Sustainable AI for the Future — -Speaker 1: Event moderator/host (role inferred from context)
S26
The Global Power Shift India’s Rise in AI & Semiconductors — Thank you. Thank you. across CPUs, GPUs, SoCs, and AI engines that power cutting -edge compute systems worldwide. She br…
S27
Building Population-Scale Digital Public Infrastructure for AI — Speaker 1 serves as a session facilitator, managing the transition between speakers and organizing panel participants fo…
S28
Skilling and Education in AI — In two significant areas, one is in agriculture, which is the highest employer, biggest employer anywhere. It’s also one…
S29
National Disaster Management Authority — High-performance AI systems with hundreds of thousands of GPUs and CPUs require enormous power consumption and water coo…
S30
HETEROGENEOUS COMPUTE FOR DEMOCRATIZING ACCESS TO AI — Summary:There is unanimous agreement that power and energy constraints represent fundamental challenges that must be add…
S31
HETEROGENEOUS COMPUTE FOR DEMOCRATIZING ACCESS TO AI — India faces physical constraints of land, water, and power that will drive infrastructure setup decisions There is unan…
S32
Green AI and the battle between progress and sustainability — AI is increasingly recognised for its transformative potential and growing environmental footprint across industries. Th…
S33
Powering AI _ Global Leaders Session _ AI Impact Summit India Part 2 — Backup generators activated but ran out of fuel after about an hour due to faulty automated refueling systems exacerbati…
S34
Biden issues order to boost AI data centre infrastructure and energy supply — President Joe Biden hassignedan executive order to support the rapid expansion ofAI data centresby providing federal lan…
S35
The Innovation Beneath AI: The US-India Partnership powering the AI Era — The panel opened with Kumar’s observation that whilst AI models receive significant attention, the underlying infrastruc…
S36
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Giordano Albertazzi — A central theme of Albertazzi’s presentation focused on the dramatic transformation occurring in data centre design due …
S37
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Giordano Albertazzi — Thank you very much. The clicker? Oh, yeah, here. Better with the clicker. Good afternoon, everyone. And it’s absolutely…
S38
The Innovation Beneath AI: The US-India Partnership powering the AI Era — The physical infrastructure layer of it is fascinating. And that goes from everything from the foundational layer that y…
S39
Building Climate-Resilient Systems with AI — How do you execute on it? How do you start delivering the outcome that I think we all are looking for? So that’s the kin…
S40
The Innovation Beneath AI: The US-India Partnership powering the AI Era — Okay, good. Thank you. Thank you all for joining and I appreciate it. I am being pitched against my boss, so I’m going t…
S41
From KW to GW Scaling the Infrastructure of the Global AI Economy — Industry partners are implementing NVIDIA-ready data center certification programs for faster deployment NVIDIA and Ver…
S42
Leaders’ Plenary | Global Vision for AI Impact and Governance- Afternoon Session — Albertazzi commits to increasing Vertiv’s operations in India across multiple dimensions including manufacturing capabil…
S43
The Global Power Shift India’s Rise in AI & Semiconductors — The panelists emphasized that true AI leadership requires alignment across four key pillars: silicon, software, systems,…
S44
From KW to GW Scaling the Infrastructure of the Global AI Economy — The speakers stressed the importance of reference designs and standardised approaches to achieve this transformation eff…
S45
Hyperscale data centres planned under Meta and NVIDIA deal — Metaannounced a multiyear partnership with NVIDIAto build large-scale AI infrastructure across on-premises and cloud sys…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
S
Speaker 1
1 argument · 131 words per minute · 88 words · 40 seconds
Argument 1
Recognition of the critical role of infrastructure in AI discussions
EXPLANATION
Speaker 1 acknowledges that conversations about artificial intelligence must include the underlying physical infrastructure. By highlighting this, the speaker stresses that power and cooling systems are as important as AI capabilities themselves.
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Albertazzi’s keynote stresses that AI cannot operate without its supporting power and cooling infrastructure, highlighting the physical layer as essential for AI deployments [S6][S5].
MAJOR DISCUSSION POINT
Recognition of the critical role of infrastructure in AI discussions
AGREED WITH
Giordano Albertazzi
G
Giordano Albertazzi
6 arguments · 123 words per minute · 1705 words · 830 seconds
Argument 1
AI depends on robust power and cooling systems
EXPLANATION
Giordano stresses that the physical layer—specifically reliable power delivery and advanced cooling—is essential for AI to function. Without this infrastructure, the computational workloads that define AI cannot be sustained.
EVIDENCE
He describes a “very important physical part of AI” that makes AI possible and outlines the need for power and cooling in data-center infrastructure, noting that the power-train and thermal chain must be orchestrated to support AI workloads [13-18][64-75].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The presentation underlines the need for reliable power delivery and advanced cooling as the foundation of AI workloads, describing this as the “very important physical part of AI” [S6][S5].
MAJOR DISCUSSION POINT
AI depends on robust power and cooling systems
AGREED WITH
Speaker 3
Argument 2
GPU‑driven AI creates extreme rack power densities
EXPLANATION
Giordano explains that the rise of GPUs for AI has dramatically increased the power demand per rack, moving from tens of kilowatts to potentially megawatt levels. This densification forces a redesign of data‑center architecture.
EVIDENCE
He notes that GPUs are now central to AI, and that racks that used to consume 10-20 kW are evolving toward 30, 50, 150 kW and even up to one megawatt per rack in the future [25-33].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Albertazzi notes the dramatic shift from 10-20 kW per rack to 30-50 kW and beyond, driven by GPU-centric AI workloads, requiring new data-center architectures [S6][S5].
MAJOR DISCUSSION POINT
GPU‑driven AI creates extreme rack power densities
Argument 3
Prefabricated, modular solutions (e.g., Verti Smart Run) cut deployment time by up to 85%
EXPLANATION
Giordano highlights that prefabricated, factory‑built data‑center components dramatically shorten construction cycles. The Verti Smart Run approach can reduce deployment time by roughly 85 %, enabling faster scaling of AI infrastructure.
EVIDENCE
He describes how traditional on-site build is laborious and time-consuming, whereas a prefabricated solution like Verti Smart Run reduces deployment time by “almost 85%, almost an order of magnitude” [91-97].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
He cites that prefabricated, factory-built solutions can reduce construction cycles by roughly 85%, enabling rapid AI-infrastructure scaling [S6][S5].
MAJOR DISCUSSION POINT
Prefabricated, modular solutions (e.g., Verti Smart Run) cut deployment time by up to 85%
AGREED WITH
Speaker 3
Argument 4
Abundant power, favorable demographics make India a global AI hub
EXPLANATION
Giordano points out that India’s large power reserves and demographic advantages position it as a strategic location for AI data‑centers. He argues that these factors make India attractive not only locally but also as a hub for global AI services.
EVIDENCE
He states that “India is certainly privileged… because there is a lot of power available” and later adds that India has “the right demographics” and is “an extremely promising market as a hub for AI” [52][110-111].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Albertazzi positions India as a strategic AI hub because of its large power reserves and demographic advantages, describing the country as “privileged” with ample electricity [S6][S5].
MAJOR DISCUSSION POINT
Abundant power, favorable demographics make India a global AI hub
Argument 5
Vertiv’s long‑standing presence and planned capacity expansion position it to meet India’s AI demand
EXPLANATION
Giordano emphasizes Vertiv’s decades‑long footprint in India, its experienced team, and ongoing capacity‑expansion plans. These assets, he says, enable Vertiv to support the rapid growth of AI infrastructure in the country.
EVIDENCE
He notes that “Vertiv in India has a long tradition… we’ve been here for decades,” and that the company is “expanding our capacity and will continue to expand capacity” to serve the market [101-108].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
He highlights Vertiv’s decades-long footprint in India and ongoing capacity-expansion plans to support growing AI workloads [S6][S12].
MAJOR DISCUSSION POINT
Vertiv’s long‑standing presence and planned capacity expansion position it to meet India’s AI demand
Argument 6
Collaboration with NVIDIA yields optimized reference designs for AI workloads
EXPLANATION
Giordano describes a partnership with NVIDIA that produces reference designs tailored to AI applications. These designs aim to optimize power and thermal efficiency for high‑performance AI workloads.
EVIDENCE
He mentions that “as NVIDIA continues to lead… we are thrilled and always honored to partner with NVIDIA… to optimize the infrastructure with reference designs that exactly target that type of application” [54-58].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The partnership with NVIDIA is presented as a way to create reference designs that optimize power and thermal efficiency for AI applications [S6][S13].
MAJOR DISCUSSION POINT
Collaboration with NVIDIA yields optimized reference designs for AI workloads
S
Speaker 3
2 arguments · 156 words per minute · 149 words · 57 seconds
Argument 1
Vertiv OneCore combines factory‑built power/thermal blocks with a steel shell for rapid, efficient builds
EXPLANATION
Speaker 3 explains that Vertiv OneCore integrates pre‑tested power and thermal modules into a steel building envelope, whether new or existing. This factory‑centric approach accelerates construction and improves reliability.
EVIDENCE
He outlines that “Vertiv OneCore… power and thermal infrastructure building blocks are inserted into a brand new Vertiv-supplied steel building shell, or an existing building… building blocks are made in controlled factory environments and tested before construction” [123-125].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
OneCore’s modular approach integrates pre-tested power and thermal blocks into a steel envelope, accelerating construction and improving reliability [S6][S5].
MAJOR DISCUSSION POINT
Vertiv OneCore combines factory‑built power/thermal blocks with a steel shell for rapid, efficient builds
AGREED WITH
Giordano Albertazzi
Argument 2
Vertiv’s comprehensive power and thermal infrastructure portfolio supports scalable AI deployments
EXPLANATION
Speaker 3 highlights Vertiv’s broad suite of power and cooling solutions that can be scaled from small to gigawatt‑class data centres. This portfolio enables customers to meet the growing demands of AI workloads.
EVIDENCE
He states that “Vertiv offers many solutions in this space” and then describes the OneCore modular system as part of that comprehensive offering for AI-intensive data centres [118-125].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Vertiv’s portfolio, including the OneCore system, can scale from tens of megawatts to gigawatt-class data centres, meeting the demands of AI-intensive workloads [S5][S6].
MAJOR DISCUSSION POINT
Vertiv’s comprehensive power and thermal infrastructure portfolio supports scalable AI deployments
Agreements
Agreement Points
The physical infrastructure (power and cooling) is essential for AI deployments
Speakers: Speaker 1, Giordano Albertazzi
Recognition of the critical role of infrastructure in AI discussions; AI depends on robust power and cooling systems
Both speakers stress that AI cannot function without reliable power delivery and advanced cooling, highlighting the infrastructure as a foundational element of AI systems [1][13-18][64-75].
POLICY CONTEXT (KNOWLEDGE BASE)
National and industry analyses stress that high-performance AI systems demand massive electricity and water-based cooling, a constraint recognized in disaster-management guidance and in policy initiatives such as the U.S. executive order allocating federal land and resources to expand AI data-center power and energy capacity [S29][S30][S34][S35].
Prefabricated, modular data‑center solutions dramatically shorten deployment time
Speakers: Giordano Albertazzi, Speaker 3
Prefabricated, modular solutions (e.g., Verti Smart Run) cut deployment time by up to 85%; Vertiv OneCore combines factory-built power/thermal blocks with a steel shell for rapid, efficient builds
Both speakers describe factory-built, modular approaches (Verti Smart Run and Vertiv OneCore) that reduce construction cycles by roughly 85% and enable faster, more reliable AI-focused data-center roll-outs [91-97][123-125].
POLICY CONTEXT (KNOWLEDGE BASE)
Recent keynote discussions highlighted that prefabricated, modular construction can cut data-center build times by up to 85% and reduce project risk, providing a proven fast-track approach for AI infrastructure rollout [S23][S24].
AI data‑center infrastructure must be fully integrated and orchestrated across power and thermal subsystems
Speakers: Giordano Albertazzi, Speaker 3
AI depends on robust power and cooling systems; Vertiv OneCore combines factory-built power/thermal blocks with a steel shell for rapid, efficient builds
Both speakers emphasize that power-train and thermal chain components need to work together as a single, interoperable system to support high-density AI workloads [46-48][123-125].
POLICY CONTEXT (KNOWLEDGE BASE)
Expert reports underline the need for coordinated power-and-cooling management in large-scale AI facilities, citing energy-policy challenges and sustainability goals that call for holistic orchestration of power and thermal subsystems [S29][S30][S32].
Similar Viewpoints
Both acknowledge that discussions of AI must include the underlying power and cooling infrastructure that makes AI workloads possible [1][13-18][64-75].
Speakers: Speaker 1, Giordano Albertazzi
Recognition of the critical role of infrastructure in AI discussions; AI depends on robust power and cooling systems
Both promote factory‑built, modular data‑center components as a way to accelerate AI infrastructure deployment while maintaining quality and reliability [91-97][123-125].
Speakers: Giordano Albertazzi, Speaker 3
Prefabricated, modular solutions (e.g., Vertiv SmartRun) cut deployment time by up to 85%; Vertiv OneCore combines factory‑built power/thermal blocks with a steel shell for rapid, efficient builds
Both stress the need for an integrated power‑train and thermal chain that is pre‑tested and interoperable to support extreme AI compute densities [46-48][123-125].
Speakers: Giordano Albertazzi, Speaker 3
AI depends on robust power and cooling systems; Vertiv OneCore combines factory‑built power/thermal blocks with a steel shell for rapid, efficient builds
Unexpected Consensus
Moderator (Speaker 1) aligns with technical speakers on the centrality of infrastructure for AI
Speakers: Speaker 1, Giordano Albertazzi, Speaker 3
Recognition of the critical role of infrastructure in AI discussions; AI depends on robust power and cooling systems; Vertiv OneCore combines factory‑built power/thermal blocks with a steel shell for rapid, efficient builds
Although Speaker 1’s role is introductory, the opening remarks reference digital infrastructure that powers critical applications, which mirrors the detailed technical emphasis on power and cooling by Giordano and the modular integration highlighted by Speaker 3. This alignment across a moderator and two technical presenters was not anticipated. [1][13-18][46-48][123-125]
Overall Assessment

There is strong consensus among the participants that robust, integrated power‑and‑cooling infrastructure and modular, prefabricated construction are pivotal for scaling AI data‑centers, especially in high‑density contexts. The agreement spans the moderator’s framing, the CEO’s technical exposition, and the subsequent speaker’s product focus.

High consensus on infrastructure and modular deployment; this convergence sends a clear signal that policy makers and investors should prioritize enabling environments, standards for interoperable components, and financing mechanisms that support rapid, scalable AI infrastructure.

Differences
Different Viewpoints
Unexpected Differences
Overall Assessment

The discussion shows strong consensus on the critical role of physical infrastructure, power density challenges, and the strategic importance of India for AI data‑centers. The only notable divergence is in the preferred modular deployment method (Vertiv SmartRun versus OneCore), reflecting different product emphases rather than fundamental disagreement.

Low. The limited disagreement is technical and product‑focused, implying that stakeholders are aligned on overall goals (rapid, scalable AI‑ready data‑centers) and can cooperate despite preferring different solution pathways.

Partial Agreements
Both speakers agree that faster, more efficient data‑center deployment is essential for scaling AI infrastructure, but they promote different modular approaches: Giordano highlights Vertiv SmartRun as a prefabricated solution that reduces deployment time by roughly 85% [91-97], while Speaker 3 describes the OneCore system that integrates factory‑tested power/thermal blocks into a steel building envelope [123-125].
Speakers: Giordano Albertazzi, Speaker 3
Prefabricated, modular solutions (e.g., Vertiv SmartRun) cut deployment time by up to 85%; Vertiv OneCore combines factory‑built power/thermal blocks with a steel shell for rapid, efficient builds
Takeaways
Key takeaways
Physical infrastructure (power and cooling) is the essential foundation that enables AI workloads. AI-driven GPU workloads are driving extreme rack power densities (150 kW per rack and beyond), requiring new power and thermal architectures. Modular, prefabricated data‑center solutions (e.g., Vertiv SmartRun, Vertiv OneCore) can reduce deployment time by up to 85% and improve quality and scalability. India is viewed as a strategic hub for AI infrastructure because of abundant power, favorable demographics, and Vertiv’s long‑standing presence; Vertiv plans to expand capacity there. Partnerships, especially with NVIDIA, are critical for delivering optimized reference designs and integrated solutions for AI data centers.
Resolutions and action items
Vertiv will continue to expand its capacity and presence in India to support growing AI demand. Vertiv will promote and deploy its OneCore modular infrastructure and SmartRun prefabricated solutions for faster data‑center roll‑outs. Vertiv will maintain and deepen its collaboration with NVIDIA to create reference designs tailored for AI workloads.
Unresolved issues
Specific technical road‑maps for adopting 800‑volt DC power architectures across existing data‑center fleets were not detailed. Practical implementation strategies for large‑scale heat‑reuse or heat‑extraction beyond the conceptual level were not addressed. Potential challenges related to labor, supply‑chain constraints, or regulatory approvals for rapid modular deployments were not discussed.
Suggested compromises
None identified
Thought Provoking Comments
The majority of AI conversations focus on what AI can do, but there is an important physical part of AI that is often overlooked, and it shouldn’t be because that physical part makes AI actually possible.
Highlights a neglected dimension of AI—its reliance on underlying power and cooling infrastructure—shifting the conversation from software capabilities to the essential hardware foundation.
Sets the stage for the rest of the talk, prompting a deeper dive into data‑center power, cooling, and design challenges rather than staying on AI applications alone.
Speaker: Giordano Albertazzi
Human intelligence happens in the brain, but the brain doesn’t survive without a body. Vertiv provides that body so the AI ‘brain’ can function and produce intelligence.
Uses a vivid biological analogy to make the abstract relationship between compute and infrastructure concrete, helping the audience grasp the interdependence of hardware and AI.
Reframes the infrastructure as a living system, encouraging listeners to think of power, cooling, and networking as integrated rather than isolated components, which underpins later points about orchestration.
Speaker: Giordano Albertazzi
What used to be a rack with 10‑20 kW is rapidly becoming 30, 50, 150 kW per rack, and possibly 1 MW per rack in the future.
Quantifies the extreme densification trend, illustrating the magnitude of the engineering challenge and moving the discussion from abstract to measurable reality.
Triggers a shift toward discussing new power‑train architectures (e.g., 800‑volt DC) and advanced cooling solutions, steering the conversation toward technical innovations needed to handle such densities.
Speaker: Giordano Albertazzi
The unit of compute is no longer the server; it is the pod, and ultimately the entire data center operating as a single computer, potentially reaching gigawatt scales.
Challenges the traditional view of data‑center architecture, proposing a paradigm where the whole facility is treated as one massive compute entity.
Leads to a discussion of modular, converged infrastructure and reinforces the need for integrated, scalable designs, paving the way for the later mention of Vertiv OneCore.
Speaker: Giordano Albertazzi
Vertiv SmartRun reduces the time to deploy a data center by almost 85%, an order of magnitude faster than traditional builds.
Introduces a concrete solution—prefabricated, factory‑tested modules—that directly addresses the speed and labor challenges highlighted earlier.
Provides a tangible example of how Vertiv is responding to the scaling challenge, setting up Speaker 3 to elaborate on the OneCore modular approach.
Speaker: Giordano Albertazzi
India has abundant power, the right demographics, and is emerging as a global AI hub; Vertiv will invest heavily and expand capacity there.
Links the technical discussion to market strategy, emphasizing geographic opportunity and positioning India as a critical node in the AI infrastructure ecosystem.
Shifts the tone from technical to strategic, prompting the audience to consider regional implications and leading Speaker 3 to reinforce the modular solution for rapid deployment in markets like India.
Speaker: Giordano Albertazzi
Vertiv OneCore combines the advantages of traditional sequential builds and prefabricated modular construction, inserting power and thermal building blocks into a steel shell for faster, more efficient deployments.
Synthesizes earlier points about density, speed, and integration into a single product narrative, offering a concrete pathway to the envisioned future data‑center architecture.
Acts as a turning point that moves the discussion from problem framing to a specific solution, reinforcing the earlier claims about prefabrication and modularity while expanding the conversation to product-level details.
Speaker: Speaker 3
Overall Assessment

The discussion pivots around Giordano Albertazzi’s reframing of AI from a purely software narrative to a hardware‑centric one, using analogies and quantitative trends to expose the looming challenges of power density and cooling. His emphasis on modular, prefabricated solutions and the strategic focus on India introduces both technical and market dimensions, which Speaker 3 then capitalizes on by presenting Vertiv OneCore as a concrete answer. These key comments collectively shift the conversation from abstract AI potential to concrete infrastructure strategies, deepening the analysis and steering the audience toward actionable solutions.

Follow-up Questions
How can the physical infrastructure (power, cooling, thermal chain) be optimized to support the rapidly increasing power density of AI workloads?
Albertazzi highlights the importance of the often‑overlooked physical part of AI and the challenges of extreme densification, indicating a need for deeper exploration of infrastructure optimization.
Speaker: Giordano Albertazzi
What are the technical and operational implications of migrating data center power architectures to 800‑volt DC systems?
He mentions the industry’s shift toward 800‑volt DC power, suggesting further investigation into its benefits, challenges, and deployment strategies.
Speaker: Giordano Albertazzi
How can waste heat generated by high‑density AI compute be effectively captured and reused?
Albertazzi points out heat reuse as a critical component of the thermal chain, indicating a research gap in efficient heat recovery solutions.
Speaker: Giordano Albertazzi
What are the cost, scalability, and performance impacts of using prefabricated, modular data center construction that reduces deployment time by up to 85%?
He cites a dramatic reduction in deployment time with prefabrication, prompting a need for detailed analysis of its broader implications.
Speaker: Giordano Albertazzi
What factors make India a strategic hub for AI data center investment, and how can power availability and demographics be leveraged?
Albertazzi emphasizes India’s favorable power and demographic profile, suggesting further market‑size and feasibility studies.
Speaker: Giordano Albertazzi
What are the design principles and real‑world performance results of Vertiv OneCore’s integrated power‑thermal infrastructure solution?
Speaker 3 introduces Vertiv OneCore as a hybrid of traditional and modular builds, indicating a need for more information on its architecture and outcomes.
Speaker: Speaker 3
How will the shift from server‑centric to pod‑centric and eventually whole‑data‑center compute architectures affect management, scalability, and reliability?
He notes the evolution of the compute unit, implying research into new operational models and control frameworks.
Speaker: Giordano Albertazzi
What reference designs does Vertiv provide that are specifically optimized for AI workloads, and how do they compare to competing solutions?
Albertazzi mentions partnering with NVIDIA on reference designs, suggesting a need to evaluate these designs against industry alternatives.
Speaker: Giordano Albertazzi

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Building Climate-Resilient Systems with AI

Session at a glance: Summary, keypoints, and speakers overview

Summary

The panel was convened to explore how artificial intelligence can be harnessed to address the intertwined challenges of development, climate mitigation and adaptation, which Uday described as “the triple challenge” [11-13]. He introduced the Green Artificial Intelligence Learning Network (GRAIL) as a not-for-profit effort to create a collaborative network linking academia, industry, governments and philanthropies to scale AI-driven climate solutions [29-32][54-55]. The urgency was emphasized by comparing the limited time of the session to the narrow window for climate action [17-19].


David Sandalow highlighted that AI already shows significant potential to cut greenhouse-gas emissions, while estimating that AI-related computing accounts for less than 1% of total emissions [155-157]. He categorized AI impacts into incremental efficiency gains and transformational advances such as new materials and battery chemistry [148-154]. The main barriers he identified were lack of high-quality data, shortage of trained personnel and the need for trust in AI systems [158-162]. He also warned that real-time AI deployment can introduce safety and security risks that must be managed [206-208].


Google’s climate director described how the company is using AI to improve internal operations, detecting water leaks and optimizing electricity and data-center energy use, and to open-source large satellite and weather datasets for public uses such as flood-risk mapping [284-292][298-301]. Spencer Low explained that AI models are being trained to delineate smallholder farm boundaries and identify crops, feeding into India’s digital public-goods platforms for advisory services and climate-resilient agriculture [322-331][332-336]. Nalin Agarwal and Dan Travers outlined programs that pair AI startups with utilities to modernize grids, run pilots, and provide open-source forecasting tools that can reduce reliance on expensive backup generation [366-384][400-408]. Ankur Puri from McKinsey noted that GRAIL is framing AI challenges around operational improvement, strategic foresight, transformation and autonomous operations, and is beginning to quantify both economic and emissions impact of AI solutions [446-466].


Representatives from the Alan Turing Institute and University College London added that AI has already delivered measurable emission reductions in sectors such as shipping, HVAC and cement, and that interdisciplinary “Grand Challenges” are being used to embed AI across campuses and accelerate climate-focused research [527-533][476-485]. Across all speakers there was consensus that scaling AI for climate requires open data, cross-sector collaboration and rapid deployment of pilots, echoing Uday’s call for “radical action-oriented collaboration” [28][538-540]. The discussion concluded that, while time is limited, coordinated AI-driven initiatives can simultaneously create economic value and decarbonize multiple sectors, underscoring the significance of the collaborative effort [533-540].


Keypoints

Major discussion points


The “triple challenge” – development, climate, and AI – and the birth of the GRAIL network – Uday frames the panel around promoting development while creating a sustainable planet and leveraging AI, calling it “perhaps the most important challenge” [11-16]. He then introduces the Green Artificial Intelligence Learning Network (GRAIL) as the vehicle to explore synergies between the development agenda and the climate agenda through AI [30-33][31-33].


A critical lack of cross-sector dialogue and the need for a collaborative ecosystem – Participants note that AI researchers, industrial emitters, and climate experts were “not talking to each other” [46-52]. GRAIL is presented as a “collaborative network” that brings together academia, industry, philanthropy, and governments to scale solutions [64-68]. The 2022 summit gathered 200 people from 115 organisations, produced taxonomies for high-impact sectors, and sparked partnerships with entities such as McKinsey, the World Business Council for Sustainable Development, and national coalitions [71-79][84-89].


AI’s quantified climate impact, its modest carbon footprint, and the barriers to wider adoption – The report cited by David Sandalow finds that AI can deliver “significant potential to contribute to reductions in greenhouse gas emissions” while its own emissions are estimated at “less than 1 percent of total GHGs” [146-158][154-156]. The main obstacles are “lack of data, lack of trained personnel, and trust” [158-162].


Sector-specific AI use cases highlighted by the speakers


Power and grid: AI can detect methane, predict weather for renewables, optimise power flows, and simulate battery chemistry; the grid’s growing variability demands AI-driven scheduling [170-188][196-204].


Agriculture & food systems: Google’s AI maps farm boundaries, identifies crops, and detects events (tillage, harvest) to feed public-good platforms for governments and NGOs [306-332].


Data-center and infrastructure decarbonisation: Google’s carbon-free energy goal, optimisation of water and electricity use, and open-sourcing of satellite and flood-risk data [260-270][285-292].


Materials and built environment: AI-accelerated simulation of battery chemistry and material discovery, with transformational gains cited [218-221].


An urgent call for “radical, action-oriented collaboration” – The moderator repeatedly stresses the limited time to act on climate (“very little time we have to do something about climate change”) [17-20] and frames the session as “an invitation for radical action-oriented collaboration” [28-29]. This urgency is reiterated later (“we are inevitably vastly behind schedule… we will keep going with great focus”) [355-357] and closed with a plea for collective effort [538-540].


Overall purpose / goal


The panel is designed to catalyse rapid, large-scale collaboration among AI researchers, industry leaders, governments, and NGOs to develop and deploy AI-driven solutions that simultaneously advance economic development and achieve climate mitigation and adaptation. By showcasing the GRAIL initiative, sector-specific pilots, and partnership models, the discussion aims to move from analysis to concrete, scalable action.


Overall tone


The conversation is high-energy, enthusiastic, and forward-looking, with frequent expressions of excitement and gratitude (“very exciting sessions,” “your energy… infectious”). It carries a strong sense of urgency (“very little time,” “we are behind schedule”) and a collaborative spirit (“radical collaboration,” “we’re all in this together”). While the tone remains optimistic about AI’s potential, it also acknowledges challenges (data gaps, trust issues) and uses occasional humor (“apology… I don’t apologize”) to keep the mood lively despite the time pressure.


Speakers

Uday Khemka – Moderator/Host; involved with the Green Artificial Intelligence Learning Network (GRAIL) and championing AI-climate collaboration [S21][S22].


David Sandalow – Professor; former senior government official now focused on AI solutions for climate change; speaker on AI-climate mitigation and adaptation [S7].


Ankur Puri – Partner, McKinsey & Company (India); leads Quantum Black (McKinsey’s AI practice) and works across energy, built environment and other sectors [S1][S2][S3].


Adam Sobey – Director for Sustainability, Alan Turing Institute (UK’s National AI Institute); leads the institute’s sustainability mission and AI-for-environment research [S4][S5][S6].


Dan Travers – Founder/Representative, Open Climate Fix (non-profit AI-for-grid startup); focuses on AI-driven grid modernization and renewable integration [S8][S9].


Vrushali Gaud – Global Director of Climate Operations, Google; leads Google’s decarbonisation, water and circularity strategy and oversees climate work across data-centres, clean-energy procurement and AI-enabled sustainability initiatives [S14][S15][S16].


Spencer Low – Google representative (likely in the Agriculture & Food Systems team); works on AI-driven agricultural landscape understanding, digital public goods and climate-resilient farming solutions [S17].


Nalin Agarwal – Founding Partner, Climate Collective; partners with UNESA and supports AI-enabled grid modernization and startup incubation across the Global South [S18][S19].


Speaker 1 (Rob) – Representative from University College London (UCL); discusses UCL’s Grand Challenges, AI-enabled sustainability research across campus, built environment, cement, aviation and sea-ice classification [transcript].


Additional speakers:


Sean – Mentioned as a time-keeper/organiser; no specific role or expertise detailed in the transcript.


Full session report: Comprehensive analysis and detailed insights

The session opened with Uday Khemka framing the discussion as a response to the “triple challenge” of development, a sustainable planet and artificial intelligence (AI), which he called “perhaps the most important challenge any of us will face in our lives”, and warning that the panel’s brief time mirrored the narrow window for climate action, a point he illustrated with a “two-hockey-sticks” metaphor describing the rapid rise of AI alongside accelerating climate risks [11-13][17-19]. He positioned the gathering as “an invitation for radical action-oriented collaboration” [28-29].


Khemka then introduced the Green Artificial Intelligence Learning Network (GRAIL), a not-for-profit initiative that seeks to uncover synergies between development and climate agendas through AI [30-33]. He explained that GRAIL aims to create a “collaborative network of great academic institutions, commercial institutions, AI companies, industrial companies, philanthropic institutions, private-sector sustainability networks… bringing them all together with governments” [64-68]. The 2022 summit that preceded the panel brought together 200 participants from 115 organisations, produced taxonomies for high-impact sectors and forged partnerships with entities such as McKinsey, the World Business Council for Sustainable Development (representing 24% of world revenues and 26% of global GHGs) and national coalitions [71-79].


A recurring theme was the historic lack of dialogue between AI researchers, industrial emitters and climate experts. Khemka noted that “people were not talking to each other” despite extensive outreach to both the AI community and heavy-emitting sectors, with only a few exceptions such as Google [46-52].


Uday Khemka presented the Grantham Institute’s estimate that data-centre emissions amount to 0.5-1.4 Gt CO₂e, while AI-enabled solutions could avoid 3.5-5.4 Gt CO₂e, indicating a net-positive balance [62-63]. David Sandalow later observed that this study “tracks with” his own analysis that AI-related computing accounts for less than 1 percent of global greenhouse-gas emissions [146-156][155-157].
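The net-positive balance implied by these ranges can be checked with simple interval arithmetic. The figures below are the Grantham Institute estimates quoted above; pairing the least-favourable ends of both ranges (an illustrative assumption, not part of the study) still yields a positive net benefit:

```python
# Bounds on the net climate balance implied by the Grantham Institute
# figures quoted above (all values in Gt CO2e).
emissions_low, emissions_high = 0.5, 1.4   # data-centre emissions
avoided_low, avoided_high = 3.5, 5.4       # AI-enabled avoided emissions

# Worst case pairs least avoidance with most emissions; best case the reverse.
net_worst = avoided_low - emissions_high   # ~2.1 Gt CO2e avoided
net_best = avoided_high - emissions_low    # ~4.9 Gt CO2e avoided

print(f"Net avoided emissions: {net_worst:.1f} to {net_best:.1f} Gt CO2e")
```

Even under the worst-case pairing the balance stays positive, which is the arithmetic behind the “net-positive” characterisation.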


Sandalow outlined a four-pillar AI capability framework (detect, predict, optimise and simulate) and showed how each function can be applied to climate challenges. He illustrated pattern detection for methane leaks, weather prediction for renewable generation, optimisation of power flows and simulation of battery chemistry [170-188]. He also warned that real-time AI deployment can cause security and safety risks, and cautioned that generative AI in grid contexts requires particular care [206-208].


Sector highlights


Power & grid: AI can detect methane emissions from satellite data, forecast weather for solar and wind farms, optimise transmission flows and simulate battery behaviour [170-188][196-204][400-408]. Dan Travers stressed that the modern grid’s variability, driven by millions of distributed generators, electric vehicles and data-centre loads, requires AI-driven real-time scheduling to avoid blackouts and costly backup generation [400-408].


Agriculture: Spencer Low described AI models that delineate smallholder farm boundaries, identify crops and detect events such as sowing or harvest; these outputs now feed India’s Krishi DSS and state-level platforms, enabling NGOs and governments to provide climate-smart advisory services [318-332][322-329]. He highlighted the open-source nature of the data as a “digital public-goods” resource for innovators [335-338].


Built environment & materials: AI-accelerated simulation enables rapid testing of battery chemistries and novel materials, offering transformational gains that could dramatically cut emissions [218-221]. Rob (University College London) explained that UCL’s Grand Challenges programme, a self-funded, cross-faculty initiative spanning all 11 UCL faculties, embeds AI across campus energy management, cement-process optimisation, real-estate sustainability and sea-ice monitoring [476-485][486-492].


Google’s Global Director of Climate Operations, Vrushali Gaud, announced the launch of a Google Center of Climate Tech in partnership with the Office of the Principal Scientific Adviser to the Government of India, targeting low-carbon steel, sustainable aviation fuel and green-skill development in tier-2 cities [350-353]. Internally, Google pursues a “carbon-free energy” goal for its data-centres, optimises water-leak detection, electricity use and grid interactions, and seeks to site facilities with minimal community impact [260-276][284-289][298-304]. Externally, it open-sources large satellite and weather datasets (e.g., Earth AI, Flood Hub) that enable flood-risk mapping and other climate-resilient services [292-295][290-295]. The partnership with UNESA adds 71 energy companies representing 750 GW of clean power with a target of 1,500 GW by 2030 [298-304].


Nalin Agarwal, co-founder of the Climate Collective, described an “open-innovation programme” that pairs AI startups with utilities across the Global South. Since its inception, the programme has engaged 22 utilities, generated roughly 20 pilots (with a 30% conversion to large-scale deployments), and is evolving into a three-component platform (an open-innovation pipeline, a knowledge hub and an online solution database), forming an AI-for-Power Innovation Platform [364-390].


Dan Travers of Open Climate Fix argued for fully open-source, non-profit AI tools to ensure transferability across grids. He reported that his team has built what they consider the best solar-forecasting model in the UK (20-30% more accurate) and are now deploying it in India with partners such as Adani and the Rajasthan Grid Operator [416-420][418-420]. This open-source stance contrasts with Google’s more selective data-sharing approach, hinting at a tension over the degree of openness required for rapid scaling [284-289][416-420].


McKinsey’s Ankur Puri framed GRAIL’s work around four strategic challenges (operational improvement, strategic intelligence, transformation and autonomous operations) and announced ongoing efforts to quantify both economic and emissions impact of AI use-cases. By attaching monetary and carbon-value metrics, McKinsey aims to direct scarce resources toward the highest-impact interventions [446-466].


The Alan Turing Institute’s Adam Sobey provided concrete evidence of AI’s near-term climate benefits: an 18% emissions reduction in shipping, a 42% cut in building HVAC emissions and the creation of a renewable-energy-powered underground urban farm [527-530]. He stressed that these achievements are possible only through global collaboration, noting support from the Lloyd’s Register Foundation for work in the Global South [518-525][531-532].


Across the panel, several points of agreement emerged. Both Khemka and Sandalow affirmed that AI’s net climate benefit exceeds its own emissions [62-63][146-156]; multiple speakers highlighted the necessity of “radical, action-oriented collaboration” [28-29][355-357]; and the four AI capabilities (detect, predict, optimise, simulate) were repeatedly cited as the technical backbone for sectoral applications [170-188][400-408][318-332][284-289]. Disagreements centered on openness and sector prioritisation: Travers advocated fully open-source tools, whereas Google’s strategy remains partly proprietary; Khemka called for a broad, multi-sector effort, while Low foregrounded agriculture, Travers emphasised grid reliability and Puri urged data-driven prioritisation rather than a fixed sector focus [54-59][311-332][400-408][464-467].


Thought-provoking remarks shaped the dialogue. Khemka’s observation that “people were not talking to each other” exposed a systemic silo problem [46-52]; the Grantham Institute’s emissions balance countered common criticism of AI’s carbon cost [62-63]; Sandalow’s concise AI capability taxonomy provided a memorable framework [170-173]; his identification of data, talent and trust as key barriers shifted attention to capacity-building [158-162]; Gaud’s description of Google as a “full-stack” climate actor illustrated how a tech giant can embed sustainability across its value chain [260-276]; Low’s demonstration of AI-driven farm-boundary mapping linked climate mitigation to livelihoods [318-332]; Travers’s warning that without AI the grid will face blackouts and public backlash highlighted the social dimension of technical solutions [400-408]; and Puri’s emphasis on quantifying impact introduced a disciplined, measurement-first approach [446-466].


The panel concluded with actionable recommendations: participants were urged to join GRAIL’s online collaborative platform, deepen engagement with governments, scale the Climate Collective’s AI-for-Power Innovation Platform, and finalise McKinsey’s economic-emissions quantifications [354-360][364-390][446-466]. Google committed to operationalise its Center of Climate Tech in India, focusing on low-carbon steel, sustainable aviation fuel and green-skill development for tier-2 cities [298-304][350-353]. Open Climate Fix pledged to expand its open-source solar-forecasting tools to additional grids [416-420]. Unresolved issues include securing standardised high-quality data for the Global South, building a scalable pipeline for AI-climate talent, establishing governance for generative-AI safety, and developing sector-specific roadmaps for the built environment, industry and transport [158-162][298-304][322-329][400-408][464-467].


Follow-up questions that chart a roadmap for the collaborative effort: optimal siting of data-centres to minimise community impact (Gaud); mechanisms to democratise data and accelerate AI scaling (Gaud); strategies for embedding green skills in tier-2 Indian cities (Gaud); approaches to address data scarcity in the Global South (Sandalow); pathways for training AI-climate personnel (Sandalow); methods to build trust and explainability in AI models (Sandalow); standards for power-sector data to enable AI tools such as dynamic line rating (Sandalow); security and safety safeguards for real-time AI grid operations, especially generative AI (Sandalow); scalable farm-boundary mapping techniques (Low); frameworks for evaluating economic and emissions impact of AI solutions (Puri); design of an AI-for-Power Innovation Platform (Agarwal); integration of AI into the built environment, materials and transport (Khemka); and expansion of AI-driven flood-risk mapping (Gaud) and solar-forecasting tools (Travers) [298-304][322-329][350-353][158-162][206-208][311-332][464-467][416-420].


Session transcript
Complete transcript of the session
Uday Khemka

Very exciting sessions. I’ll just wait. Guys. So we are meeting for a tremendously important subject. And this has been a great summit. I know you’re all energized, inspired, excited and exhausted at the same time. And we will get a moment when this subject becomes a room of 5,000. So that’s what we’re going to work towards. But you’re here today. We’re delighted to have an absolutely tremendous panel with us today. I’m deeply honored; they’ve flown across from the U.S., from Europe, from Singapore and so forth. And we have a lot of material to cover. I should say that the triple challenge that we are dealing with in this panel is perhaps the most important challenge any of us will face in our lives.

Which is to promote development on the one side, while dealing with the creation of a sustainable planet and, in terms of climate change. You’re a self-selecting group; you’re all here with us and there’s a reason for that. You already know about climate change. You already know about AI. The thing is, as you know, we are not necessarily winning the battle on climate as yet, and so we need to deal with both mitigation and adaptation, and this panel will address both of those two things. We have very little time in the panel, so I am going to speed along, but that’s a good metaphor for the very little time we have to do something about climate change.

So we’re in action mode. It’s a call for collaboration. I apologize to our speakers and panelists for a number of things. One, this is not a real panel. We’re not going to be having discussions. This is just boom, boom, boom, talking to you about what everyone’s doing. Secondly, there’s going to be a kind of switcheroo moment when some other speakers come up and some of us are replaced up here. Apologies for that. It’s just the intensity of the panel, and for all those things I apologize in advance. But I don’t apologize for the incredible quality of our panelists today. These are amazing people. And I would just end by saying that this is not a normal session.

This is an invitation for radical action-oriented collaboration with all of you. On that basis, let me begin by talking a little bit about a summit we held last year in London and the background to it, and an organization that some of us are very deeply involved with and almost everyone here is a friend of, called the Green Artificial Intelligence Learning Network. You will immediately note it has the cutesy acronym of GRAIL, like the Holy Grail. And what we’re trying to do is really see what the synergy is between the development agenda and the climate agenda through the application of AI. I’m going to speed through this. We’re going to then move to Professor David Sandalow, who actually anchored our summit last year, which was the first major global summit on the application of AI to climate change.

and has very kindly flown in from Columbia. I’m going to ask our speakers one more favor, which is instead of my coming up and introducing all of you, if you don’t mind introducing yourselves, that will speed us along the way. So let’s go through this. So as you all know, and perhaps Professor Sandalow will talk about it, the IPCC gave us a target: 43% decarbonization from 2019 levels by 2030. We were meant to reduce GHG emissions by that amount. In 2021, some of us at COP26 in Glasgow had a meeting to look at the likelihood of that happening. And we came to the conclusion that the likelihood was very low. And therefore, traditional approaches to climate mitigation and adaptation needed to be enhanced with new solutions.

And we thought, what was J-curving as fast as climate change was J-curving? And the only thing we could think of was the application of AI, this great new suite of technologies, including, of course, quantum and all the other things that go with it. And we started to talk to people, and we talked to a whole bunch of people in the AI community, a whole bunch of people in the industrial and power, automotive, all the different sectors that produce emissions. And we said, are you talking to each other? And shockingly, people were not talking to each other.

There’s very little going on, with some honorable exceptions at Google. Very few people were really in the AI community focusing on downstream issues around climate change. And similarly, the big industrial domains were not really focused on the use of AI for decarbonization and economic value creation. So with that lens, think about this session as throwing one J curve against another J curve. Can we throw the crazy increase in AI technology represented by this great summit against the world’s greatest challenge? That’s the purpose of the Grail organization, which is a not-for-profit based in London. It’s a vast terrain. We don’t have time to cover it all. It’s obviously mitigation. It’s obviously adaptation. We have to hit both.

And within that, there are endless taxonomies of all the wonderful things that AI can do. And, of course, you’ll be worried about the increased GHGs from data centers, but that’s not the primary focus of our session today. That has been quantified. The Grantham Institute did a quantification last year of 0.5 to 1.4 gigatons of extra GHGs from data centers, and that’s from every kind of utilization, against the potential benefit of 3.5 to 5.4 gigatons of GHG emissions being sucked out. So there’s clearly a very strong balance towards what AI can do to help the planet in its shift towards a clean and green economy. Grail, what’s Grail? Grail is an attempt to create a collaborative network

of great academic institutions, commercial institutions, AI companies, industrial companies, philanthropic institutions, private sector sustainability networks like WBCSD, bringing them all together with governments to try and create massive collaboration. In the next slide at the bottom, you see that same group of institutions. Bottom right, the ideas and deal flow. Going back into Grail, bottom left, the fact that this becomes a collaborative community to get all these solutions scaling at speed and at the top, then getting that deal flow funded through grants, through government programs, through venture capital, corporate funds, but to move the agenda to real solutions at massive scale as quickly as we possibly can. All of this led to a summit that occurred that I mentioned earlier last year.

Sean, will you keep me real on the time? Thank you. Okay. And that led to 200 people, 115 organizations, including all the organizations represented here today, 60 speakers, and we looked at AI for power, AI for building materials, AI for everything you could think of vertically and horizontally, looking at the issues of materials innovation, looking at the issues of value chains, looking at carbon markets, and so forth. What has happened after the summit? Three things. One, we’ve created an online collaborative platform, and we invite all of you to join it, to co-create those solutions that can make a difference. Second, we’ve started to engage with governments around the world. Imagine a summit like this that was focused, yes, on development, but with a central climate focus as well.

How amazing would that be? And most importantly, we’re focusing on taxonomies that lead to massive calls for action from the innovation community. So we started to work on taxonomies for a variety of sectors, the energy sector, the built environment, materials innovation, and we worked with groups of AI experts and power experts and figured out what the big wins were, what are the big opportunities for companies to create economic value while at the same time massively decarbonizing. And this was an intellectual process, including many experts, some of whom are here today, that led to this astonishing work and identified the big win-win opportunities for economic value creation and decarbonization. On the bottom right, the teams from academia, industry, industry associations, a variety of other people and countries; eight country teams as well were involved in various ways.

So where’s this all going? Well, thank you to McKinsey for your very kind collaboration to after we had done all that work saying, hey, we want to help and kicking in and working with us to further refine those offerings and look at cost benefits and cost curves and all sorts of things. Delighted about it. And then there are, apart from working on the power sector, to look at generally what we can do, apart from working on the built environment, generally what we can do, apart from looking at materials innovation, generally what we can do to accelerate solutions for decarbonization through AI. We have two big partnerships that are emerging. One, and I want to slow down here.

Okay, 250 companies are part of the World Business Council, their network. That represents, in scope one, two, and three, 26% of world GHGs. They’ve realized that that’s mainly in supply chains. They are going into a partnership, so is McKinsey, so are other partners, to look at what are the AI opportunities to take startups and scale-ups into these decarbonization opportunities at massive scale with the 250 largest companies in the world, representing 24% of world revenues and 26% of world GHGs. Finally, working with coalitions of energy companies, and Nalin, who you’ll hear from later on, we’re deeply partnered with Nalin and others on this. How can we take this into accelerating? For example, UNESA has 71 energy companies, 750 gigawatts of clean power.

They want to go to 1,500 gigawatts of clean power by the end of the decade. How can we help them with AI? It’s a very practical lens. We invite you to join us and be part of this. On that note, I would like briefly to invite Professor David Sandalow. David, I’m not going to go through your very distinguished background, it would take the whole panel to do that, except to say that you have worked in every different field, most importantly in the past in very senior positions in government, but now, of course, you’ve been the luminary on AI solutions on climate. That is the worst introduction you’ve ever got. I’m sorry about that, just in the interest of time, but I’m…

Really very honored that you’ve flown all the way to come here, and I’m handing over to you.

David Sandalow

Thank you so much, Uday. Uday, thank you. Your energy, your enthusiasm, your passion, they’re all infectious. And your intellect is remarkable. What you’re driving forward in this area is world changing. You are not just an inspiration, you’re a gravitational force that is pulling people together to work on this, so thank you so much. What you did in London was remarkable. What you’re doing here is incredible, and I’m looking forward to being part of what you’re doing in the future. So it’s been my privilege to lead some teams that have been working on these issues over the course of the past couple of years, and I’m going to talk about one of the projects that we’ve done.

It looks like the slides are not there. Someone is turning on the screen. There it goes. While we wait, I’ll say that I really like the metaphor that you had, Uday, about two hockey sticks. And this is just a remarkable convergence of two of the most important trends that are happening in human history right now. One of them is, alas, the increase in greenhouse gas emissions, which is happening at such an astonishing pace. But the second is the exponential growth of the capacities of artificial intelligence. What’s driving me is we need to find a way to make sure that that second trend, artificial intelligence, helps to solve the first problem.

And that’s the study that we did, for which we brought together a team of 25 experts. Just wonderful people. One of them was Hoesung Lee, the last head of the Intergovernmental Panel on Climate Change, along with some other top experts. And the question we asked was very simple: how do you use artificial intelligence to reduce greenhouse gas emissions? There it is. It came up on the screen. Thank you so much. I appreciate it. And so it’s a very simple question. How do you use AI to reduce emissions of greenhouse gases? We came up with 17 chapters. We wanted to do more than just provide analysis. We wanted to provide actionable ideas for what to do.

So every chapter has recommendations. You can find a print version available on Amazon, and free downloads of the entire volume, including chapter-by-chapter versions, are available at these websites. I want to thank the government of Japan, including NEDO and METI, for supporting this work. They’ve been very important supporters of work on AI and clean energy more broadly. Oh, I’m going to promote my podcast later. But I have a podcast that’s talking about this topic as well. So here’s the table of contents for our work. We have introductions to both AI and climate change in this volume. One of the things we’re trying to do is target this both to experts and to people who are beginners in this topic.

And, you know, Uday talked about bringing together different communities. One of our basic conclusions was we need to bring together experts in climate change and experts in AI. And there are a lot of people who know a lot about climate change but don’t know a lot about AI. A lot of people who know a lot about AI, they don’t know about climate change. So we decided to have primers on each of those topics. And then we talk about eight different sectors and a number of cross-cutting topics. So we have five key takeaways. This was an interesting exercise with all of our authors, taking 300 pages and trying to distill it into five key takeaways. But here’s what we came up with.

The first one, I mean, this is a kind of bottom line, but it’s important. AI does have significant potential to contribute to reductions in greenhouse gas emissions. And we categorized it with two categories. One is incremental gains, such as just improving efficiency: increasing output at solar farms, building energy efficiency. There are lots of incremental gains that can be made. And then we have the other category, transformational gains, in particular new tech, new materials and other things. We also looked at whether greenhouse gas emissions are increasing as a result of computing operations. We decided, based on the available evidence, that the best estimate is less than 1 percent, and maybe much less than 1 percent, of greenhouse gas emissions are currently coming from AI.

That tracks with the Grantham study that Uday talked about. That tracks with what the IEA has said as well. The main barriers to AI’s impact in reducing greenhouse gas emissions are a lack of data and a lack of trained personnel. There are other barriers as well, but obviously you need data, and in a lot of places we don’t have the data for this purpose, and you need people. Trust is essential. People aren’t going to use AI unless they trust it. And then every organization with a role in climate change mitigation should consider opportunities for AI to contribute to its work. I think as AI grows in the public consciousness at summits like this, that’s becoming less and less of a kind of radical recommendation.

But it’s just so important. I think if you’re working in climate mitigation, you need to have a team dedicated to AI and how AI can help. So I’ll just run through quickly because we only have a little bit of time, some of our chapters. We have a chapter on the introduction to AI, and if you’re a climate person that doesn’t know a lot about AI, this might be helpful to you. And one of the things we did is we broke down AI capabilities into four basic categories at a very high level. The first thing AI can do is detect patterns. And how can that be helpful in climate change mitigation? Well, one example is detecting methane emissions from satellite data.

You know, some of you probably know this, but we know much, much more today than we did 10 years ago about methane emissions. And that has helped us dramatically to begin to reduce methane emissions. That’s entirely dependent upon the optical sensing process, and we’ve been able to do that over the last 10 years, with real impact so far. AI can also predict, such as weather patterns at solar and wind farms. It can optimize, such as power flows on transmission lines. And it can simulate, such as battery chemistry action. So I think for me, in fact, I’m teaching a course at Columbia right now where we’re emphasizing this framework of detecting, predicting, optimizing, and simulating. And those are, broadly speaking, the capabilities that AI brings to the table. A lot to say about climate change, but just for those who aren’t paying attention, atmospheric concentrations of heat-trapping gases are now higher than at any time in human history. In fact, higher than at any time in the past three million years.

And July 22nd, 2024, was the warmest day ever recorded. 2024 was the warmest year ever recorded, by far. And the warmest 11 years ever recorded were the last 11 years. So we are living in an era of climate change. We do deep dives into a number of different sectors. I’m just going to talk about a few of them. The power sector is maybe the most important, just because it’s already 28% of greenhouse gas emissions, and our strategy for reaching decarbonization requires us to electrify lots of things. So we need to grow the power sector and decarbonize the sector at the same time. I don’t think we’re going to be able to do that without AI tools. AI is already helping decarbonize the power sector, optimizing location of generation and transmission, increasing output at solar farms, but it can do much more.

Dynamic line rating, optimal power flow analyses. But to do this, we need standardized data. We need trained personnel. The utility business model is a challenge. So this is a really important area that requires a lot of attention and work. Oh, and a final point, the last bullet on this slide. Using AI in real-time operations can cause real security and safety risks. So we need to be very careful about generative AI in this context. Even as we look to deploy AI to help reduce greenhouse gas emissions, we need to be very attentive to these risks. I kind of find it amazing how few people pay attention sometimes to food systems and climate change, that 30 percent or more of greenhouse gases are in some way related to the food system, and the food system is threatened by climate change.

AI can do a lot to improve both mitigation and resilience in the food system. There are a few examples here: integrating data from soil sensors to create fertilizer management plans, creating virtual farms. There are lots of things that can be done here. But coming back to this issue of lack of data, it’s a huge problem, especially in the Global South. So the efforts to build up a digital public infrastructure that are happening here in India are so important in this regard. I’m going to go quickly here. We look at buildings, where there’s tremendous potential. I think materials innovation is one of the most important areas. And, you know, 150 years ago, when Thomas Edison invented the modern light bulb, he literally spent a year running electricity through dozens, I think hundreds, of different filaments to figure out how much light and heat would be produced.

So today, we can simulate a million of those interactions in a second. And there are already tremendous advances in the pace of innovation in battery chemistry and some other areas using AI tools. And for me, this is one of the most promising areas in terms of transformational gains in reducing greenhouse gas emissions. Extreme weather response is extremely important from a resilience standpoint, and we don’t have a lot of time to get into it, but I think that AI/ML-enabled forecasting is transformational because it’s so much cheaper. At 1,000x less cost, we can run AI/ML weather prediction tools and make a big difference on extreme weather response.

We have findings and recommendations throughout this report. You can see it here, again. We just did a new report in the same series on sustainable data centers. And our main message is that with this data center construction boom happening now, this is the time to be paying attention to data centers and sustainability. We are investing right now in multi-decade assets. We need to be paying attention to this. Smart siting is key. And finally, here’s a plug for my podcast. It started about a year ago. I’ve had some great guests: Jensen Huang; Damilola Ogunbiyi, the head of Sustainable Energy for All; Jennifer Granholm, the U.S. Energy Secretary under Biden. Listen, as they say, available on all major podcast platforms.

Uday, once again, thank you.

Uday Khemka

I feel horrified we’ve got speakers of this caliber and so little time. So thank you so much for your leadership. May I invite both of you to speak? You can speak from here if you prefer. We’ve got two great leaders from Google. Obviously, you know, in the sphere of corporate AI leadership on climate, there is no one that parallels all of you, and we look forward to hearing your thoughts. Thank you.

Vrushali Gaud

Thank you very much for hosting this. I don’t know if that’s a privilege or that’s pressure when you start with that sentence about the leadership position Google has. I just want to ask a quick question. Raise of hands, how many of you have used Google today for either maps or searching something? Thank you. So you know who we are. That’s my cue. I’m Vrushali Gaud. I’ll introduce myself, and then Spencer, you can answer that. I lead, in a nutshell, Google’s decarbonization, water and circularity strategy for the company. Essentially what that means is I’m responsible for quite a few things that you had in your slide that we should be doing. A good way to introduce myself is also that I like getting things done, and so I feel like my inner calling around this is: we’ve had a lot of conversations, we’ve had a lot of playbooks and research and things. But it’s almost like, how do you act on it?

How do you execute on it? How do you start delivering the outcome that I think we all are looking for? So that’s the kind of space that I come from. And it’s a privilege to be at Google, which allows us to kind of expand that space. So the reason I asked you all to show your hands: most of you know Google as search or Maps and similar pieces, an information source. One of the other things Google is now, I think, as a company, is a full-stack company. And when I say full-stack, that means the search and the information is the top layer of it. But underneath that sits the entire physical infrastructure that drives that.

And so that’s data centers. That’s the way you operate that. That’s the networks that feed into all of the applications. And so when we look at climate, and my title actually is Global Director of Climate Operations, I say that out of humility, because when we look at climate, we’re trying to put it across our operations the best we can. Thank you. Good examples of that: I’ll start with data centers. The big topic right now, how do we operationalize them? Where do we site them? The location, what impact it has on the community, what impact it has on the infrastructure there. Siting is a big part of it. Access to clean energy is something we’re looking at, and pretty much we have a carbon-free energy goal.

So I think for us, if you look at climate, a big portion of climate is emissions. How do we impact emissions? It’s from electricity. What do we do with electricity? Shift to clean energy, or renewables. And so that’s the spectrum that we look at. And so a lot of our investments are in carbon-free energy and how we think about it. And it’s also not just take from the grid or expect the government or, you know, sort of the infrastructure to get you there, but how do we invest and bring more clean energy to the grid? I think that’s a big piece of what companies can do at the speed at which we are all moving: how do we take these sort of bigger-picture systems problems and embrace them and solve them?

So one is generation of clean electricity, and the other is grid, and how do you solve the grid problems? So that’s the infrastructure of AI. Then there’s using AI, and this gets to some of the other things, Professor, that you were saying: we look at how we could use AI to drive our operations more efficiently. It’s very boring pieces. It’s not really shiny superstar things, but a lot of the impact comes just in general. I look at water taps, and I remember the amount of leakages we have on water taps, the amount of electricity wires that are not connected. Just the inefficient use of resources is a big one, and how can we use AI to sort of optimize, whether it’s optimizing the use of our chips, optimizing the grid, optimizing which applications run from where. That’s a big part of our strategy.

That’s a big part of our strategy. And then the third piece is, what do you use AI, and how do you use it for climate? Now, clearly, our business is information and search, but which means we also have access to a lot of data. And so one of the ways we consider, as what you can do in AI is, how do you use these large data sets? A, find a way to open source them, encourage different use of them, but also incubate certain initiatives that can help to show the light to others. So Earth AI is a big one in which we you’ve got satellite images, you’ve got weather data, you’ve got all of these big chunks of information that we can put out there.

And then there’s an application layer, which I think is of interest to you in terms of resiliency or mitigation. So one of the things which you probably haven’t heard of as much is Flood Hub. We have a lot of information put out there as to flood risks of different regions, which then other companies can use for whatever products they’re launching, whether it’s insurance, whether it’s real estate. FireSat, wildfire risks: how do you do prediction around it? Utility companies, especially in California, where I’m based, are very passionate about using that for prediction. I can go on about the list of sort of what data can be used and how it can be leveraged.

The thing is, I’m going to go back to the, you know, crux of what you had in this, which is we’re in the timeframe of two hockey sticks. One is the impact on emissions, and I completely appreciate that the tech companies, hyperscalers, data centers are at a scale contributing to it, which we want to obviously help mitigate or replace with clean electricity as much as we can. And then the other is, how do you use the innovation curve on this? And I think we’ve just scratched the surface. And there’s, of course, like, you know, there’ll be trials and errors, but the surface around how do we democratize data, how do we encourage innovation, and how do we scale it very quickly?

Because I think those are the three, the trifecta of how do you drive this change? And so, one of the ways, and I’ll end with saying I’m super proud of what we’ve done this week trying to bridge those two gaps: we’re working with the Principal Scientific Adviser to the Government of India to launch a Google Center of Climate Tech. We call it Climate Tech because those are the two hockey sticks that you’re trying to get in, right? The tech scale and the climate impact. And our goal is to encourage academic research, but research that is actionable: five pilots, first of their kind, and how you can scale them. There’s a lot of focus already on electricity, so we are trying to do the non-electricity pieces in that, which is around low-carbon steel, low-carbon materials and built environments, and low-carbon sustainable aviation fuel. And then the biggest one, which I think is a big lever across everything, is green skills. You need to embed this sort of thinking, which is green, climate first, across every domain, and how can we encourage that in India, and especially the tier-two cities. So super excited about those two hockey sticks and how we as a company can bridge those gaps.

Spencer Low

The intensity of what is produced in this part of the world actually is really important globally. But what’s really distinctive about APAC is actually the third major topic, which is livelihoods. As I mentioned, this is the part of the world which has a lot of developmental ambitions, and livelihoods are key. So my colleague Vrushali touched on Chapter 3, power systems. I would like to touch on agriculture and food systems, which is your Chapter 4. So agriculture and other land use is actually the largest employment sector in the Asia-Pacific. I believe in India it’s about 46% of jobs. And for the region, it’s the largest sector, about the same as the next two sectors added together, which are actually manufacturing and wholesale and retail trade.

Add those together, you get the same number of jobs as in agriculture. Now, over 80% of farms around the world, and especially in India and the rest of the Global South, are smallholder farms. And that creates an issue, because a lot of the technology for agriculture is developed for large commercial farms, satellite imagery, et cetera. So this is one example I’d just like to delve into in terms of what Google is doing to contribute to the data, the digital public goods that Vrushali spoke of. So if you want to use satellite imagery and actually understand agriculture so you can do things with it, you need to find the boundaries of your individual farms. That’s your individual unit, and it’s often less than two hectares, if not smaller.

And so you can do that with people poring over maps or satellite imagery, but that’s not scalable. But this is a really interesting problem for AI. And so for those of you who’d like to know more about this, there’s actually an exhibit at the Expo at the Google Pavilion. This is what we call agricultural landscape understanding and agricultural monitoring and event detection. So we’ve trained AI to actually delineate the field boundary digitally. And you can say, well, that’s interesting, because you can zoom into India and look at the Indo-Gangetic plain and see all the field boundaries. We’ve also trained the model to distinguish what crops are being grown through multispectral imagery.

And with that, we can detect events like tillage, sowing, harvest, et cetera. And all this data is now available. It is part of the Krishi DSS. So it’s contributing to the digital public infrastructure of the Indian government through the Ministry of Agriculture, and through state governments, for example, like that of Telangana and the ADEX system. And what this does is it allows NGOs, government bodies, et cetera, to actually give advice to farmers, because you now understand what’s going on on the ground, which is a critical driver for mitigation benefits, but also adaptation, as actually the best practices for planting, and what to plant, are changing over time with climate change.

So do find out more at the pavilion. But one thing I'd like to double-click on, as we say, is the innovation part of it. This digital public infrastructure is only helpful if it can really be used, and it's not just governments and NGOs; it's also startups. They're innovating and finding new ways of using this information. Companies like Carbon Farm, in France, are using this data. So is Varaha, a social entrepreneurship startup, and Wadhwani AI is another startup that we are supporting in terms of driving innovation in the agricultural space. So this is really all going to be accelerated through the use of AI, and we're very excited to contribute to that.

Thanks.

Uday Khemka

Wonderful. I'm going to just grab one of these. Thank you. So, hello. You can see that Google represents the convergence of the two themes we were talking about. And you have a wonderful website; at least, I've seen the materials about your sustainability strategy online. So if people want to know more, I'm sure they can go there.

Vrushali Gaud

Yeah, I'll make a plug. The website, sustainability.google, has all of our information, and the expo booth has all of our information. So thank you very, very much.

Uday Khemka

Now, we are inevitably vastly behind schedule, as we are with climate change. However, we're going to keep going with great focus, and we're going to turn to the energy and power sector. Now, this is a bit embarrassing, because we have to do a little bit of a switchover of people, and we don't have time to put up the new names here. So we're just going to announce them, and listen with great attention. We have two fantastic speakers from the energy sector, which, as you know, is one of, if not the most important sector. Do you want to come closer, Nalin? We can just be together here. Obviously, the decarbonization of the energy sector is absolutely critical; without that, nothing happens. So I'm going to hand over straight away, first to you, Nalin, to set the stage a little bit on what you're trying to do with Climate Collective and UNESA, and then Dan, more specifically, on what you're up to. So over to both of you, and obviously introduce yourselves; sorry I haven't done it for you.

Nalin Agarwal

No, thank you. I understand we are short on time, so I'll keep it very brief. I'm Nalin Agarwal, one of the founding partners of the Climate Collective. I think we'll have the slides up soon. Great. So today I'm just going to talk, very quickly, about a program we've been running for six years, where we've partnered with GRAIL to really drive decarbonization and grid modernization, starting with India but across the global south. If I can move on... who's operating the slides? I'll do it. Okay, just quick snapshots. We are an ESO, an enterprise support organization, the largest in the global south, with about 1,500 startups supported. Key partnerships: UNESA is a key one. I don't want to spend too much time here, but that's what we're going to spend some time on as well.

We do a lot of work in AI, in power but also beyond. Next week we're doing the Delhi Climate Innovation Week; in fact, Google is a sponsor and partner there, and of course GRAIL is as well. But happy to chat about this later. Here's what we're trying to do. A lot of the challenges on renewables are being solved, and they will be solved. One of the growing recognitions is that the grid is now a key bottleneck, and we need to really work on grid transformation. That includes both decarbonization and modernization, and that's what we're working towards. We work with utilities; there are about 22 of those that we've worked with so far.

We work on a problem-statement approach: get startups to apply, select startups, get them to create business cases and pilot plans, and eventually lead to pilots. So 22 utilities have participated, leading to about 20 pilots, a subset of which have become large deployments. It's a very unique program in the global south; actually, it's the only one. It has a high conversion ratio: about 30% of the pilots that have been proposed have come through. Key partners: I mean, 22 utilities, and all the people working in power sector reform, are part of this program. Again, I won't spend too much time, but a lot of this information is available online. All the startups that are vetted and ready to deploy are available for utilities to engage with.

We have a bunch of case studies as well, but the key point is this: we are now developing this, along with GRAIL, into a global AI-for-power innovation platform, which has three components. There's the open innovation program, ElectronVibe, at the top; there's the knowledge hub, which is basically a peer-sharing platform where we do convenings co-located at COPs, at climate weeks, et cetera; and then there's an online solution database of pre-vetted solutions. I'll stop there and hand it over to Dan.

Dan Travers

Thanks. Thanks, Nalin. I'm going to stand up too, because I like to stand up and talk. My name is Dan Travers; I'm from Open Climate Fix, a startup doing AI for the grid. I'm going to dive a little bit into the grid area, which has been talked about a bit. In order to get to net zero, we need to green the grid and we need to electrify everything. The grid of the past had, in each country, usually tens of generators, and the grid operator would know those people on a first-name basis; they would ring them up and tell them when to turn up and down.

We've now got millions of generators, with solar panels and wind turbines everywhere. The grid of the past had variability from demand alone; we've now got variability from demand, and the wind speed, and the clouds: three sources of variability. The grid of the past had a normal demand that we understood well; we've now got data centres, EVs, batteries, AC, so the demand is changing shape incredibly. How are you possibly going to address the balancing of this grid with a bunch of people in a room? You need AI solutions. You need a highly digital grid. You need something that can schedule and marshal all of these assets digitally, at AI speed.

So that's really important. And why is it important? Because if we don't do it, we'll have blackouts, and if we don't do it, costs will increase, because the way grid operators are currently dealing with this challenge is by scheduling a lot of backup generation. It's usually gas-fired generation, and it's very expensive, so bills are going up. And if you look around at what's happening now, there's a pushback against the green revolution. If we don't address these problems, we're going to have a democratic pushback, and we will have a reversal. So AI solutions can really help us in fighting the battle for hearts and minds as well as the actual physical battle.

As for myself, I came from the banking tech space. Jack, my co-founder, came from Google DeepMind, whose name keeps coming up. We both saw there was a big gap between the amazing tech available in some of these industries and grid operators and the electricity industry, which is by nature very risk-averse; it has to worry about things failing all the time. So we saw the gap between those two, and we formed Open Climate Fix to try and bridge that gap: to take sort of moonshot ideas and actually build a rocket ship that is going to fly to the moon, actually implement something, and give data to researchers.

The company is non-profit and we're open source, and that's about the scaling, which I think is a key part of the title of this talk. We've built the best solar forecast in the UK, we think, by about 20% or 30%, quite a long way. We now want to take that, and are starting to take that, to India. We're working with Adani, and we're working with the Rajasthan grid operator. With a combination of open source plus commercial expansion, we see the AI tools as super transferable across grids. So I'm really excited that we can take tools from one grid, apply them to all the grids in the world, and use AI to solve climate change.

Thank you.

Uday Khemka

Thank you so much. You can imagine, if we had more time, we would have had a panel on the built environment, a panel on industrial decarbonization, a panel on transportation. We don't have the time, but thank you for that fantastic presentation. A couple of interventions now as we turn to the last segment: we have three very distinguished institutions with us, all involved at the strategic level with GRAIL and with this process. And I'll start, Ankur, with you at McKinsey, who have been close partners.

Ankur Puri

Thanks a lot. Another race against time. I should say that Sean went out of the room and negotiated five more minutes for us. Okay. So, while the slides come up: firstly, it's a privilege to be here, and thank you for the opportunity for McKinsey to be part of the journey that you are leading, Uday. And thank you all for being here and shaping this in your own special way, at your own scale. I'm Ankur, a partner based out of our India office. I lead QuantumBlack in India, which is our AI team, and I work across sectors, because that's really a lot of fun.

Part of my work has been in energy; part of my work has been in the built environment. But I'm representing a team, which is quite global, that has had the privilege to work with the GRAIL effort. So I'd like to just talk about how this little effort of ours is shaping the larger movement that GRAIL represents. Everybody's talking about the impact of AI, so I'm not going to talk more about that; the promise of AI, let's just be clear, is there. I think the way large global efforts have found shape is to focus around a few challenges, and one of the big pieces of the GRAIL work has been shaping these four challenges and articulating them.

They're about operational improvement in our current way of working; strategic intelligence and foresight (big consulting words), which is basically better planning, building things better; transformation and innovation, that is, can we do new things that don't exist today and will help the future; and the last one is autonomous operations, which is essentially doing current operations in a very different way: using drones instead of people to go see how the wiring is in a large electric plant, and creating more impact. Several of the examples you heard about will fit into this across energy, the built environment, and materials, and this can keep expanding, perhaps to food systems. Now, there's a huge amount of work going on in just collecting the knowledge on each of those challenges.

Then you think about those fields of play, like energy and the built environment. Within each, there are stakeholders, and for each stakeholder, what's relevant? If you take system operators as an example, network planning is a domain to think about; asset management is a domain to think about; delivery is a domain to think about; field force execution as well. Think of this as bringing the language of the industry into this knowledge base, so that someone who manages a power plant can ask: what's my library of things I need to look at? Tomorrow, that can then connect them to the people who are innovating or providing these solutions.

One important gap in the middle is: how valuable is each of these ideas, when it comes to cost, and when it comes to emissions? The work is not yet ready to be unveiled, but we are quite privileged to work with the GRAIL team and, of course, global experts to start to quantify, both in terms of economic impact and in terms of direct emissions impact, what each of these applications could be worth. Because then our scarce resources and limited time can be focused on the most important problems. I think that's what's coming up ahead, and I look forward to all of you pushing the boundary further. It's a privilege to be part of this.

Thank you.

Speaker 1

Okay, I will kick off. As a metaphor for the climate, we're drastically running out of time, and I can see a clock ticking down in front of me. So, I'm Rob, from University College London. 200 years ago, University College London was founded with a purpose: to drive change, to be impactful, and to create useful knowledge. That's really important for the climate, because we no longer have the ability to let knowledge sit on the shelf when it comes to climate. In 2026 at UCL, the way we bring our community together is through what we call the Grand Challenges. These are a self-funded, cross-university way of tackling problems that are too complex for any one discipline.

The climate crisis at UCL sits alongside challenges like mental health and well-being and data-empowered societies, and they're found in all 11 of UCL's faculties, from engineering to health and the arts and humanities. So where does AI come into this? AI at UCL is not seen as a single discipline but as an enabling layer embedded across the entire institution. It builds on our heritage in AI: we've got Nobel Prizes, we're the birthplace of Google DeepMind, and we have spin-out companies at unicorn valuations. Four quick examples of what's happening at UCL at the moment. Starting at home, we use our own campus as a living lab: we've got sensor data from across our estate that forecasts energy demand and detects unusual patterns across UCL's buildings, and we turn that into insights for practical intervention.

Second example: our spin-out Carbon Re uses deep reinforcement learning and digital twin optimization to cut fuel use and emissions in energy-intensive processes like cement production. Third example: a partnership. UCL's Centre for Sustainability and Real Tech Innovation was created in partnership with PGM real estate; it links computer science to the built environment and accelerates AI-enabled sustainability in real estate, driving impact for the environment but also value for real estate investors.

And fourth, UCL Grand Challenges has supported an inclusive AI tool that transforms satellite and drone imagery into accessible, web-based sea ice classification, which is being used to support safer travel for Inuit communities. Aviation is another frontier for us; it's a grand challenge in its own right. There, we are looking at short-term and long-term interventions: AI is used to create short-term interventions that drive down aviation's impact on the climate, while engineering undertakes long-term technology transformation in electrification and hydrogen propulsion. And finally, for UCL, convening really matters. In April 2025, as Uday mentioned, UCL, along with GRAIL, hosted our International Summit on AI Solutions for Climate Change, exploring sectors like energy and the built environment, and moving from discussions and pilots to deployment and impact.

I'll finish with a quick call to action around the Grand Challenges.

Adam Sobey

Cheers, and we're properly into Alex Ferguson overtime now; hopefully not with climate change, so I'll try and leave a little bit of time for Uday. I'm Adam Sobey, from the Alan Turing Institute, the UK's national AI institute. We focus on five missions: environment, which is focused on environmental forecasting and climate change; sustainability; defence and security; health; and foundational research. As the Director for Sustainability, obviously I think that's the most important mission, and that's why I'm here. We believe that the time for action is now; the world is literally on fire. We saw fires in the US which have been linked heavily to climate change.

We are seeing droughts in India which are affecting food and people's lives. We're seeing pollution in Southeast Asia which is affecting health. We cannot wait for new fuels or for the energy transition to occur; we need to do something immediately, starting today, and we believe that AI can play that role. We know this because, as part of our institute, we have applied AI and data science to shipping and reduced emissions by 18%. We have done this in buildings, where we've improved HVAC optimisation to reduce emissions by 42%. And we've created an underground urban farm in the UK that works entirely off renewable energy, allowing us to grow crops without any CO2 emissions.

We've done some really impressive things for a relatively small institute, but we can't do this alone. We realised that this is a global problem, and the Sustainability Mission's chief funder is Lloyd's Register Foundation, a global charity heavily focused on the global south. So we think it's really important that we work together, both within the UK and outside of the UK, to solve these problems, and that's why we're really pleased to be part of GRAIL, to look for global solutions to global problems. So thank you very much.

Uday Khemka

It's a tribute to all our speakers that they managed to put extraordinary quality into this ridiculously short time frame. I'll just end on three points. The first is that, through our work together, we have come to find hundreds of examples of opportunities where businesses, for example, can save money or increase revenues, improving their economic value while at the same time massively improving their emissions profiles on the mitigation side. On the adaptation side, to your points, there are many examples, from Google to all of your institutions, where these technologies are already being deployed to save lives at big scale. And the last point I'd make, apart from asking you to thank our speakers with a big round of applause, is that again and again you've heard one theme coming out of this group, which is radical collaboration.

Work with us to make the difference that we all believe and know can be made through the application of AI solutions to climate change. So maybe we could give our speakers a round of applause. Thank you very, very much.

Related Resources: Knowledge base sources related to the discussion topics (16)
Factual Notes: Claims verified against the Diplo knowledge base (3)
Confirmed (high)

“Uday Khemka framed the discussion as an urgent, radical, action‑oriented partnership that brings together development, a sustainable planet and AI.”

The knowledge base records that Uday urges an urgent, radical, action-oriented partnership linking development and climate agendas through AI [S1].

Confirmed (medium)

“Khemka warned that the panel’s brief time mirrored the narrow window for climate action.”

The discussion was described as urgent with speakers emphasizing the limited time available to address climate change, confirming the narrow-window analogy [S2].

Additional Context (low)

“The session maintained high energy and optimism about AI’s potential while acknowledging the gravity of the climate crisis.”

S2 adds that the tone was high-energy and optimistic about AI’s potential, providing additional nuance to the report’s description of the panel’s atmosphere [S2].

External Sources (91)
S1
Building Climate-Resilient Systems with AI — – Nalin Agarwal- Ankur Puri
S2
Building Climate-Resilient Systems with AI — Speakers:Nalin Agarwal, Ankur Puri Speakers:Uday Khemka, David Sandalow, Nalin Agarwal, Ankur Puri, Speaker 1
S3
https://app.faicon.ai/ai-impact-summit-2026/building-climate-resilient-systems-with-ai — And part of my work has been in energy. Part of my work has been in the built environment. Thank you. but I’m representi…
S4
Building Climate-Resilient Systems with AI — Cheers, and we’re properly into Alex Ferguson overtime now. So hopefully not with the climate change, so I’ll try and le…
S5
https://dig.watch/event/india-ai-impact-summit-2026/building-climate-resilient-systems-with-ai — Cheers, and we’re properly into Alex Ferguson overtime now. So hopefully not with the climate change, so I’ll try and le…
S6
Building Climate-Resilient Systems with AI — Cheers, and we’re properly into Alex Ferguson overtime now. So hopefully not with the climate change, so I’ll try and le…
S7
Building Climate-Resilient Systems with AI — – David Sandalow- Dan Travers- Nalin Agarwal – David Sandalow- Spencer Low – Uday Khemka- David Sandalow- Adam Sobey …
S8
Building Climate-Resilient Systems with AI — How are you possibly going to address this balancing of this grid with a bunch of people in a room, right? You need AI s…
S9
AI for Good Impact Awards — – **Dan Travers** – Representative from Open Climate Fix
S10
https://app.faicon.ai/ai-impact-summit-2026/building-climate-resilient-systems-with-ai — So myself, I came from sort of banking tech space. Jack, my co -founder, came from Google DeepMind, who the name keeps c…
S11
Keynote-Martin Schroeter — -Speaker 1: Role/Title: Not specified, Area of expertise: Not specified (appears to be an event moderator or host introd…
S12
Responsible AI for Children Safe Playful and Empowering Learning — -Speaker 1: Role/title not specified – appears to be a student or child participant in educational videos/demonstrations…
S13
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Vijay Shekar Sharma Paytm — -Speaker 1: Role/Title: Not mentioned, Area of expertise: Not mentioned (appears to be an event host or moderator introd…
S14
Building Climate-Resilient Systems with AI — 1354 words | 203 words per minute | Duration: 398 secondss Thank you very much for hosting this. I don’t know if that’s…
S15
Building Climate-Resilient Systems with AI — -Vrushali Gaud- Global Director of Climate Operations at Google, leads Google’s decarbonization, water and circularity s…
S16
The Innovation Beneath AI: The US-India Partnership powering the AI Era — -Vrushali Gaud- Global Director of Climate Operations at Google
S17
Building Climate-Resilient Systems with AI — – David Sandalow- Spencer Low – Uday Khemka- David Sandalow- Vrushali Gaud- Spencer Low- Adam Sobey- Dan Travers
S18
Building Climate-Resilient Systems with AI — no, thank you I understand we are short on time so I’ll keep very brief I’m Nalin Agarwal, one of the founding partners …
S19
Building Climate-Resilient Systems with AI — -Nalin Agarwal- Founding partner of Climate Collective, works with UNESA (utilities association), focuses on enterprise …
S20
https://dig.watch/event/india-ai-impact-summit-2026/building-climate-resilient-systems-with-ai — I don’t want to spend too much time here, but that’s what we’re going to spend some time on as well. We do a lot of work…
S21
Building Climate-Resilient Systems with AI — -Uday Khemka- Moderator/Host, involved with the Green Artificial Intelligence Learning Network (GRAIL) organization
S22
Building Climate-Resilient Systems with AI — Speakers:Uday Khemka, David Sandalow Speakers:Uday Khemka, David Sandalow, Adam Sobey Speakers:Uday Khemka, David Sand…
S23
Planetary Limits of AI: Governance for Just Digitalisation? | IGF 2023 Open Forum #37 — Atsuko Okuda:Asko. Thank you very much for giving… Thank you. First of all, I would like to thank the organizer to inv…
S24
Green AI and the battle between progress and sustainability — AI is increasingly recognised for its transformative potential and growing environmental footprint across industries. Th…
S25
UN warns AI poses risks without proper climate oversight — AI can help tackle the climate crisis, butgovernments must regulate itto ensure positive outcomes, says UN climate chief…
S26
Panel Discussion AI and the Creative Economy — The speaker emphasizes the need to efficiently use the limited 30-minute timeframe by jumping directly into substantive …
S27
Panel Discussion Summary: AI Governance Implementation and Capacity Building in Government — – Albina Ovcearenco- Fadila Leturcq Evidence building phase needed across sectors and regions to move beyond principle-…
S28
Sandboxes for Data Governance: Global Responsible Innovation | IGF 2023 WS #279 — Climate change requires collaborative efforts through a shared space for potential solutions.
S29
High-Level sessions: Setting the Scene – Global Supply Chain Challenges and Solutions — The convocation is envisioned as a multidisciplinary forum, gathering voices from sectors ranging from insurance to fina…
S30
Networking Session #50 AI and Environment: Sustainable Development | IGF 2023 — Artificial Intelligence (AI) technologies have the potential to significantly contribute to creating greener cities and …
S31
Safe and Responsible AI at Scale Practical Pathways — Guardrails, Human‑in‑the‑Loop, and Risk‑Assessment Mechanisms Are Essential for Reliable Deployment
S32
Generative AI presents the biggest data-risk challenge in history — Cybersecurity specialistswarnthat generative AI systems, such as large language models, are creating a data risk frontie…
S33
Main Session on Artificial Intelligence | IGF 2023 — Seth Center:IAEA is an imperfect analogy for the current technology and the situation we faced for multiple reasons. One…
S34
Open Forum #27 Make Your AI Greener a Workshop on Sustainable AI Solutions — Sustainable development | Infrastructure | Development The moderator emphasized the paradoxical nature of AI technology…
S35
Using AI to tackle our planet’s most urgent problems — Development | Infrastructure | Legal and regulatory Examples of systems processing vast amounts of data in real-time, f…
S36
Open Forum #53 AI for Sustainable Development Country Insights and Strategies — Anshul argues that AI can be a potential big equalizer, like electricity, that can change everything when properly imple…
S37
(Re)-Building Trust Online: A Call to Action | IGF 2023 Launch / Award Event #144 — The launch of the task force and its principles were seen as an opportunity to pave a strategic path forward and to coor…
S38
Climate change and Technology implementation | IGF 2023 WS #570 — Speaker:Thank you, Millennium. I’m Sakura Takahashi from Japan. I’m speaking here today on behalf of Climate Youth Japan…
S39
AI’s impact on environment — The rapid rise of AIhas raisedconcerns about its environmental impact, particularly in data centres. It is projected tha…
S40
Global AI Governance: Reimagining IGF’s Role & Impact — Paloma Lara-Castro: Thank you, Liz. Hi, everyone. Thank you for the space. I’m representing Derechos Digitales. We are a…
S41
Exploring the Intersections of Grassroots Movements — Additionally, the discussions shed light on the use of technology as a tool to combat institutional and environmental ra…
S42
The future of Digital Public Infrastructure for environmental sustainability — Yolanda Martinez:Yes, definitely. First of all, congratulations. I thoroughly agree that it’s not easy to put together t…
S43
Cooperation for a Green Digital Future | IGF 2023 — In the analysis, several key points are highlighted by different speakers. Firstly, it is underscored that a significant…
S44
Empowering the Ethical Supply Chain: steps to responsible sourcing and circular economy (Lenovo) — Collaboration, education, and awareness are identified as crucial factors in driving sustainability efforts. However, ch…
S45
Open Forum: Liberating Science — While climate advocacy is seen as necessary, it is also an exhausting undertaking that requires dedicated effort and per…
S46
AI and Data Driving India’s Energy Transformation for Climate Solutions — Creating sustainable funding and governance models for long-term maintenance of data infrastructure Establishing clear …
S47
AI and Data Driving India’s Energy Transformation for Climate Solutions — “So I’m hearing sort of ensuring coordination between departments, ensuring thinking about the data strategy.”[58]. “But…
S48
Climate change and Technology implementation | IGF 2023 WS #570 — Artificial intelligence and improved sensors can provide real-time environmental data, shaping climate research and poli…
S49
Open Forum #70 the Future of DPI Unpacking the Open Source AI Model — Major barriers include skills gaps, capacity constraints, infrastructure limitations, and need for localized datasets
S50
Building Climate-Resilient Systems with AI — “The main barriers to AI’s impact in reducing greenhouse gas emissions are a lack of data and a lack of trained personne…
S51
Building Climate-Resilient Systems with AI — Sandalow breaks down AI capabilities into four fundamental categories that directly apply to climate challenges. Each ca…
S52
Survival Tech Harnessing AI to Manage Global Climate Extremes — It is not possible to map each and every, the hill in the vulnerable areas. So this is where the complications arise. An…
S53
High-Level Session 3: Exploring Transparency and Explainability in AI: An Ethical Imperative — Li discusses the potential of AI-driven models in climate prediction and resource mobilization. He highlights the import…
S54
Networking Session #50 AI and Environment: Sustainable Development | IGF 2023 — Artificial Intelligence (AI) technologies have the potential to significantly contribute to creating greener cities and …
S55
AI for agriculture Scaling Intelegence for food and climate resiliance — This discussion focused on the integration of artificial intelligence in agriculture to enhance food security and climat…
S56
AI for agriculture Scaling Intelegence for food and climate resiliance — A very good morning to all of you. Shri Devesh Chaturvedi ji, Rajesh Agarwal ji, Vikas Rastogi ji. Mr. Jonas Jett, Srima…
S57
Transforming Agriculture_ AI for Resilient and Inclusive Food Systems — Discussion point:Food security and supply chain resilience Discussion point:Policy and governance frameworks
S58
Green AI and the battle between progress and sustainability — AI is increasingly recognised for its transformative potential and growing environmental footprint across industries. Th…
S59
AI climate benefits overstated says new civil society report — Environmental groups, including Beyond Fossil Fuels and Stand.earth,have publisheda report challenging claims that AI wi…
S60
AI’s growing role in environmental sustainability — AIis expandingrapidly, driving rising electricity and water consumption, which has fuelled concerns about environmental …
S61
Navigating the Double-Edged Sword: ICT’s and AI’s Impact on Energy Consumption, GHG Emissions, and Environmental Sustainability — Antonia Gawel:Well it’s great to be here and to see everybody in this discussion which is indeed an important one and I …
S62
AI Meets Agriculture Building Food Security and Climate Resilien — Chief Minister Devendra Fadnavis presented Maharashtra’s Maha Agri AI Policy 2025-2029, emphasizing the shift from demon…
S63
Open Forum #27 Make Your AI Greener a Workshop on Sustainable AI Solutions — Development | Human rights | Sustainable development Funding and Policy Mechanisms Mark Gachara emphasized that climat…
S64
Scaling Trusted AI_ How France and India Are Building Industrial & Innovation Bridges — Evidence: Focus on climate resilience, agriculture, and energy as priority sectors; consideration of countries with limit…
S65
Open Forum #27 Make Your AI Greener a Workshop on Sustainable AI Solutions — Sustainable development | Infrastructure | Development The moderator emphasized the paradoxical nature of AI technology…
S66
Networking Session #50 AI and Environment: Sustainable Development | IGF 2023 — Patrick:Thank you. I did put you a little bit on the spot there, but I will alleviate a little bit the burden of you and…
S67
Using AI to tackle our planet’s most urgent problems — Development | Infrastructure | Legal and regulatory Examples of systems processing vast amounts of data in real-time, f…
S68
Building Climate-Resilient Systems with AI — “Grail is an attempt to create… …to create a collaborative network.”[16]. “Going back into Grail, bottom left, the f…
S69
WS #283 AI Agents: Ensuring Responsible Deployment — The discussion maintained a balanced, thoughtful tone throughout, combining cautious optimism with realistic concern. Pa…
S70
Scaling AI Beyond Pilots: A World Economic Forum Panel Discussion — All three industry leaders emphasized the need for collaborative, ecosystem-wide approaches rather than proprietary solu…
S71
Scaling Trusted AI_ How France and India Are Building Industrial & Innovation Bridges — Unexpected consensus across telecom, research, and governance sectors on the need for collaborative ecosystem approaches…
S72
Building Climate-Resilient Systems with AI — Evidence: At COP26 in Glasgow in 2021, they concluded the likelihood of achieving 43% decarbonization from 2019 to 2030 l…
S73
Climate change and Technology implementation | IGF 2023 WS #570 — Speaker:Thank you, Millennium. I’m Sakura Takahashi from Japan. I’m speaking here today on behalf of Climate Youth Japan…
S74
The future of Digital Public Infrastructure for environmental sustainability — Yolanda Martinez:Yes, definitely. First of all, congratulations. I thoroughly agree that it’s not easy to put together t…
S75
HIGH LEVEL LEADERS SESSION IV — Artificial Intelligence is used in many fields
S76
Global AI Governance: Reimagining IGF’s Role & Impact — Paloma Lara-Castro: Thank you, Liz. Hi, everyone. Thank you for the space. I’m representing Derechos Digitales. We are a…
S77
WS #145 Revitalizing Trust: Harnessing AI for Responsible Governance — Pellerin Matis: I think government can really learn from the private sector because there is lots of technologies and …
S78
What policy levers can bridge the AI divide? — ## Sector-Specific Applications
S79
(Interactive Dialogue 4) Summit of the Future – General Assembly, 79th session — The dialogue underscored the importance of intergenerational solidarity and the need to consider the long-term impacts o…
S80
Open Forum: Liberating Science — While climate advocacy is seen as necessary, it is also an exhausting undertaking that requires dedicated effort and per…
S81
Opening of the session — Singapore: Thank you Mr. Chair on behalf of my delegation I’d like to express our thanks to you and your team for the p…
S82
Report on the Transforming Education Summit — 2022) and ‘ Futures of Education briefing notes ‘ were all prepared to support consultations. In the majority of cases …
S83
High-Level session: Building and Financing Resilient and Sustainable Global Supply chains and the Role of the Private Sector — Gender diversity is integrated into their approach beyond tokenism The manifesto promotes creating efficient and resili…
S84
Kingdom of Cambodia — CTX-2022 was an exceptional event that brought together an impressive lineup of participants from various sectors, as…
S85
High-Level Session 4: From Summit of the Future to WSIS+ 20 — Junhua Li: Thank you. Certainly, UN believes digital transformation is one of the strategic vehicles for almost all…
S86
Laying the foundations for AI governance — ### Persistent Disagreements Key tensions remained around:
S87
(Day 1) General Debate – General Assembly, 79th session: morning session — This transcript covers speeches from world leaders at the 79th United Nations General Assembly, focusing on global chall…
S88
Advancing Scientific AI with Safety Ethics and Responsibility — Thank you, Shyam. I think this is a very important question. And it’s also a topic that I’m really passionate about as w…
S89
Is AI the key to nuclear renaissance? — There is a direct correlation between the exponential increase in model parameters and the increase in the computational…
S90
Greener economies through digitalisation — Furthermore, greater stakeholder participation, particularly of Micro, Small, and Medium Enterprises (MSMEs), should be …
S91
Digital solutions for sustainability: ICT’s role in GHG reduction and biodiversity protection — Laura Cyron: It’s a small question with minor, well, very sub points. OK, so thank you very much, first of all, for havi…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Uday Khemka
5 arguments · 167 words per minute · 2437 words · 871 seconds
Argument 1
The net climate benefit of AI far exceeds the emissions from data‑center operations, showing a positive overall balance
EXPLANATION
Uday argues that although AI technologies consume energy, the overall greenhouse‑gas reductions they enable outweigh the emissions from data‑center operations. He cites a study that quantifies both the emissions from data centres and the potential emissions avoided through AI applications.
EVIDENCE
Uday references the Grantham Institute’s quantification, which estimates that data centres add between 0.5 and 1.4 gigatons of CO₂ annually, while AI-driven solutions could remove between 3.5 and 5.4 gigatons, indicating a net positive climate impact [62-63].
MAJOR DISCUSSION POINT
AI net climate benefit
AGREED WITH
David Sandalow
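The Grantham Institute figures cited above imply a sizeable net range. A minimal sketch of the arithmetic follows; pairing the worst emissions case with the weakest avoidance case (and vice versa) to bound the net impact is an assumption of this illustration, not something stated on the panel:

```python
# Ranges as cited in the session (gigatons of CO2 per year).
emitted = (0.5, 1.4)   # added annually by data centres
avoided = (3.5, 5.4)   # potentially removed via AI applications

# Pessimistic bound: least avoidance, most emissions.
# Optimistic bound: most avoidance, least emissions. (Pairing is an assumption.)
net_low = avoided[0] - emitted[1]
net_high = avoided[1] - emitted[0]

print(f"Implied net benefit: {net_low:.1f} to {net_high:.1f} Gt CO2/year")
```

Even under the pessimistic pairing the implied balance stays positive, which is the core of the "net positive climate impact" claim attributed to Khemka and Sandalow.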
Argument 2
GRAIL creates a global, cross‑sector partnership platform that links academia, industry, governments and funders to accelerate AI‑driven climate action
EXPLANATION
Uday describes GRAIL as a not‑for‑profit network that brings together universities, companies, NGOs, and governments to co‑create and scale AI solutions for climate mitigation and adaptation. The platform coordinates deal flow, funding, and collaborative projects to move ideas to implementation quickly.
EVIDENCE
He outlines GRAIL’s structure, noting its collaborative community of academic, commercial, philanthropic, and governmental institutions, the flow of ideas and deals, and its role in funding and scaling solutions through grants, venture capital, and government programs [54-68].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
External descriptions of GRAIL as a collaborative network that brings together academia, industry, philanthropic and governmental institutions are provided in [S2], which aligns with Khemka’s characterization.
MAJOR DISCUSSION POINT
GRAIL collaborative network
AGREED WITH
Ankur Puri, Nalin Agarwal, Vrushali Gaud, Spencer Low
Argument 3
The climate‑development‑AI triple challenge demands immediate, large‑scale collaboration; the panel serves as an invitation to radical, action‑oriented partnership
EXPLANATION
Uday frames the intersection of development, climate, and AI as the most pressing challenge and calls the session an invitation for radical, collaborative action. He emphasizes that only coordinated, large‑scale efforts can meet the urgency of the problem.
EVIDENCE
He calls the session “an invitation for radical action-oriented collaboration” and stresses the importance of the triple challenge of development, climate, and AI as the most important challenge we face [27-29].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The panel’s framing of the climate-development-AI triple challenge as the most urgent problem is highlighted in the discussion summary that notes the urgency and multidisciplinary nature of the challenge [S1].
MAJOR DISCUSSION POINT
Urgent triple‑challenge collaboration
AGREED WITH
Ankur Puri, Nalin Agarwal, Vrushali Gaud, Spencer Low
Argument 4
Time is limited; the panel stresses the need to move from discussion to deployment of AI solutions across sectors
EXPLANATION
Uday highlights the scarcity of time both for the panel and for climate action, urging participants to shift from talking to implementing AI‑driven solutions. He uses the metaphor of a short panel to illustrate the urgency.
EVIDENCE
He notes that the panel has “very little time” and likens it to the limited time we have to act on climate change, calling for “action mode” and a move away from mere discussion [17-19].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The limited time and action-oriented tone of the session are emphasized in the panel summary that describes the discussion as urgent and focused on moving quickly to implementation [S1] and in the guidance to prioritize substantive questions over background material [S26].
MAJOR DISCUSSION POINT
Limited time for action
AGREED WITH
Ankur Puri, Nalin Agarwal, Vrushali Gaud, Spencer Low
Argument 5
Participants are urged to join the collaborative platforms, co‑create solutions and scale them quickly to meet climate targets
EXPLANATION
Uday invites attendees to engage with the online collaborative platform created after the summit, to work with governments, and to contribute to taxonomies that drive massive AI‑enabled climate action. He stresses rapid scaling of solutions.
EVIDENCE
He mentions three post-summit actions: an online collaborative platform for co-creation, engagement with governments, and development of sector-specific taxonomies to drive large-scale AI climate solutions [73-78].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Post-summit actions, including an online collaborative platform for co-creation, government engagement, and sector-specific taxonomies, are outlined as part of GRAIL’s rollout in [S2].
MAJOR DISCUSSION POINT
Call to join collaborative platforms
AGREED WITH
David Sandalow, Dan Travers, Nalin Agarwal
David Sandalow
4 arguments · 189 words per minute · 2021 words · 639 seconds
Argument 1
AI can deliver both incremental efficiency gains and transformational breakthroughs that significantly cut greenhouse‑gas emissions
EXPLANATION
David asserts that AI offers both modest efficiency improvements and major technological breakthroughs that can substantially reduce emissions. He categorises AI impacts into incremental and transformational gains across sectors.
EVIDENCE
He states that AI has “significant potential to contribute to reductions in greenhouse gas emissions” and distinguishes between incremental gains such as efficiency improvements and transformational gains like new materials and technologies [146-154].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Sandalow’s categorisation of AI impacts into incremental improvements and transformational breakthroughs is documented in the panel notes that describe these two categories of climate impact [S2].
MAJOR DISCUSSION POINT
AI potential for emission cuts
AGREED WITH
Dan Travers, Nalin Agarwal
Argument 2
AI’s core capabilities—detecting patterns, predicting outcomes, optimizing processes, and simulating scenarios—are directly applicable to climate solutions
EXPLANATION
David outlines four fundamental AI functions—pattern detection, prediction, optimization, and simulation—and explains how each can be leveraged to address climate challenges, from methane detection to battery chemistry modeling.
EVIDENCE
He describes AI’s abilities to detect patterns (e.g., methane emissions from satellite data) [170-173], predict weather for solar and wind farms [184], optimize power flows [185-186], and simulate battery chemistry [186-188] as core tools for climate mitigation and adaptation [170-188].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The four fundamental AI capabilities and their climate applications (pattern detection, prediction, optimisation, simulation) are detailed in the discussion summary [S2].
MAJOR DISCUSSION POINT
Core AI capabilities for climate
AGREED WITH
Dan Travers, Spencer Low, Vrushali Gaud
Argument 3
Major obstacles include insufficient high‑quality data, a shortage of trained AI‑climate specialists, and a lack of trust in algorithmic outputs
EXPLANATION
David identifies three key barriers to AI’s climate impact: limited access to high‑quality data, a deficit of personnel skilled at both AI and climate science, and low trust in AI outputs among potential users.
EVIDENCE
He notes that “the main barriers to AI’s impact … are a lack of data and a lack of trained personnel” and that “trust is essential” for organizations to adopt AI solutions [158-162].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Barriers such as lack of standardised data, limited skilled personnel, and trust issues are highlighted in the evidence-building and scaling challenges noted in the panel summary [S1].
MAJOR DISCUSSION POINT
Barriers to AI deployment
AGREED WITH
Vrushali Gaud, Spencer Low
Argument 4
Standardised data sets, skilled personnel and careful governance of generative‑AI risks are required for safe, large‑scale deployment
EXPLANATION
David stresses the need for standardized, high‑quality datasets, a trained workforce, and robust governance—especially around generative AI—to ensure safe and effective scaling of AI climate tools.
EVIDENCE
He emphasizes the need for “standardised data” and “trained personnel” and warns that “real-time AI operations can cause security and safety risks,” highlighting the importance of governing generative AI use [201-208].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Guidance on safe AI deployment, including standardised data, trained workforce, and governance of generative-AI risks, is provided in the responsible-AI frameworks and sandbox discussions [S31], [S32], and [S28].
MAJOR DISCUSSION POINT
Enablers and risk governance
Adam Sobey
1 argument · 162 words per minute · 346 words · 127 seconds
Argument 1
Real‑world AI projects at the Alan Turing Institute have already reduced emissions in shipping, building HVAC, and urban farming
EXPLANATION
Adam reports concrete emissions‑reduction outcomes from AI applications developed at the Alan Turing Institute, demonstrating the technology’s immediate impact across transport, buildings, and food production.
EVIDENCE
He cites a shipping AI project that cut emissions by 18% [527], an HVAC optimisation effort that reduced building emissions by 42% [528], and an underground urban farm powered entirely by renewables that eliminates CO₂ emissions from food production [529-530].
MAJOR DISCUSSION POINT
AI‑driven emissions reductions
Ankur Puri
1 argument · 168 words per minute · 618 words · 219 seconds
Argument 1
McKinsey is quantifying the economic and emissions impact of AI use‑cases to prioritize high‑value interventions
EXPLANATION
Ankur explains that McKinsey is working with the GRAIL effort to assess both the cost‑benefit and emissions impact of AI applications, enabling resources to be focused on the most valuable climate solutions.
EVIDENCE
He states that McKinsey is “quantifying … both in terms of economic impact, but also in terms of direct emissions impact, what each of these applications could be worth” to guide scarce resources toward the most important problems [464-467].
MAJOR DISCUSSION POINT
Quantifying AI impact
Nalin Agarwal
2 arguments · 162 words per minute · 496 words · 182 seconds
Argument 1
The Climate Collective’s AI‑for‑power program connects startups with utilities across the Global South, delivering pilots and building an open‑innovation platform
EXPLANATION
Nalin outlines the Climate Collective’s initiative that matches AI‑focused startups with utility partners, runs pilots, and creates an open‑innovation platform to accelerate grid decarbonisation in emerging economies.
EVIDENCE
He describes a six-year program supporting 1,500 startups, partnering with 22 utilities, generating about 20 pilots (30% conversion), and building an AI-for-power platform comprising an open-innovation program, knowledge hub, and solution database [364-390].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Nalin Agarwal’s role as founding partner of the Climate Collective and the focus on AI-for-power collaborations are mentioned in the panel participant list and overview [S1].
MAJOR DISCUSSION POINT
AI‑for‑power open‑innovation
AGREED WITH
Uday Khemka, Ankur Puri, Vrushali Gaud, Spencer Low
Argument 2
Participants are urged to join the collaborative platforms, co‑create solutions and scale them quickly to meet climate targets
EXPLANATION
Nalin echoes the call for rapid co‑creation and scaling of AI climate solutions, urging stakeholders to engage with the Climate Collective’s platforms and partnerships to accelerate impact.
EVIDENCE
He references the same post-summit actions (online collaborative platform, government engagement, and sector taxonomies) that were highlighted earlier, emphasizing the need for swift collective action [73-78].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Post-summit actions, including an online collaborative platform for co-creation, government engagement, and sector-specific taxonomies, are outlined as part of GRAIL’s rollout in [S2].
MAJOR DISCUSSION POINT
Call to join collaborative platforms
AGREED WITH
David Sandalow, Dan Travers
Vrushali Gaud
3 arguments · 203 words per minute · 1354 words · 398 seconds
Argument 1
Google’s Climate Tech Center in India and its open‑source data initiatives (Earth AI, Flood Hub) aim to democratize climate data and build green‑skill capacity
EXPLANATION
Vrushali explains that Google is establishing a Climate Tech Center in India to foster actionable climate research, while open‑source projects like Earth AI and Flood Hub provide free satellite and weather data to support innovation and skill‑building.
EVIDENCE
She notes the launch of a Google Center of Climate Tech in partnership with the Indian government’s Principal Scientific Advisory, the Earth AI dataset of satellite and weather information, and the Flood Hub platform that supplies flood-risk maps for communities and businesses [298-304].
MAJOR DISCUSSION POINT
Open‑source climate data and skills
Argument 2
Google applies AI to improve data‑center energy use, water‑leak detection, and to provide flood‑risk mapping for communities
EXPLANATION
Vrushali outlines how AI is used internally at Google to reduce emissions from its infrastructure and externally to support community resilience through flood‑risk information.
EVIDENCE
She cites AI-driven optimisation of data-center electricity consumption, detection of water-tap leakages, and the Flood Hub platform that supplies flood-risk maps to insurers, real-estate firms, and other stakeholders [284-289][290-295].
MAJOR DISCUSSION POINT
Operational AI for climate impact
AGREED WITH
David Sandalow, Dan Travers, Spencer Low
Argument 3
Building “green skills” and embedding climate‑first thinking across all domains are essential for lasting impact
EXPLANATION
Vrushali stresses that beyond technology, developing green competencies and a climate‑first mindset throughout organisations and societies is crucial to achieve sustainable outcomes.
EVIDENCE
She mentions the goal of the Climate Tech Center to “encourage academic research that is actionable” and to develop green skills, especially in tier-two Indian cities, as a lever across sectors [298-305].
MAJOR DISCUSSION POINT
Green skills development
Spencer Low
2 arguments · 163 words per minute · 638 words · 234 seconds
Argument 1
AI models that delineate smallholder farm boundaries and identify crops enable precision agriculture and climate‑resilient advisory services
EXPLANATION
Spencer describes AI techniques that map individual farm plots and classify crops, providing granular data that can be used for targeted advice, mitigation, and adaptation in smallholder agriculture.
EVIDENCE
He explains that Google’s AI can detect field boundaries, distinguish crop types via multispectral imagery, and identify events such as tillage or harvest, feeding this data into India’s Krishi DSS and state-level systems to guide farmers [318-332].
MAJOR DISCUSSION POINT
AI for smallholder agriculture
AGREED WITH
David Sandalow, Dan Travers, Vrushali Gaud
Argument 2
Open‑source tools and publicly available satellite imagery are critical to lower entry barriers for innovators and startups
EXPLANATION
Spencer argues that making AI tools and satellite data openly accessible enables startups and NGOs to develop climate solutions without prohibitive costs, fostering broader innovation.
EVIDENCE
He points to the availability of AI-enhanced agricultural landscape data, field-boundary models, and crop-type classifiers as part of a public digital infrastructure that supports NGOs, governments, and startups like Carbon Farm and Wadwani AI [322-329].
MAJOR DISCUSSION POINT
Open‑source data for innovation
AGREED WITH
David Sandalow, Vrushali Gaud
Dan Travers
1 argument · 192 words per minute · 614 words · 191 seconds
Argument 1
AI‑enabled grid forecasting, optimization and real‑time control are essential to integrate variable renewables and avoid costly blackouts
EXPLANATION
Dan emphasizes that modern electricity grids, with millions of distributed generators and variable demand, require AI for forecasting, optimal power flow, and real‑time control to maintain reliability and keep costs low.
EVIDENCE
He notes that AI is needed to “schedule and marshal all of these assets at AI speed,” to prevent blackouts and expensive backup generation, and to manage the new variability from solar, wind, data centres, EVs, and batteries [400-408].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The importance of AI for grid reliability, including forecasting, handling power fluctuations, and predicting demand to support greener cities, is discussed in the networking session notes on AI for energy systems [S30].
MAJOR DISCUSSION POINT
AI for grid reliability
Speaker 1
1 argument · 237 words per minute · 778 words · 196 seconds
Argument 1
UCL leverages AI across campus energy management, cement‑process optimisation, real‑estate sustainability and sea‑ice monitoring, illustrating interdisciplinary impact
EXPLANATION
Speaker 1 showcases how University College London integrates AI throughout its institution, from campus energy forecasting to industrial process optimisation and environmental monitoring, demonstrating AI’s cross‑disciplinary climate relevance.
EVIDENCE
He lists examples such as campus sensor data for energy demand forecasting, the Carbon Re spin-out using deep reinforcement learning for cement emissions, a partnership with PGM Real Estate for AI-enabled real-estate sustainability, and a sea-ice classification tool that aids Inuit communities [480-486][486-492].
MAJOR DISCUSSION POINT
UCL interdisciplinary AI climate work
Agreements
Agreement Points
AI’s net climate benefit outweighs the emissions from data‑center operations
Speakers: Uday Khemka, David Sandalow
The net climate benefit of AI far exceeds the emissions from data‑center operations, showing a positive overall balance
AI does have significant potential to contribute to reductions in greenhouse gas emissions
Both speakers argue that, although AI technologies consume energy, the greenhouse-gas reductions enabled by AI are larger than the emissions from the data centres that run AI models, resulting in a net positive climate impact [62-63][146-156].
POLICY CONTEXT (KNOWLEDGE BASE)
The claim sits within a broader policy debate: Green-AI reports stress the need to balance AI’s carbon footprint with its climate gains, while civil-society analyses argue the benefits are overstated, informing emerging sustainable-AI guidelines [S58][S59][S60][S61].
Urgent, large‑scale, cross‑sector collaboration is essential to accelerate AI‑driven climate action
Speakers: Uday Khemka, Ankur Puri, Nalin Agarwal, Vrushali Gaud, Spencer Low
GRAIL creates a global, cross‑sector partnership platform that links academia, industry, governments and funders to accelerate AI‑driven climate action
The climate‑development‑AI triple challenge demands immediate, large‑scale collaboration; the panel serves as an invitation to radical, action‑oriented partnership
Time is limited; the panel stresses the need to move from discussion to deployment of AI solutions across sectors
Participants are urged to join the collaborative platforms, co‑create solutions and scale them quickly to meet climate targets
The Climate Collective’s AI‑for‑power program connects startups with utilities across the Global South, delivering pilots and building an open‑innovation platform
Google’s Climate Tech Center in India and its open‑source data initiatives (Earth AI, Flood Hub) aim to democratise climate data and build green‑skill capacity
Open‑source tools and publicly available satellite imagery are critical to lower entry barriers for innovators and startups
All these speakers emphasize that the climate-development-AI challenge can only be met through rapid, coordinated action across academia, industry, governments and civil society, using platforms such as GRAIL, the Climate Collective, and open-source data initiatives to co-create and scale solutions [27-29][430-447][364-390][298-304][322-329].
POLICY CONTEXT (KNOWLEDGE BASE)
This aligns with Indian policy discussions that call for coordinated data strategies, standards, and incentives across ministries to sustain AI tools for climate, as highlighted in data-access and governance forums [S46][S47][S63].
Core AI capabilities – pattern detection, prediction, optimisation and simulation – are directly applicable to climate mitigation and adaptation
Speakers: David Sandalow, Dan Travers, Spencer Low, Vrushali Gaud
AI’s core capabilities—detecting patterns, predicting outcomes, optimizing processes, and simulating scenarios—are directly applicable to climate solutions
AI‑enabled grid forecasting, optimisation and real‑time control are essential to integrate variable renewables and avoid costly blackouts
AI models that delineate smallholder farm boundaries and identify crops enable precision agriculture and climate‑resilient advisory services
Google applies AI to improve data‑center energy use, water‑leak detection, and to provide flood‑risk mapping for communities
The speakers concur that the four fundamental AI functions (detect, predict, optimise, simulate) underpin a wide range of climate applications, from methane detection and weather forecasting to grid management, precision agriculture and infrastructure resilience [170-188][400-408][318-332][284-289].
POLICY CONTEXT (KNOWLEDGE BASE)
Expert frameworks break AI functions into these four categories and map them to climate challenges, a view echoed in IGF sessions on AI-driven climate prediction and optimisation [S51][S53][S54].
Data availability and skilled personnel are the main barriers; open data and capacity‑building are needed to unlock AI’s climate potential
Speakers: David Sandalow, Vrushali Gaud, Spencer Low
Major obstacles include insufficient high‑quality data, a shortage of trained AI‑climate specialists, and a lack of trust in algorithmic outputs
Google’s Climate Tech Center and open‑source initiatives aim to democratise climate data and build green‑skill capacity
Open‑source tools and publicly available satellite imagery are critical to lower entry barriers for innovators and startups
All three speakers highlight that limited, standardised datasets and a shortage of AI-climate expertise hinder progress, and they stress the importance of open-source data platforms and capacity-development programmes to overcome these challenges [158-162][298-304][322-329].
POLICY CONTEXT (KNOWLEDGE BASE)
Multiple reports identify lack of standardized, open datasets and a skills gap as key obstacles, and call for policies that improve data accessibility and invest in capacity-building [S46][S49][S50].
AI can deliver both incremental efficiency gains and transformational breakthroughs for emissions reductions
Speakers: David Sandalow, Dan Travers, Nalin Agarwal
AI can deliver both incremental efficiency gains and transformational breakthroughs that significantly cut greenhouse‑gas emissions
AI‑enabled grid forecasting, optimisation and real‑time control are essential to integrate variable renewables and avoid costly blackouts
Participants are urged to join the collaborative platforms, co‑create solutions and scale them quickly to meet climate targets
The speakers agree that AI offers a spectrum of impact, from modest efficiency improvements (e.g., HVAC optimisation) to disruptive innovations (e.g., new materials, grid-scale AI tools), and that scaling these solutions through collaborative platforms is crucial [146-154][400-408][73-78].
POLICY CONTEXT (KNOWLEDGE BASE)
Case studies show AI improving energy efficiency, demand forecasting, and resource optimisation, illustrating both modest gains and potential transformative impacts on emissions [S54][S58][S60].
Similar Viewpoints
All five speakers stress that coordinated, cross‑sector networks and open‑access platforms are the fastest route to scale AI‑driven climate solutions, urging participants to engage with these collaborative ecosystems immediately [27-29][430-447][364-390][298-304][322-329].
Speakers: Uday Khemka, Ankur Puri, Nalin Agarwal, Vrushali Gaud, Spencer Low
GRAIL creates a global, cross‑sector partnership platform that links academia, industry, governments and funders to accelerate AI‑driven climate action
The climate‑development‑AI triple challenge demands immediate, large‑scale collaboration; the panel serves as an invitation to radical, action‑oriented partnership
Time is limited; the panel stresses the need to move from discussion to deployment of AI solutions across sectors
Participants are urged to join the collaborative platforms, co‑create solutions and scale them quickly to meet climate targets
The Climate Collective’s AI‑for‑power program connects startups with utilities across the Global South, delivering pilots and building an open‑innovation platform
Google’s Climate Tech Center in India and its open‑source data initiatives (Earth AI, Flood Hub) aim to democratise climate data and build green‑skill capacity
Open‑source tools and publicly available satellite imagery are critical to lower entry barriers for innovators and startups
These speakers converge on the idea that the four fundamental AI functions underpin practical climate applications across energy, agriculture and infrastructure sectors [170-188][400-408][318-332][284-289].
Speakers: David Sandalow, Dan Travers, Spencer Low, Vrushali Gaud
AI’s core capabilities—detecting patterns, predicting outcomes, optimizing processes, and simulating scenarios—are directly applicable to climate solutions
AI‑enabled grid forecasting, optimisation and real‑time control are essential to integrate variable renewables and avoid costly blackouts
AI models that delineate smallholder farm boundaries and identify crops enable precision agriculture and climate‑resilient advisory services
Google applies AI to improve data‑center energy use, water‑leak detection, and to provide flood‑risk mapping for communities
Unexpected Consensus
AI’s role in food and agriculture systems
Speakers: David Sandalow, Spencer Low, Vrushali Gaud
AI can do a lot to improve both mitigation and resilience in the food system
AI models that delineate smallholder farm boundaries and identify crops enable precision agriculture and climate‑resilient advisory services
Google applies AI to improve data‑center energy use, water‑leak detection, and to provide flood‑risk mapping for communities
While each speaker approached the topic from different angles (policy, agricultural mapping, and operational resilience), they all highlighted AI as a key lever for enhancing food system sustainability and climate resilience, a convergence not explicitly foregrounded in the agenda but emerging across the discussion [209-212][318-332][284-289].
POLICY CONTEXT (KNOWLEDGE BASE)
Several agriculture-focused AI initiatives, including the India AI Impact Summit, Maharashtra’s AI-for-agriculture policy, and international dialogues on AI for food security, highlight the sector as a climate priority [S55][S57][S62][S64].
Overall Assessment

The panel shows strong consensus that AI can deliver a net positive climate impact, that its core technical capabilities are directly applicable across sectors, and that rapid, cross‑sector collaboration—through platforms like GRAIL, the Climate Collective and open‑source data initiatives—is essential to scale solutions. Shared concerns about data scarcity, skills gaps and governance reinforce the call for capacity‑building and standardised data frameworks.

High consensus: most speakers align on the urgency of collaboration, the dual nature of AI impact (incremental and transformational), and the need to address data and talent barriers. This unified stance suggests a solid foundation for coordinated policy and investment actions to accelerate AI‑enabled climate mitigation and adaptation.

Differences
Different Viewpoints
Openness of AI tools and data for climate solutions
Speakers: Dan Travers, Vrushali Gaud
Dan Travers argues that AI tools should be open-source and non-profit to ensure transferability across grids and rapid scaling [416-420]. Vrushali Gaud describes Google’s AI applications as largely internal (optimising data-centre energy use, water-leak detection) while offering some open-source data initiatives (Earth AI, Flood Hub) but does not commit to fully open-source tools [284-289][290-295][298-304].
Dan promotes a fully open-source model for AI climate tools, whereas Google emphasizes internal optimisation and selective open data, indicating a divergence on how openly AI solutions should be shared and deployed [416-420][284-289][290-295][298-304].
POLICY CONTEXT (KNOWLEDGE BASE)
Debates centre on establishing clear data-access policies that balance openness with safeguards for critical infrastructure, and on skill gaps that hinder open-source AI adoption [S46][S49][S63].
Prioritisation of sectors for immediate AI‑driven climate action
Speakers: Uday Khemka, Spencer Low, Dan Travers, Ankur Puri
Uday calls for rapid, cross-sector collaboration covering power, built environment, materials, carbon markets, etc., urging participants to join the GRAIL platform and scale solutions quickly [54-59][70-78]. Spencer focuses on agriculture and smallholder farms, highlighting AI for field-boundary mapping and crop classification as a priority for climate-resilient food systems [311-332]. Dan stresses the grid as the critical bottleneck, arguing AI-enabled forecasting, optimisation and real-time control are essential to integrate variable renewables and avoid blackouts [400-408]. Ankur emphasises quantifying economic and emissions impact of AI use-cases to allocate scarce resources to the highest-value interventions, without committing to a specific sector [464-467].
All speakers agree AI is vital for climate mitigation, but they diverge on which sector should receive immediate focus: Uday advocates a broad, multi-sector approach, Spencer prioritises agriculture, Dan prioritises grid reliability, and Ankur stresses data-driven prioritisation rather than sectoral preference [54-59][70-78][311-332][400-408][464-467].
POLICY CONTEXT (KNOWLEDGE BASE)
Bilateral programs and policy briefs identify climate-resilient agriculture, energy, and industry as priority sectors for AI deployment, informing discussions on sector selection [S64][S62][S55].
Unexpected Differences
Degree of openness in AI climate solutions
Speakers: Dan Travers, Vrushali Gaud
Dan’s stance that AI tools should be fully open-source and non-profit for maximal transferability [416-420]. Google’s approach of leveraging internal AI optimisation while providing selective open-source datasets, without a blanket open-source commitment for its tools [284-289][290-295][298-304].
While both parties aim to accelerate climate action, the contrast between a fully open-source philosophy and a more proprietary, internal-focused model was not anticipated given the overall collaborative tone of the session, revealing a hidden tension over data and tool accessibility [416-420][284-289][290-295][298-304].
POLICY CONTEXT (KNOWLEDGE BASE)
Policy dialogues stress the need for governance models that protect security while enabling shared AI tools, reflecting calls for a balanced approach to openness in climate AI solutions [S46][S49][S63].
Overall Assessment

The panel displayed strong consensus on the urgency of the climate‑development‑AI challenge and the need for collaborative action, but disagreements emerged around the preferred sectoral focus and the openness of AI tools and data. These divergences reflect differing institutional priorities—broad multi‑sector platforms (Uday), sector‑specific pilots (Spencer, Dan), impact‑driven resource allocation (Ankur), and varying openness strategies (Dan vs Google).

Moderate: While there is no outright conflict on the overarching goal, the differing strategic preferences could affect coordination and speed of implementation. Aligning on a shared roadmap that balances sectoral priorities and openness policies will be crucial to translate the collective enthusiasm into concrete climate outcomes.

Partial Agreements
All participants concur that AI is essential for climate mitigation and that collaborative, multi‑stakeholder platforms are needed, but they differ on the primary mechanism to achieve this—Uday’s GRAIL network, Google’s internal and open‑data initiatives, the Climate Collective’s startup‑utility model, open‑source tools, or McKinsey’s impact quantification—reflecting varied pathways to the same overarching goal [27-29][54-68][146-154][158-162][527-530][364-390][284-289][298-304][311-332][416-420][464-467].
Speakers: Uday Khemka, David Sandalow, Adam Sobey, Nalin Agarwal, Vrushali Gaud, Spencer Low, Dan Travers, Ankur Puri
Uday frames the climate-development-AI triple challenge as an urgent call for radical, collaborative action and promotes the GRAIL network as the vehicle for rapid co-creation [27-29][54-68]. David highlights AI’s potential (incremental and transformational) and stresses the need for data, trained personnel and trust to realise that potential [146-154][158-162]. Adam reports concrete AI-driven emissions reductions in shipping, HVAC and urban farming, underscoring immediate impact [527-530]. Nalin describes the Climate Collective’s open-innovation platform that connects startups with utilities to pilot AI-for-power solutions in the Global South [364-390]. Vrushali outlines Google’s internal AI optimisation and open-source data initiatives (Earth AI, Flood Hub) to democratise climate data and build green skills [284-289][290-295][298-304]. Spencer emphasizes open-source agricultural data and tools to empower NGOs and startups for smallholder farms [311-332][322-329]. Dan stresses open-source, non-profit AI tools for grid reliability and transferability across regions [416-420]. Ankur discusses quantifying economic and emissions impact of AI use-cases to focus scarce resources on high-value interventions [464-467].
Takeaways
Key takeaways
AI can deliver both incremental efficiency improvements and transformational breakthroughs that can substantially reduce greenhouse‑gas emissions across multiple sectors. The net climate benefit of AI far outweighs the emissions from AI infrastructure (data‑centres), making AI a net positive tool for climate action. Core AI capabilities—pattern detection, prediction, optimisation, and simulation—are directly applicable to climate mitigation and adaptation (e.g., methane detection, renewable‑grid forecasting, material discovery, extreme‑weather response). Collaborative, cross‑sector networks such as GRAIL, the Climate Collective, and open‑source initiatives are essential to scale AI‑driven climate solutions quickly. Sector‑specific pilots demonstrate impact: grid‑level forecasting and optimisation (Open Climate Fix), precision agriculture for smallholder farms (Google/Spencer Low), campus energy management and cement‑process optimisation (UCL), emissions reductions in shipping, HVAC and urban farming (Alan Turing Institute). Key barriers to wider AI deployment are lack of high‑quality data, shortage of AI‑climate talent, and trust/governance concerns, especially around generative AI in real‑time operations. Urgent, radical collaboration is required; the panel serves as an invitation to move from discussion to deployment at scale.
Resolutions and action items
Invite all participants to join the GRAIL online collaborative platform to co‑create and scale AI‑climate solutions. Continue engagement with governments worldwide to embed AI‑climate initiatives into policy frameworks (as outlined by GRAIL). Scale the Climate Collective’s AI‑for‑Power open‑innovation program, including its three‑component platform (open‑innovation pipeline, knowledge hub, solution database). McKinsey to finalize quantification of economic and emissions impact for identified AI use‑cases to prioritize high‑value interventions. Google to operationalise its Climate Tech Center in India, focusing on non‑electricity decarbonisation (low‑carbon steel, sustainable aviation fuel) and green‑skill capacity building. Open Climate Fix to deploy its proven solar‑forecasting model to the Indian grid and expand open‑source tools for other regions. Make publicly available the datasets and tools referenced (Earth AI, Flood Hub, Krishi DSS) to enable startups and NGOs to build climate‑resilient services. Encourage each organisation to establish dedicated AI‑climate teams as a standard practice.
Unresolved issues
How to secure standardized, high‑quality climate data for the Global South and integrate disparate data sources into usable AI models. Developing a scalable pipeline for training and retaining AI‑climate specialists across academia, industry, and government. Establishing robust governance frameworks for the safe use of generative AI in real‑time grid and infrastructure operations. Funding mechanisms and long‑term financing for large‑scale pilots and subsequent commercial deployment. Detailed sector‑specific roadmaps for built environment, industrial decarbonisation, and transportation, which were omitted due to time constraints. Mechanisms for measuring and verifying the actual emissions reductions achieved by AI‑driven pilots at scale.
Suggested compromises
Accepting a limited discussion format (switcheroo, rapid introductions) to maximise the number of contributors despite time pressure. Balancing the focus between AI’s own emissions (data-centre GHG) and its larger climate mitigation potential, acknowledging both concerns. Prioritising both mitigation (efficiency, emissions cuts) and adaptation (flood mapping, resilient agriculture) within the same collaborative agenda. Emphasising incremental gains where quick wins are possible while simultaneously pursuing longer-term transformational research.
Thought Provoking Comments
We talked to a whole bunch of people in the AI community, a whole bunch of people in the industrial and power, automotive sectors… are you talking to each other? Shockingly, people were not talking to each other.
This observation exposed a critical silo‑problem that prevents AI innovations from reaching the sectors that need them most, highlighting a systemic barrier rather than a technical one.
It shifted the conversation from describing AI capabilities to diagnosing why those capabilities have not been deployed at scale, prompting later speakers (e.g., Vrushali, Spencer, Dan) to discuss concrete platforms and collaborations that bridge those gaps.
Speaker: Uday Khemka
The Grantham Institute quantified 0.5‑1.4 Gt of extra GHG from data centers, but AI could potentially remove 3.5‑5.4 Gt of emissions – a clear net positive balance.
Provides a data‑driven counter‑argument to the common criticism that AI’s energy use outweighs its benefits, grounding the debate in quantitative evidence.
Set a factual baseline that allowed David Sandalow and others to discuss AI’s net impact without being sidetracked by concerns over data‑center emissions, keeping the focus on scaling solutions.
Speaker: Uday Khemka
AI does have significant potential to contribute to reductions in greenhouse gas emissions… less than 1 % of current emissions come from AI itself.
Reinforces the earlier point with an independent source, while also framing AI’s contribution as a small fraction of the problem, which alleviates fears about AI’s carbon footprint.
Validated Uday’s earlier claim, encouraging the panel to move quickly toward actionable ideas rather than debating AI’s environmental cost.
Speaker: David Sandalow
We broke down AI capabilities into four basic categories: detect, predict, optimize, and simulate.
Offers a clear, memorable framework that translates abstract AI concepts into concrete climate‑action levers, making the technology accessible to non‑technical stakeholders.
Provided a structural lens that other speakers referenced (e.g., Vrushali’s discussion of optimization, Dan’s grid scheduling), aligning diverse examples under a common taxonomy.
Speaker: David Sandalow
The main barriers to AI’s impact are lack of data, lack of trained personnel, and trust – organizations won’t use AI unless they trust it.
Identifies non‑technical, systemic obstacles that are often overlooked, shifting the dialogue toward capacity‑building and governance rather than pure technology.
Prompted Vrushali to talk about Google’s role in democratizing data and building trustworthy tools, and led Ankur to mention quantifying value to prioritize investments.
Speaker: David Sandalow
Google is now a full‑stack company: we run the data‑center infrastructure, we aim for carbon‑free energy, and we use AI to optimize everything from water leaks to grid operations.
Expands the notion of corporate climate responsibility beyond offsets to systemic operational changes, illustrating how a tech giant can embed sustainability across its entire value chain.
Moved the conversation from abstract AI potential to concrete corporate practices, inspiring other panelists to discuss how their organizations can adopt similar full‑stack approaches.
Speaker: Vrushali Gaud
We’ve trained AI to digitally enhance field boundaries, identify crops, and detect events like sowing or harvest – data now feeds into India’s Krishi DSS and supports NGOs, governments, and startups.
Highlights a tangible, scalable AI application that directly benefits smallholder farmers, linking climate mitigation with livelihood improvement in the Global South.
Introduced agriculture as a critical sector (previously under‑represented), prompting a shift toward discussing food‑system resilience and the need for digital public goods.
Speaker: Spencer Low
The grid is now a massive, highly variable system with millions of distributed generators; we need AI‑driven, real‑time scheduling or we face blackouts and soaring costs.
Frames grid modernization as an urgent, concrete problem where AI is not optional but essential, and ties technical challenges to social and political risks (public backlash).
Steered the dialogue toward power‑system specifics, reinforcing the earlier “four AI capabilities” framework and leading Ankur to discuss quantifying economic and emissions impact.
Speaker: Dan Travers
We are shaping four challenges – operational improvement, strategic intelligence, transformation, and autonomous operations – and we are beginning to quantify both economic and emissions impact to focus scarce resources on the most important problems.
Moves the conversation from idea generation to prioritization and measurement, introducing a disciplined, impact‑oriented methodology for scaling AI‑climate solutions.
Provided a roadmap for moving from pilots to large‑scale deployment, influencing the closing remarks that emphasized “radical collaboration” and concrete next steps.
Speaker: Ankur Puri
AI reduced emissions by 18 % in shipping, 42 % in HVAC, and enabled an underground urban farm powered entirely by renewables – we can’t do this alone, we need global south partnerships.
Offers concrete success metrics that demonstrate AI’s immediate climate benefits, while stressing the necessity of inclusive, cross‑regional collaboration.
Reinforced the panel’s central theme of collaboration, and broadened the geographic scope, prompting acknowledgment from Uday and tying back to the earlier call for global partnership.
Speaker: Adam Sobey
Overall Assessment

The discussion was driven forward by a series of insight‑rich interventions that moved the panel from a high‑level framing of the AI‑climate nexus to concrete, actionable pathways. Early remarks about siloed communication and quantitative net‑benefit calculations created a problem‑definition foundation. David Sandalow’s four‑pillared AI framework and identification of data, talent, and trust gaps supplied a shared vocabulary and highlighted systemic barriers. Corporate and sector‑specific examples from Google, agriculture, and grid operators then illustrated how those barriers can be overcome in practice, while Ankur’s emphasis on quantifying impact introduced a disciplined prioritization step. Together, these comments shifted the tone from abstract optimism to focused, collaborative problem‑solving, culminating in a clear call for coordinated, measurable action across academia, industry, and the Global South.

Follow-up Questions
Where should data centers be sited to minimize community and infrastructure impact?
Determining optimal locations for data centers is crucial to reduce additional emissions and ensure positive social and environmental outcomes.
Speaker: Vrushali Gaud
How can we democratize data, encourage innovation, and scale AI solutions quickly?
Rapid, inclusive scaling of AI tools is needed to meet climate targets; understanding mechanisms for open data and fast innovation pipelines is essential.
Speaker: Vrushali Gaud
How can we embed green skills across all domains, especially in tier‑two cities in India?
Building a workforce with climate‑first thinking in emerging urban areas will sustain long‑term AI‑driven climate action.
Speaker: Vrushali Gaud
How can we address the lack of data, particularly in the Global South, for AI climate applications?
Data scarcity limits model training and deployment; research is needed to create digital public infrastructure and data sharing frameworks.
Speaker: David Sandalow
How can we develop trained personnel and build AI expertise for climate work?
A shortage of skilled staff hampers AI adoption; capacity‑building programs and curricula are required.
Speaker: David Sandalow
How can we establish trust and explainability in AI models used for climate mitigation?
Stakeholder confidence is essential for AI uptake; research into transparent, auditable AI methods is needed.
Speaker: David Sandalow
How can we standardize data for power‑sector AI tools such as dynamic line rating and optimal power flow?
Standardized, high‑quality data is a prerequisite for effective AI optimization in electricity grids.
Speaker: David Sandalow
What are the security and safety risks of real‑time AI deployment in grid operations, and how can they be mitigated?
Ensuring that AI does not introduce new vulnerabilities is vital for reliable, safe grid management.
Speaker: David Sandalow
How can AI be used to map smallholder farm boundaries and identify crops at scale?
Accurate, scalable farm mapping enables targeted climate‑smart agriculture interventions for the majority of global farmers.
Speaker: Spencer Low
How can we evaluate the economic and emissions impact of AI solutions across sectors to prioritize investments?
Quantifying cost‑benefit and carbon‑reduction potential helps allocate scarce resources to the most effective AI projects.
Speaker: Ankur Puri
How can a global AI‑for‑power innovation platform (open‑innovation program, knowledge hub, solution database) be built and scaled?
A coordinated platform can accelerate the diffusion of AI solutions in power systems, especially in the Global South.
Speaker: Nalin Agarwal
How can AI be integrated into the built environment, materials innovation, and transportation to accelerate climate impact?
Expanding AI applications beyond energy to other high‑emission sectors is needed for comprehensive decarbonization.
Speaker: Uday Khemka
What is the net climate impact of AI when accounting for both its emissions (e.g., data‑center GHGs) and its mitigation potential?
Understanding the balance between AI‑induced emissions and AI‑enabled reductions validates the overall benefit of AI for climate.
Speaker: Uday Khemka
How can AI‑driven flood‑risk mapping (e.g., Flood Hub) be expanded and adopted by utilities, insurers, and NGOs?
Scaling flood prediction tools can improve resilience and inform climate‑adaptation strategies for vulnerable communities.
Speaker: Vrushali Gaud
How can AI‑based solar forecasting tools be transferred and deployed across different national grids?
Portable, high‑accuracy forecasting reduces reliance on backup generation and supports renewable integration worldwide.
Speaker: Dan Travers
How can open‑source AI tools for grid optimization be commercialized and scaled while maintaining openness?
Balancing open collaboration with sustainable business models can accelerate widespread adoption of grid AI solutions.
Speaker: Dan Travers
How can AI improve mitigation and resilience in food systems, which account for ~30% of GHG emissions?
Targeted AI applications in agriculture can reduce emissions and enhance adaptation to climate impacts.
Speaker: David Sandalow
How can digital public infrastructure in India be expanded to support climate‑focused AI across sectors?
Robust, accessible data platforms are foundational for AI innovations in agriculture, energy, and beyond.
Speaker: Spencer Low

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Building Public Interest AI Catalytic Funding for Equitable Compute Access


Session at a glance: Summary, keypoints, and speakers overview

Summary

The panel addressed the emerging “compute divide” that limits equitable AI development and explored ways to democratize access to hardware, data, and talent [4-6][8-10]. Deepali noted India’s AI mission, which is deploying over 38,000 public GPUs to build a sovereign yet open compute ecosystem for the Global South [14-18]. She positioned philanthropy as a catalyst that reduces risk, unlocks capital, and convenes partnerships to scale action [22-24][30-32]. Sushant introduced a new report from the Democratizing AI Resources Working Group, led by Dr. Saurabh Garg, and invited him to present its findings [40-48][50-51]. Garg identified six pillars (compute, capability, collaboration, connectivity, compliance, and context) and said compute scarcity is the chief barrier for AI ecosystems [68-73][75-79]. He proposed “Maitri,” a voluntary, modular digital public good designed to share affordable compute and support open-source models and governance [77-82]. Responding to a question on treating compute as a public utility, Garg advocated intelligent prioritization over rationing and highlighted a role for philanthropic funding to ensure affordable access [109-112]. Martin warned that new data centres may become underused without local data and open-source support, stressing that software ecosystems are as vital as hardware [126-133][138-140]. Vilas argued that traditional sovereignty concepts are too narrow and that AI diffusion requires active institution-building linking compute, data, talent, and policy, with philanthropy acting as an intermediary [163-176][184-191]. Dr. Shikha’s African compute demand index calls for 2.5 million GPU hours annually, highlighting a large gap and the need for investment readiness in power, talent, and governance [233-250][260-284]. She urged South-South collaborations (e.g., India supplying GPU hours to Burundi) but warned hardware alone is insufficient without clear use cases and capacity [292-298].
Shaun noted that energy costs, latency, and data-sovereignty concerns limit cross-border compute sharing, and suggested philanthropy aggregate demand, subsidise access, and address skills gaps [315-328][329-334]. The session concluded that coordinated institutional innovation, transparent governance, and targeted philanthropic support are needed to turn compute into a shared public good for the Global South [30-33][191][361-362].


Keypoints

Major discussion points


The emerging “compute divide” and India’s public-interest AI infrastructure – The opening remarks framed AI’s bottleneck as a lack of access to GPUs, cloud capacity and scalable compute, which will decide who shapes AI’s future [4-7]. India’s AI mission is presented as a concrete response, mobilising >38,000 public-sector GPUs to build a sovereign, open compute ecosystem [14-18].


A multi-pillar roadmap and the “Maitri” collaborative platform – The working group identified six foundational pillars (compute, capability, collaboration, connectivity, compliance, context) that must underpin democratisation [68-70]. To operationalise these, a voluntary, modular digital public good, Maitri (Multi-Stakeholder AI for Trusted and Resilient Infrastructure), is proposed as a shared-compute and data platform that countries can adopt and customise [76-79].


Beyond hardware: data, open-source ecosystems and governance – Panelists warned that simply installing data-centres can create “white-elephant” assets if data and open-source tools are missing [124-131]. They highlighted that most open-source funding comes from large corporations, leaving critical dependencies under-resourced, and called for new financing models for the open-source stack [133-138]. Dr Shikha further illustrated the need for concrete demand metrics (GPU-hour indices) and investment-readiness assessments to avoid wasted compute [224-233].


South-South partnerships, investment readiness, and the catalytic role of philanthropy – The discussion stressed the importance of aligning compute supply with local use-cases (e.g., health, education, agriculture) and building “investment-readiness” capacities (power, talent, data, governance) before deploying hardware [260-270][292-298]. Philanthropic actors were positioned as risk-mitigators and conveners that can fund institutions, lower transaction costs and help prioritize public-interest projects [23][111-112][188-190].


Overall purpose / goal of the discussion


The session was designed to move from diagnosing the global “compute divide” to outlining concrete actions: (1) defining what AI resources need to be democratized, (2) exploring how South-South collaborations and catalytic financing can accelerate equitable access, and (3) identifying specific commitments (e.g., the Maitri platform, investment-readiness frameworks) that can be delivered within the year [24-33][27-29].


Tone of the conversation


Opening: Visionary and urgent, emphasizing the transformative promise of AI and the risk of concentration [1-10].


Middle: More analytical and technical, with panelists presenting data, frameworks, and critical concerns about under-utilised infrastructure and open-source funding [52-70][124-138].


Later: Collaborative and solution-oriented, focusing on partnership models, concrete indices, and the role of philanthropy, while maintaining a hopeful, forward-looking attitude [224-233][292-298][361-362].


Overall, the tone shifted from a high-level rallying call to a detailed, pragmatic dialogue about how to operationalise equitable AI compute across the Global South.


Speakers

Deepali Khanna


Role/Title: Senior leader at the Rockefeller Foundation (speaker/host)


Expertise/Area: Philanthropy, AI democratization, public interest AI infrastructure


Sushant Kumar


Role/Title: Moderator/host; partner at Kalpa Impact


Expertise/Area: AI policy, compute access, report coordination


Dr. Saurabh Garg


Role/Title: Secretary, Ministry of Statistics and Programme Implementation, Government of India [S3][S4]


Expertise/Area: AI governance, public-interest AI resources, compute infrastructure


Dr. Shikha Gitao (also referred to as Dr. Shikoh Gitau)


Role/Title: Founder & CEO, Kala AI [S1]


Expertise/Area: AI for development in Africa, compute demand indexing, South-South partnerships


Andrew Sweet


Role/Title: Vice President, Rockefeller Foundation; panel moderator [S17]


Expertise/Area: Philanthropic strategy, AI policy, convening multi-stakeholder dialogues


Martin Tisné


Role/Title: Founder, Current AI; Public-Interest Envoy for France’s AI Action Summit [S15]


Expertise/Area: Multi-stakeholder AI initiatives, open-source AI, data sovereignty


Vilas Dhar


Role/Title: President, Patrick J. McGovern Foundation; member, UN Secretary-General’s High-Level Advisory Board on AI [S12]


Expertise/Area: Philanthropy for AI, global AI governance, institutional building


Shaun Seow


Role/Title: CEO, Philanthropy Asia Alliance; former executive at Temasek and Mediacorp [S9]


Expertise/Area: Collaborative philanthropy in Asia, compute infrastructure financing, impact investing


Additional speakers (not in the provided list):


– Shri Abhishek Singh – acknowledged leader, likely senior government/partner representative.


– Charu – noted for extensive work organizing the session (no title given).


– Dr. Sarabgarg – appears to be a misspelling of Dr. Saurabh Garg (already listed).


– Dr. Gorf – mentioned briefly when handing over; role not specified.


– Velas – referenced as a colleague knowledgeable on data stewardship.


– Anish – partner at Kalpa Impact (no title given).


– Jennifer – partner at Kalpa Impact (no title given).


Full session reportComprehensive analysis and detailed insights

The session opened with Deepali Khanna reminding participants that the promise of artificial intelligence is now being limited by a “compute divide” – unequal access to GPUs, cloud capacity and scalable infrastructure that will decide who gets to shape AI’s future [4-7]. She argued that democratization must move beyond “catch-up” to expanding the pool of leaders, ensuring that breakthroughs are not confined to a handful of geographies but reflect diverse languages and lived realities [8-10][14-18][19-21].


Khanna then framed philanthropy as a catalyst that can reduce risk, unlock capital and convene unlikely partnerships to move from diagnosis to action, outlining a three-act agenda: defining what resources to democratize, exploring South-South partnerships and catalytic financing, and securing concrete commitments within the year [22-28][30-32]. This set the tone for a shift from high-level rhetoric to concrete operational steps.


Sushant Kumar introduced the newly-released report “Opening up computational resources for new AI futures”, produced by the Democratising AI Resources Working Group under Dr Saurabh Garg’s leadership [40-48][50-51]. Dr Garg (who serves as Secretary of the Ministry of Statistics and Programme Implementation and previously led the Aadhaar technology stack) then presented the group’s findings [S1].


Dr Garg began by reaffirming that AI will transform the world but that the transformation must be equitable, inclusive and aligned with the public interest [56-58]. He recalled the AI Summit’s three guiding sutras – people, planet and progress – which frame AI as a tool for human welfare, sustainable development and shared prosperity [59-62]. From the working-group discussions emerged six foundational pillars that should underpin any roadmap: compute, capability, collaboration, connectivity, compliance and context [68-70]. He identified today’s primary barrier as access to GPUs, accelerators and high-performance clusters, and argued that the solution must make compute distributable, affordable and reliable rather than concentrating it in a few geographies [70-71][75-76]. To operationalize this, the group proposed “Maitri” (Multi-Stakeholder AI for Trusted and Resilient Infrastructure), a voluntary, modular digital public good that countries can adopt, customize and build upon to share compute, data and governance resources [76-82]. Garg also warned that future AI models may become smaller and domain-specific, potentially easing the compute burden, and highlighted the deeper issue of energy consumption – comparing the power a human being runs on to a single 100-watt bulb – suggesting that focusing on model efficiency could be as important as expanding hardware [84-89].
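The comparison Garg relayed holds up to simple unit conversion: a 2,000 kcal/day diet works out to roughly 97 W of continuous power, about one 100-watt bulb. The sketch below is illustrative arithmetic, not part of the working group’s report:

```python
# Back-of-the-envelope check of the "human vs. gigawatt data center"
# comparison cited in the keynote: express a 2,000 kcal/day energy
# intake as an average continuous power draw in watts.

KCAL_TO_JOULES = 4184          # 1 kilocalorie = 4,184 joules
SECONDS_PER_DAY = 24 * 60 * 60

def daily_kcal_to_watts(kcal_per_day: float) -> float:
    """Convert a daily energy intake in kcal to average power in watts."""
    return kcal_per_day * KCAL_TO_JOULES / SECONDS_PER_DAY

human_watts = daily_kcal_to_watts(2000)
print(f"A 2,000 kcal/day human runs on ~{human_watts:.0f} W")  # ~97 W

# Contrast with a hypothetical 1 GW data-center campus:
ratio = 1e9 / human_watts
print(f"1 GW of compute draws as much power as ~{ratio:,.0f} humans")
```

The ~97 W result is what makes the gigawatt framing striking: a single large AI campus draws the metabolic power of roughly ten million people.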


When asked whether compute should be treated as a public utility and if access ought to be rationed, Garg replied that the focus should be on “intelligent prioritization” of the shared platform rather than strict rationing [109-110]. He emphasized that philanthropic organisations have a key role in supporting this prioritization, helping to ensure that affordable compute capacity is available for public-interest projects [111-112].


Martin Tisné shifted the discussion to the risk of “white-elephant” data centres that sit idle without contextual data and open-source tools [124-128]. He argued that compute alone is insufficient; without locally relevant datasets, linguistic diversity and robust open-source ecosystems, the hardware will not translate into impact [128-132]. He further noted that most open-source funding comes from large corporations, leaving critical lower-tier dependencies under-resourced, and called for philanthropic financing to sustain these essential components [133-138][140].


Vilas Dhar used the Indian Premier League as an analogy, but warned that the Westphalian notion of sovereignty – treating compute as a territorial asset – is too narrow for AI [165-170]. He advocated an “active impact” model of AI diffusion, where institutions deliberately link compute to concrete local problems (health, education, agriculture) rather than relying on trickle-down economics [179-186]. According to Dhar, new intermediary organisations – such as Kalpa Impact – are needed to connect governments, philanthropies and innovators, enabling shared-prosperity interdependence rather than isolated national capacity [188-191].


Dr Shikoh Gitau presented a quantitative approach, unveiling a Compute Demand Index and an AI Investment-Readiness Index for Africa. She estimated that the continent needs 2.5 million GPU-hours annually (rising to 7.5 million over three years) but currently possesses only about 5 % of that capacity [233-246]. She highlighted that, in many African workshops, only a handful of participants had ever touched a GPU, underscoring how compute remains a concrete, unfamiliar resource for many developers [233-236]. Gitau stressed that hardware must be matched with power supply, talent, data and clear use-cases; otherwise donated GPUs risk becoming idle, as she observed in past digital-transformation projects where equipment was never powered [274-283][292-298]. She called for South-South collaborations – for example, India supplying GPU hours to Burundi – but insisted that such exchanges be tied to specific development outcomes [292-298].
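The shortfall Gitau quantifies can be made concrete with back-of-the-envelope arithmetic. The sketch below uses only the figures reported above (2.5 million GPU-hours of annual demand, roughly 5 % coverage) and is illustrative, not part of her index methodology:

```python
# Rough sizing of the African compute gap using the panel's estimates:
# annual demand of 2.5M GPU-hours (rising to 7.5M over three years)
# against current capacity of ~5% of demand. Figures are illustrative.

def compute_gap(demand_gpu_hours: float, coverage_fraction: float):
    """Return (current_capacity, shortfall, demand_as_multiple_of_capacity)."""
    capacity = demand_gpu_hours * coverage_fraction
    shortfall = demand_gpu_hours - capacity
    return capacity, shortfall, demand_gpu_hours / capacity

capacity, shortfall, multiple = compute_gap(2_500_000, 0.05)
print(f"Current capacity : {capacity:,.0f} GPU-hours")   # 125,000
print(f"Shortfall        : {shortfall:,.0f} GPU-hours")  # 2,375,000
print(f"Demand is {multiple:.0f}x current capacity")     # 20x

# Three-year outlook at 7.5M GPU-hours, holding capacity fixed:
future_multiple = 7_500_000 / capacity
print(f"Future demand is {future_multiple:.0f}x today's capacity")  # 60x
```

At ~5 % coverage today, demand already exceeds capacity twenty-fold, and tripling demand without new capacity widens that to sixty-fold, which is why she ties any hardware transfer to power, talent and specific use-cases.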


Shaun Seow challenged the primacy of compute, arguing that energy availability and latency are deeper constraints. He noted that the bulk of GPU performance is owned by the United States and China, while India contributes only about 1 % [317-320], and that sharing compute across distances such as India-Indonesia would incur prohibitive latency of 50-100 ms, which he argued makes real-time AI workloads unworkable [325-327]. Seow suggested that philanthropy could aggregate demand across organisations to negotiate better cloud pricing and subsidize access for startups and impact-focused entities, thereby addressing both cost and skills gaps [332-334].
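Seow’s 50-100 ms figure is consistent with fiber physics: light in glass travels at roughly two-thirds of its vacuum speed, so even an idealized great-circle route between India and Indonesia implies a round trip near the bottom of that range before any routing or switching overhead. A minimal sketch, where the city coordinates and the 200,000 km/s fiber speed are standard approximations rather than figures from the session:

```python
import math

# Rough round-trip latency between two cities over fiber, to sanity-check
# the 50-100 ms India-Indonesia figure. Assumes an idealized great-circle
# path and light at ~2/3 of c in glass; real routes add detours and
# switching delay, so actual latency is higher.

EARTH_RADIUS_KM = 6371.0
FIBER_SPEED_KM_S = 200_000.0  # ~2/3 of the vacuum speed of light

def great_circle_km(lat1, lon1, lat2, lon2):
    """Haversine distance between two (lat, lon) points in degrees."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * EARTH_RADIUS_KM * math.asin(math.sqrt(a))

def round_trip_ms(distance_km: float) -> float:
    """Ideal fiber round-trip time in milliseconds."""
    return 2 * distance_km / FIBER_SPEED_KM_S * 1000

# Mumbai (19.08 N, 72.88 E) to Jakarta (6.21 S, 106.85 E):
d = great_circle_km(19.08, 72.88, -6.21, 106.85)
print(f"{d:,.0f} km great-circle, ideal fiber RTT ~{round_trip_ms(d):.0f} ms")
```

The idealized path alone yields a round trip in the mid-40 ms range, so once real cable routes and switching are added, the 50-100 ms estimate for interactive workloads is plausible.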


Across the panel, there was strong consensus that democratizing compute is essential, but the speakers diverged on where to allocate resources. Deepali, Garg and Kumar foregrounded large-scale GPU deployment and intelligent prioritization, whereas Martin, Gitau and Seow highlighted data, open-source sustainability and energy as equally limiting factors. Similarly, while Deepali framed India’s effort as a sovereign capability, Dhar argued for a relational, interdependent model of AI diffusion that moves beyond Westphalian sovereignty [165-170][179-186].


The draft report is publicly available and participants have until 31 March 2024 to read it, provide feedback, and submit reactions [363-364]. The panel identified four pillars for equitable AI:


1. shared, modular compute platforms such as Maitri;


2. quantitative demand-readiness tools;


3. robust, context-sensitive governance frameworks;


4. catalytic philanthropy that funds open-source dependencies, builds intermediary institutions, and aggregates demand to lower costs [30-33][77-82][111-112][188-191][332-334].


The session closed with an invitation to continue South-South partnership dialogues and to translate compute capacity into tangible public-interest outcomes.


Session transcript: complete transcript of the session
Deepali Khanna

to be with us, so thank you. We are here because we believe in AI’s transformative potential, and I’m certain you’ve heard a great deal about it over the past few days. Today, this session is about something deeper. The digital divide is rapidly becoming a compute divide. AI today is not constrained by imagination. It is constrained by infrastructure, by who has access to GPUs, to cloud capacity, to scalable compute. And that divide will determine who shapes the future of AI. Democratization in this context is not about catching up. It is about expanding who gets to lead. It is about ensuring that the next generation of AI breakthroughs are not concentrated in a handful of geographies, but are shaped by diverse talent, languages, and lived realities across the world.

And here, India is not waiting for permission. India is not waiting for permission. India is showing that it can be done differently. Through the India AI mission and through the compute capacity plan, mobilizing more than 38,000 GPUs as public infrastructure, India is building one of the most ambitious public interest compute ecosystems anywhere in the world. This is not incremental reform. This is infrastructure at scale. This is sovereign capability combined with openness. India is demonstrating that public interest AI infrastructure can be built in the Global South by the Global South and for the Global South. And this leadership matters because equitable access to compute is not just about hardware. It sits alongside access to data, open source models, talent, and institutional capacity.

India is proving that you can design AI ecosystems that are both globally competitive and locally grounded. At the Rockefeller Foundation, we believe this moment requires moving from diagnosis to action. Philanthropy’s role is to be catalytic, to reduce risk, unlock capital, and convene unlikely partnerships that accelerate progress. Over the next hour, our discussion will unfold in three acts. First, what exactly are we democratizing? That’s an important question. Second, how do South-South partnerships and catalytic financing accelerate progress? And third, what concrete commitments can we land this year? If India’s example shows us anything, it is this. Democratization is not theoretical. It is operational. It is scalable. And it is already underway. The question now is how we accelerate it together.

Before we begin, let me take a moment to acknowledge a few leaders in the room. Shri Abhishek Singh, who unfortunately has been pulled into another meeting, but his leadership has been amazing; his steadfast partnership and support have been something that I am extremely grateful for, and his vision in guiding this important work with clarity has been just spectacular. Dr. Saurabh Garg, we are honored by your presence; you have been in sessions since this morning, and thank you for your leadership. It’s truly a privilege to have you with us today. My colleague Andrew Sweet, who has joined us from across the world, one of the sharpest minds at the Rockefeller Foundation and truly a force for good, thank you for being with us today and supporting this conversation. And of course I want to also thank Charu, who has been working endlessly and very hard to get us to this place. Thank you, Charu, for your leadership.

Martin, Vilas, Shaun and Dr. Shikoh, thank you for lending your voice and expertise to today’s discussion. Your perspectives will help ground this dialogue in both ambition and action, and I know all of you are action-oriented folks, so we’re going to have something really cool come out from here. And last but certainly not least, our partners at Kalpa Impact, Sushant, Anish, Jennifer, thank you for being extraordinary collaborators and for helping shape today’s session. It is now my pleasure to hand it over to Dr. Garg. Please, over to you, sir, or maybe I’ll hand it over. Okay.

Sushant Kumar

Thank you. Thank you, Deepali. When I mentioned the report, I fumbled the name, so I’ll go again. Opening up computational resources for new AI futures, new AI world is possible. And this is something that the team has worked really hard on over the last few months. And today is an opportunity when we release a working version of that report and invite inputs, feedback, comments, and suggestions, which we will work through over the next few months. This research helped us think through and work with the Democratizing AI Resources Working Group under the leadership of Dr. Saurabh Garg. And he’s here. So it’s a pleasure and a privilege for us to invite him and the other panelists to release this report.

Thank you. Opening up computational resources, or in fact all resources that are necessary for development of AI in public interest and for real-world impact. I could think of no better person than Dr. Saurabh Garg, under whose leadership I think we have come a long way, not just in the intellectual thinking but, as he will tell you, in operationalizing how we can bring this to life for billions in the Global South and also the other countries in the world. Dr. Saurabh Garg, please, for your keynote.

Dr. Saurabh Garg

Thank you, and colleagues, panelists, great to be here, and great to see the large attendance that we have seen over the past few days in the AI Summit. There were seven working groups set up under the AI Summit umbrella, and one of them was on democratizing AI resources. I had the privilege to chair that group along with Kenya and Egypt. So I’ll obviously talk a bit on that. But before that, just to say that I think all of us are of the opinion that AI will definitely transform the world. I think the question is whether that transformation would be equitable, would be inclusive and aligned with public interest. And I think that’s really the issue which concerns a lot of people.

The AI Summit itself was built around three guiding sutras: people, planet and progress. The concept being that AI ultimately must serve human welfare, advance sustainable development and enable shared prosperity. I think these were key background in the way these sutras were developed. And obviously, democratizing many of these resources would be key to that. During our working group discussions, we had the opportunity to talk to a large number of countries, people from academia, civil society, and other international organizations. And I think one consistent message was that most countries are not really seeking only access to AI, but also seeking agency in AI. And I think that’s key. And how the AI systems need to reflect each country’s own development priorities, languages, and social contexts.

From these discussions, there were six foundational pillars that we had to address, which we thought need to form the backbone of the collective roadmap for the future: compute, capability, collaboration, connectivity, compliance and context. And I’ll just briefly speak on each one of these a bit. Compute, no doubt, is today’s defining barrier. Access to GPUs, accelerators and high-performance clusters is a major issue for all AI ecosystems, but the issue is how it can be made distributable, affordable and reliable, and not concentrated in a few geographies. And this would no doubt require us to look at whether compute can become a shared infrastructure in future, of a kind which supports public interest innovation, and to the extent that we are focusing on innovation, how that part can be a public interest infrastructure. Secondly, infrastructure would not be sufficient; there is a widening skills gap.

So how can we consider capability diffusion, focusing on joint research, shared standards, open platforms and mutual learning? What needs to be done for responsible deployment, so that we can link innovators to compute resources and citizens to trustworthy AI-enabled services? Equally important would be governance. The governance framework needs to be robust enough to build trust, yet flexible enough to adapt to diverse social and cultural contexts. Open source and maybe modular AI stacks would help in enabling localization without creating dependency. So, looking at some of these issues, we asked what mechanisms can facilitate accessible and affordable computing resources by improving utilization rates and reducing transaction costs, and also lower barriers to access regardless of geography.

The working group looked at how this can be taken forward through a collaborative platform designed to expand shared access to compute and data in partnerships. The platform has been termed Maitri, which is friendship in Hindi. Maitri, M-A-I-T-R-I, standing for Multi-Stakeholder AI for Trusted and Resilient Infrastructure, to be developed as a digital public good that countries can adopt, customize, and build upon. And obviously, it is a non-binding, voluntary, modular approach: depending on the context of each country, what kind of compute and what kind of methods can be used to make it accessible, at least for innovators and researchers; looking at data sets that can be put out, which take care of the national laws and national protocols in place; and looking at models.

So, models which are open source and which can be placed there. This, we envisage, would help to at least ensure that portions of AI are a global public good, because we are focusing on innovation and research out here. And this would go beyond just a focus on hardware and platforms, to skills, institutions and governance capacity. I would just like to mention one other area. The technology is a very important part, but how it might proceed in future is also the fact that while infrastructure or compute seems to be the biggest constraint as of now, that’s perhaps also based on the present models requiring large amounts of compute capacity and energy.

Going forward, would models retain this system of algorithms that they have, or would there obviously be small, domain-specific, niche models? I think yesterday there was a very nice remark made by Vishal Sikka, who mentioned that when we talk of compute infrastructure, we are talking in terms of gigawatts, nothing less than that. But when you talk of a human being, you talk in terms of only 2,000 calories a day required to sustain a human being, which is not more than a 100-watt bulb. So are we missing something out here? I think that’s a very important point that he made yesterday, and that’s why I think we need to have much more focus on the models, and that itself might solve a lot of the areas. When we’re talking of democratizing AI, perhaps that’s the path forward.

So I’ll stop here and thank you all. Thank you for this opportunity.

Sushant Kumar

We now transition to the panel discussion, and may I request Andrew Sweet, VP at the Rockefeller Foundation, who is the moderator for the panel, to please join us here on the stage. May I request the other panelists, Dr. Shikoh Gitau, Martin Tisné and Vilas, to join us on stage. Yes, and Shaun, sorry. Andrew, over to you.

Andrew Sweet

Thank you, Dr. Garg, for those inspiring remarks and for the framing, insight and perspective that you bring to this conversation and to all of the many conversations that you’ve had throughout the course of the week. So we’re excited to continue and deepen the conversation today, and very excited that we have five of the world’s brightest minds to discuss this topic. These are all people that have been in the AI arena for decades (this is not new to them) and all people that have deep regional expertise and global perspectives, so I’m very excited for this conversation today. We don’t have a lot of time, about 25 or 30 minutes for the conversation, so we’re going to dig in. We’re not going to have a number of speeches; Dr. Garg’s speech will be the only speech that you’ve heard today, but we’ll have a short series of provocations with actionable ideas for how we can move this agenda forward.

And so hopefully this conversation can be, you know, informal, back-and-forth banter. I think we’ll have one round of questions, but it would be great if we could kind of feed off of each other’s questions and energy, because I know we all have a lot to say here on the panel and a lot of expertise to share. So I’ll briefly introduce the panelists, then we’ll dig in. You’ve already met Dr. Garg. He’s the Secretary of the Ministry of Statistics and Program Implementation for the Government of India.

He has been instrumental in shaping India’s AI governance and previously led the technology stack for the transformative Aadhaar initiative. We have Martin Tisné, founder of Current AI and public interest envoy for France’s AI Action Summit. Martin has spent 15 years building multi-stakeholder initiatives, like the Open Government Partnership that we talked about earlier today, to govern technology based on democratic values. We have Vilas Dhar, president of the Patrick J. McGovern Foundation. Vilas serves on the UN Secretary General’s High-Level Advisory Board on AI and leads one of the world’s largest philanthropic commitments to AI for public purpose. My friend Dr. Shikoh Gitau, founder and CEO of Qhala, is a visionary from Kenya. She established Safaricom Alpha and has been a leading voice in ensuring that digital transformation in Africa solves real problems in education, healthcare and agriculture. And finally, Shaun Seow, CEO of Philanthropy Asia Alliance.

Shaun is working to catalyze collaborative philanthropy across Asia, leveraging deep expertise from his time at Temasek and as CEO of Mediacorp. So we’ll continue the conversation. The first question will go to Dr. Garg. India has launched the India AI mission with a target of 38,000 GPUs. If we view compute as a public utility, much as we do with water and electricity, what is the governance model that India is envisioning, and should compute access be rationed or priced differently for public interest applications?

Dr. Saurabh Garg

So I would say that the focus is not on rationing but on intelligent prioritization. I think that’s going to be the focus: that the compute capacity is an enabling platform and, as I mentioned, a digital public good, at least where innovation and research are going. So that we focus, and I think that’s where a lot of the philanthropic organizations would have a large role to play, given that their focus is also on ensuring that AI benefits all. So with that focus in view, how governments, philanthropic organizations, and the private sector can collaborate to ensure that affordable compute capacities are accessible to all. I think those are the models that we are looking at, and that will ensure experimentation going forward.

Andrew Sweet

Thanks, Dr. Garg. Martin, I’ll go over to you. Through Current AI and the Paris Charter, you’ve convened governments to discuss public interest AI. How do we move nations from being consumers to genuine co-creators? And quickly, you’ve also spoken about this looming data bottleneck. What do we do to unlock data sets for training without compromising privacy?

Martin Tisné

Okay, two big questions. Thank you. So, as you mentioned, we launched Current AI last year. We’ll be launching just this afternoon our first product, which is an open hardware product looking at linguistic diversity. I think I’ll be a little bit provocative to maybe start our session. I think compute is critical for obvious reasons. I think that from a financial, from an innovation, and from a sovereignty perspective, it is also possible to overplay it. I’ll tell you what my worry is, and I’d love to know what the panel thinks. I do have a worry that we could end up in a few years’ time in a world where we succeed in having compute capacity in inverted commas, in a number of countries, including in the global south, but where effectively the data centers are not used.

We’ve been talking to colleagues around the world. You do also have data centers that are effectively kind of white elephants and that are not used anywhere close to full capacity. And so I think for countries to be able to exercise sovereignty, they need to have contextual AI. They need to have contextual data in their languages with all of the diversity and the incredible richness that typifies their cultures available in order to create contextual localized AI that actually serves outcomes that people care about. And so while the compute piece is important, I think it’s one part of the issue. We need to talk about the data piece and we need to talk about the second part is the open source one.

So briefly, I think throughout the event, people talk about open source AI, that it’s a really good thing, that we’re all pro it. I think we also need to talk about how, from a philanthropic perspective, we resource the open source ecosystem. The reality of open source software is that the top tier of open software is mostly funded by large companies that are using it, right? Linux is partly funded, effectively, by volunteers working for SpaceX that are using it. There’s a bottom tier of dependencies in open source that are run on a shoestring, you know, by a few critical, amazing people working overnight as volunteers. And there are very few organizations, one of them being ROOST, which is a part of Current AI and looks at robust open source trust and safety, that are funding those critical dependencies.

So I think that for states across the world, in the global south and the north, to really be able to exercise sovereignty, and I’d love to talk about this a bit more, but I don’t want to hog the mic, we need to talk about compute, but also we need to be realistic about what the compute is going to be used for. So I think the data piece and the open source piece are really important. I think I’ve probably run out of time to talk about the data bottleneck.

Andrew Sweet

Go for it.

Martin Tisné

Well, so the number… There are people in the room I’ve worked with for a long time on this issue. Vilas, you’re one of them. Sushant, you’re another. I won’t name-check everyone. I think it’s fantastic that there’s been so much innovation in compute and we’ve seen such change over the past 10 years. In contrast, I think it’s a complete tragedy that we haven’t seen anywhere near as much innovation when it comes to data, and specifically the ability for people to be able to share personal data in ways that both respect privacy and contribute to outcomes. And that’s effectively it. I think we need a huge amount more resources and thought when it comes to the technical side of the issue, and here I think it’s partly solved, but enterprise users of AI have access to these kinds of technical safeguards in a way that private users don’t.

And there’s a story that we can talk about if we have time. And then on the governance side, so for example, Vilas, you and I have talked for a long time about the different forms of data stewardship that now exist, whether data trusts or others. To this day, I haven’t seen one that really scales to the level that we would want to see it scale to, and I think we need a lot more resources, a lot more thinking. There’s been work done, but if we could harness even 20 % of the sort of brain capacity of the world that’s going into compute right now, I think we would be in a very different place. Thank you.

Andrew Sweet

Excellent, thanks Martin. Actually, Vilas, I’ll go next to you, because this reminds me a little bit of a recent article you wrote about the Indian Premier League as a model for how India builds world-class institutions. I re-read it this week in preparation for this conversation. Is there a similar IPL playbook for public interest compute, or is the window for building these public institutions closing as commercial consolidation accelerates?

Vilas Dhar

Well I can’t think of a more controversial topic to spend our time here in this conversation than cricket. It’s been a good week all around but I think many of the people in this room probably know. Before I start I just want to say Dr. Garg I want to acknowledge in particular your leadership on this work. I spend a lot of time with senior decision makers across governments and the conversations that we have had have really given me great hope for the combination of technological confidence but also an understanding of what this means across an ecosystem. And so I want to acknowledge your leadership in particular. Thank you. Look, this question around the IPL I think is great, right?

I mean, let’s not torture the analogy and take something really fun and then try to, like, tie it to AI. But here’s what I’ll say about it. I think in many ways what we need is a new institutional framework that goes from the elites who are participating in their own places to something that feels deeply participatory. And I think around compute infrastructure in particular, we are stuck in a model where we keep reengaging and renovating old concepts and trying to describe a new world. I will tell you, sovereignty has been the buzzy word of the moment, right? Everybody wants to talk about sovereignty and diffusion. Sovereignty as a Westphalian concept that goes back a few centuries tries to take the idea that ownership of pieces of silicon somehow magically results in outcomes and impact that transform lives.

Now, there are logical links, and of course there are co-dependencies, but to simply say that we will site compute in a particular geography and so figure out a way to disconnect ourselves from the interdependence of the 21st century doesn’t really bring us to a good outcome. I’ll tell you the second part of this: AI diffusion. If you haven’t heard this already from every tech CEO here, this has been the buzzword of the moment. I spent some time yesterday with the prime minister and a number of tech CEOs who wanted to talk about their investments in India. Those investments in many ways followed the playbook of the PR press release. They were, we’re going to build a new data center, we’re going to invest in new compute capacity.

But when you dig deep and you ask the next question, who will this really benefit? What value does this create for public impact and outcome? How does putting a large number of servers in a particular place result in that community finding an economic uplift, a benefit in economic opportunity, a sense of dignity? The conversation sometimes falls flat. So AI diffusion to me in its core concept, the idea that you hyper-concentrate technological capacity, compute, data, and somehow the rest of society benefits, sounds a little too much like something that as an American I know too intimately as trickle-down economics. The idea that if we made the rich as rich as possible, somehow the benefits would filter down to everybody else and it would work.

AI diffusion is a passive concept. It starts on the premise that we build technological capacity for a few and somehow it works out for everybody else. But there’s an alternate model and it ties directly to this report that’s been issued today and the work that we’ve been talking about. For AI to benefit everyone requires a direct and active impact. It requires us to step in and say what are the institutions we have to build that actually physically and metaphorically transform the idea of compute infrastructure to be something that everybody can use. It requires us to build the institutional layers and the capacity that lets a community that’s trying to solve a local problem know that compute isn’t the thing that holds them back.

Rather, it’s the conceptualization of the problem, the aggregation of the full stack of resource sets, as Martin described, that include compute, that include data and governance mechanisms, that include the political agency of communities to participate, and then turning that into the final app solution, the infrastructural development that actually leads to the outcome we’re solving for. In many ways I think this is the great role of the institutions that are represented here on the stage and in this room: for philanthropies to transform the capital landscape in a way that says great entrepreneurs and leaders, like my dear friend Shikoh here and so many here in India that are building open source, public access AI stacks, don’t have to worry about the resource constraints of the private capital markets; that they know they can access governmental and substantive structural resources that let them build the tools that they want to; and that they have equitable access to the markets as well, as a matter both of policy and of product, so that they can go out and get to consumers and creators and provide a service that people use at scale.

And the last part of this, and I have to say this, is this doesn’t happen, as we’ve discussed, in the private market, but it also doesn’t happen exclusively by going to frontline nonprofits and saying, now you’re supposed to be the builders. It requires us to innovate a new institutional set of intermediaries. I think of groups like Kalpa Impact, which I think is an incredible example of a combination of technical sophistication, policy impact, and support for government, that actually sits at the layer that connects these different elements and lets us build on top of it. I think this is the work that’s ahead. If we really think about pragmatic outcomes to this conversation, Andrew, I think one of the questions we might ask is, what are the institutions we need to build in the next 12 months that connect the dots around all of these different pieces and support this transformation at scale?

Andrew Sweet

Dr. Shikoh, I came across a recent article that you put out there saying that for the West, AI is a matter of efficiency, but for you, it’s a matter of life or death. You’ve been a champion for AI access. You were very active in this summit. You were very active in the Kigali Summit. We were together at the launch of the first-ever AI factory for Africa in April in Kigali. You’ve also talked about global tech companies: if they want African data, they should provide compute infrastructure in return. How do we formalize these reciprocal agreements, and what does a true India-Africa partnership look like that doesn’t just replicate global North-South models, similar to what Vilas was just talking about?

Dr. Shikha Gitao

Thank you very much for having me. It's always fun to listen to everyone here. I was hoping somebody was going to preempt some of the work that I was going to talk about, but lucky for me, I still have some stuff to talk about. Thank you, Vilas. So when we talk about compute, it's this amorphous thing. In fact, we launched an AI research lab in Nairobi, and we have some GPUs there. And one of the key things was a demo showing what a GPU is. And our PS was like, oh my God, this is what a GPU is, because he had never seen a GPU. And ever since, every time I'm speaking, I ask: how many of you have actually seen a GPU, not on the Internet, touched one?

Maybe five people. And this is everywhere. In every single room where we're talking about compute, we ask the same question: have you ever seen a GPU? And right now it's five to ten people. So it's this thing that people talk about: we need GPUs, we need compute, we need all of these things. And for us, as an African continent, it is very important. Our research colloquium for the Global South came a few days ago. Same question: how many of you have seen a GPU? Only about 10 people had ever touched one. How many of you need compute? Everybody raises their hand. But what does that actually mean? In fact, on one of the panels, the starting point was: when it comes to compute, we all need Jesus.

And I thought, how do we quantify this? So we, and I think we have already spoken to Calpa about this, we're working, I think at the same time, on a framework. We just released a Compute Demand Index, because we realized that every time we speak about compute, people have ideas, they have thoughts, they have proposals, but they don't have the numbers. We need GPUs: how many? We need megawatts. And the gigawatt, megawatt conversation: what does a gigawatt of compute actually mean? So we went ahead and said, for Africa, every time we're having conversations with these governments, this is actually what you need, and you actually need to put money into it.

So our first index was the demand, and the second one is, is your country ready for this, which we are calling the AI Investment Readiness Index. So I'll give you some numbers. Africa needs 2.5 million GPU hours a year, 7.5 million over the next three years, to be able to start computing well. This is for training as well as research. That is something I can work with. So when I come to India and say I need 2.5 million GPU hours a year: how many of them can you give me? And we had this conversation with the UNDP in Italy, and they said, oh, we have 1.5 million GPU hours that we can donate. We have 1.1 million more to go.

Cassava is saying, we are putting in 2,000 GPUs. How many GPU hours, not physical GPUs, will those 2,000 GPUs actually provide for the continent? We have to start being very practical rather than arbitrary about what we want. Of the 7.5 million GPU hours we need in the next three years, Africa only has 5% of that. So we are doing the math. We only have 125,000 of these GPU hours a year, which is times three of that for the next three years. So when I go to Vilas, I'm saying I need these GPU hours. It's very practical: he can say, I can do half a million GPU hours.
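The figures quoted here are internally consistent, and the GPU-to-hours conversion the speaker is asking for is simple arithmetic; the sketch below checks them, with the 60% utilization rate an illustrative assumption rather than a number from the talk.

```python
# Sanity check of the compute-demand figures quoted above.

ANNUAL_NEED_HOURS = 2_500_000        # GPU hours per year (stated)
THREE_YEAR_NEED = ANNUAL_NEED_HOURS * 3
assert THREE_YEAR_NEED == 7_500_000  # "7.5 million for the next three years"

# "Africa only has 5% of that" over three years works out to the
# stated 125,000 GPU hours per year.
available_per_year = 0.05 * THREE_YEAR_NEED / 3
assert available_per_year == 125_000

# How many hours could Cassava's 2,000 GPUs provide? That depends on
# utilization -- the 60% here is an assumption for illustration.
HOURS_PER_YEAR = 24 * 365            # 8,760 hours in a year
utilization = 0.60
cassava_hours = 2_000 * HOURS_PER_YEAR * utilization
print(f"~{cassava_hours / 1e6:.1f}M GPU hours/year from 2,000 GPUs at 60% utilization")
```

Even at this modest assumed utilization, 2,000 GPUs would yield on the order of ten million GPU hours a year, well above the stated continental demand, which is exactly why converting GPU counts into hours matters when matching supply to quantified need.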

So it's not just going with an arbitrary number: I need GPU hours to be able to do this. And for us, that is important. But for me, it is the conversation about investment, and that's the question we asked. How do we have this South-South collaboration? How do we have this India collaboration? What does it actually look like? There's the paradox. Everybody, as Martin said, wants sovereignty. Everyone wants to talk about diffusion of AI. But what does that actually mean? Do we actually need it? So I'll give you two examples from my two favorite countries; hopefully nobody from them is here. Start with Nigeria. There's something we're calling the Nigerian paradox.

Nigeria is the number one country in the Compute Demand Index. Why? Nigeria is doing very well: 110 million Internet users, a huge population, and they're doing very well in e-commerce and financial services. So they're up there when it comes to why they need compute. And we've seen this in India; India is very high there as well, doing the exact same thing. But what about investment readiness? Investment readiness is: are they able or capable of running a compute facility? Do they have power? Do they have the talent? And I love what Martin and the minister spoke about. When you think about compute, you don't think about just GPUs.

It's a whole stack of things: it's talent, it's governance, it's all of these things. When you think about investment readiness when it comes to compute, you have to look at all of them. Because I can give you GPUs, as he said. I've worked in digital transformation for the last 20 years, and I worked for the AfDB as a digital transformation lead; we would buy computers, come back three years later, and they had never been powered on at all. And that's what is going to happen with GPUs, because you're going to give countries these GPUs, and if they don't have the talent, they don't have the power to run them, they don't have the data sets, they don't have models, they don't have use cases to build on top of it, you're wasting that money.

And that's where the investment readiness comes in. So we're talking to countries, and we've had this conversation with African countries: there's no point in investing all your dollars in putting up a compute facility. Get your talent ready. Get your data sets ready. Have strong use cases that people can back. Then, with all of that, we can define what the demand is, what you need. Kenya, you do not need a gigawatt of compute to be able to run; maybe you need a 200-megawatt facility, and that's what you want. So coming back to the question, how do we interact with India? This is our demand. Burundi might need 50 megawatts of compute.

Can India facilitate that? But it's not just about facilitating the GPUs; it's what the GPUs are in service of: solving for health, education, agriculture. And when you have clear use cases, then the GPU demand becomes an obvious ask. Bridging that gap, and convincing governments especially to bridge it, is what we need to be able to do. And then the governance framework actually comes into play. Thank you. I know that's a lot.
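The gigawatt-versus-200-megawatt point can be made concrete with a rough power-budget sketch; the per-accelerator draw and overhead factor below are illustrative assumptions, not figures from the panel.

```python
# Back-of-envelope: how many accelerators can a given power budget support?
# Both constants are assumptions for illustration only.
GPU_POWER_KW = 1.0   # assumed per-accelerator draw incl. server share, in kW
PUE = 1.4            # assumed data-centre overhead (cooling, power delivery)

def gpus_supported(facility_mw: float) -> int:
    """Accelerators a facility of the given size could power continuously."""
    usable_kw = facility_mw * 1_000 / PUE
    return int(usable_kw / GPU_POWER_KW)

for mw in (50, 200, 1_000):
    print(f"{mw:>5} MW -> roughly {gpus_supported(mw):,} GPUs")
```

Under these assumptions even a 50 MW facility powers tens of thousands of accelerators, far beyond the quantified annual demand discussed above, which supports the argument that demand, not an arbitrary gigawatt target, should drive facility sizing.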

Andrew Sweet

That's great. Thank you. Thank you, Dr. Shikha. We'll go to Shaun, and then I want to keep it kind of informal for the remaining ten minutes after Shaun speaks: any reactions to any of the comments, and then we can do a lightning round if we have time; if we don't, that's fine as well. Shaun, over to you. The Philanthropy Asia Alliance brings together 80 members and partners to address Asia's interconnected challenges through collaborative philanthropy. Is there an opportunity for Asia's philanthropic networks to coordinate shared compute and infrastructure, pooling resources from places like India, Indonesia, and other nations, rather than competing, and what would unlock that collaboration?

Shaun Seow

Thanks, Andrew. The advantage of coming last is that I could say I agree with all of them. Actually, I'm going to add to the much-maligned word called compute. Maybe we could end the panel right away: I'm going to join Martin in agreeing that compute is actually a bit overrated. The ownership… of compute… So when you think about the stack, I'm going to add another way to frame the conversation: Jensen Huang's AI stack. When you think about energy, hardware, compute, models, and applications, the top layer, applications, is really what will drive value capture, for the economics as well as for social impact. Really, the stumbling block is probably energy, at the bottom layer.

And thankfully for many countries in Asia, the costs have been driven down because of the abundance of hydro, solar, and wind. Then when you think about the next layer, hardware, that's obviously dominated by Chinese and American players. And when you think about the compute level, I understand why we fuss over compute, because the Americans own 75% of GPU cluster performance, the Chinese 15%, the Europeans maybe about 4%, and the rest of us only about 0.1%; I think even India is just 1% of that. But I think the issue is actually deeper than just ownership. If you think about what it takes to get the work done, it's more about access. So on the question you've posed me about sharing compute, for example, between Indonesia and India: I live in Southeast Asia, so Indonesia is a couple of hours away from where I live.

And we know the situation in Indonesia quite intimately. There are data residency requirements, which is why there's a build-out of data centers. Think also of the physical limitations, the latency of sharing compute between India and Indonesia. They are 10,000 kilometers apart; when you think about a latency of 50 to 100 milliseconds, it's just not going to work for the sharing of compute between Indonesia and India. Attractive as the idea is, it doesn't work. There are just physical limitations, data sovereignty, and privacy issues that prevent that from happening. So I just want to look at the positive side of what's happening. The cost of compute is coming down, and with the emergence of new clouds, GPU-as-a-service, I think these developments are actually going to be good for the unleashing of AI, for social impact, for economic capture.
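The latency figure quoted here is close to the physical floor set by light in fibre; the sketch below checks it, with the route length taken from the talk and the two-thirds-of-c fibre factor a standard rough value assumed for illustration.

```python
# Back-of-envelope: propagation delay over long-haul fibre.
C_VACUUM_KM_S = 299_792   # speed of light in vacuum, km/s
FIBRE_FACTOR = 2 / 3      # light in fibre travels at roughly 2/3 c

def one_way_ms(route_km: float) -> float:
    """One-way propagation delay in milliseconds over a fibre route."""
    return route_km / (C_VACUUM_KM_S * FIBRE_FACTOR) * 1_000

route = 10_000            # rough India-Indonesia distance from the talk, km
print(f"one-way: {one_way_ms(route):.0f} ms, round trip: {2 * one_way_ms(route):.0f} ms")
```

A 10,000 km route gives roughly 50 ms one way and 100 ms round trip before any switching or queueing delay, matching the 50-to-100-millisecond range cited and underlining why remote compute sharing at that distance is physically constrained.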

So the way you can think about it is: how do we make it a bit more accessible for startups, for impact organizations? Maybe the way to think about it is, how do you aggregate demand so that you can actually negotiate with the new cloud providers and get cheaper pricing? How do you then think about philanthropy coming in to subsidize some of the compute costs? And I kind of agree with the observation that you really need to go beyond just infrastructure; you need to think about the ecosystem you're building. I think the skills gap in Asia is actually huge, and that could really be what's stopping us from optimizing, maximizing the power of AI in what we want to do. Is that too long?

Andrew Sweet

No, that's perfect. I'm not sure if anybody wants to react to any of that. Martin, I see you scribbling furiously; maybe first reaction to you.

Martin Tisné

No, I am scribbling. I'm scribbling because I'm thinking about your points, about the points of the panel, and about the term sovereignty. So my scribbles are to your point about the Westphalian concept of sovereignty, which is about the ability to make law within your territory. It's a very global North concept, and it's a notion of territory which has physicality. What I was scribbling was the physicality of the territory: we're very focused on the physicality, as you were saying, on the GPUs, on the bricks and mortar. We're going to be okay because we're going to be sovereign over this data center; the data center is on my territory. And what got me thinking is other concepts of sovereignty, such as, when I was spending a lot of time working on data collaboratives and data stewardship, indigenous data sovereignty, which is a different type of concept. It's a more relational concept than a territorial concept, right?

It's about a pre-existing, inherent, relational authority over that which makes up a people. When we were studying, for example, indigenous data sovereignty in the Maori context in New Zealand: for the Maori community, any data that in any way involves Maori … legacy is part of the Maori community. So I think there's something here in thinking about, on the one hand, a quite rigid approach to sovereignty, which is about control, as mentioned, and on the other, one which is more about agency and is more relational. That's what the panel has got me thinking. I've been doing some writing and thinking with colleagues and friends around the notion not of a sort of controlled national stack but of a global, open, resilient, collaborative stack. And I'll finish with this: that doesn't mean that all the data is open and anything goes and anyone can extract your personal data and you're back in a sort of Zuboff, you know, surveillance-capitalism world. It's one where it's a question of choice and agency: what you wish to exercise authority over, and how.

That’s my scribbles. Thank you.

Vilas Dhar

As you can tell, when we get on a panel with people you love and respect, the conversation just flows. So I want to build off this point and a little bit of what you said, Chico. I want to take a different tack to this question of agency, which is if I had asked any development leader in the world 10 years ago, if you could have your dream of an extra gigawatt of energy capacity in your country, what would you do with it? I can’t imagine that any of them would have said, well, I want to use it to run a bunch of computation on things that may or may not have short -term economic value for my country.

Andrew, your organization has been incredible around the world at building capacity: in grids, in power production, in ensuring that people can use power for development. And yet somehow, I think for many of us, we are surrounded by conversations where the question has now become, how many megawatts and gigawatts can you put into compute for AI? It is a fundamental challenge when you think about what our priorities in development are. Going back again to core principles of human rights, dignity, and participation in the world, to say that governments with limited capacity should now all of a sudden be focused on this topic. It brings us back to this question, and to this shift. In many ways, I acknowledge that the traditional conversation around compute is one of breaking over-dependence on the American AI stack, on other international players that are coming in.

But the response to over-dependence isn't internal dependence; it's interdependence. It's saying, if there are places that have incredible capacity, and even the potential to drive, as Shikha said, the availability of compute hours, how do we build interconnectedness that lets that be a mutual value exchange? Not merely, again, clients who have to go to another country and say, please give us, or let us buy, compute, but rather the products of that compute going to build the infrastructure that you can then use in your own country. To allow for centers of excellence, for local capacity and local competence to drive what gets built, and to allow that to become the new tokens of international trade, in a way that leads to a much more connected and shared prosperity, rather than descending back to that 200-year-old concept of how we make sure we're competitive in an adversarial frame.

I recognize that what I’ve just shared with you is maybe not where the dominant private sector conversation is. And to those who would oppose it, the primary critique is, well, that sounds quite naive. And yet we’ve seen it happen. We’re seeing it in the few areas of hope in the multilateral system where we’re actually finding that technology governance is something that brings everybody to the table, that lets people engage in meaningful shared outcomes. We’re seeing the seeds of it. The question is whether we’re going to let them sort of die out in the sun or if we’re actually going to water them, invest in them, in order to grow.

Andrew Sweet

Great. Dr. Garg, any final insights?

Dr. Saurabh Garg

I know there's little time, but just one thing I would say: perhaps we need to spend a bit more time going forward on the frameworks that will help ensure public interest, frameworks looking beyond compute, at models, at talent, at data, and at how these can be shared and made interoperable in a manner that takes care of the public interest. So I'll just stop there.

Andrew Sweet

Well, thank you. Thank you to the Indian government, and thanks to our partners at CalPA for putting this together, especially the authors. This is now officially out there; copies are available, and you have until March 31st to review the document and submit your reactions. Thank you to the panelists, really appreciate it. Enjoy the rest of the summit. I think the NDIA team wants to hand over some souvenirs from the panel. Thank you.

Related ResourcesKnowledge base sources related to the discussion topics (23)
Factual NotesClaims verified against the Diplo knowledge base (7)
Confirmedhigh

“AI’s promise is now being limited by a “compute divide” – unequal access to GPUs, cloud capacity and scalable infrastructure that will decide who gets to shape AI’s future.”

The knowledge base notes disparities in access to computing power and the self-reinforcing nature of the compute divide, confirming the reported limitation [S71] and [S31].

Confirmedhigh

“Philanthropy can act as a catalyst that reduces risk, unlocks capital and convenes unlikely partnerships to move from diagnosis to action.”

Building Public Interest AI Catalytic Funding for Equitable Compute Access describes philanthropy’s catalytic role in reducing risk and unlocking capital for AI initiatives [S1].

Confirmedhigh

“Dr Saurabh Garg serves as Secretary of the Ministry of Statistics and Programme Implementation in the Government of India.”

Multiple sources identify Saurabh Garg as Secretary in the Ministry of Statistics and Programme Implementation [S4] and [S3].

Confirmedhigh

“The AI Summit’s three guiding sutras are people, planet and progress.”

Both the keynote by Thomas Schneider and remarks by other speakers reference the three sutras of people, planet and progress as the summit’s guiding principles [S84] and [S86].

Confirmedhigh

“Today’s primary barrier to AI democratization is access to GPUs, accelerators and high‑performance clusters.”

The knowledge base highlights the compute divide and limited access to computing resources as a core obstacle for AI development [S71] and [S31].

Additional Contextmedium

“Democratization must move beyond “catch‑up” to expanding the pool of leaders, ensuring breakthroughs reflect diverse languages and lived realities.”

Discussion of inclusive institutions, cultural contexts, and the need for diverse participation adds nuance to the claim about moving beyond simple catch-up [S73] and [S74].

!
Correctionmedium

“Dr Garg previously led the Aadhaar technology stack.”

While the knowledge base confirms Dr Garg’s current governmental role, it does not contain any reference to his involvement with the Aadhaar technology stack, leaving that part unverified.

External Sources (86)
S1
Building Public Interest AI Catalytic Funding for Equitable Compute Access — – Shaun Seow- Dr. Shikha Gitao – Vilas Dhar- Dr. Saurabh Garg- Dr. Shikha Gitao
S2
Webinar – session 1 — Dr. Gitao’s forum delved into the multifaceted role of the internet within modern society, underscoring its key contribu…
S3
The Foundation of AI Democratizing Compute Data Infrastructure — 700 words | 130 words per minute | Duration: 321 secondss I’m Saurabh Garg. I’m secretary in the Ministry of Statistics…
S4
The Foundation of AI Democratizing Compute Data Infrastructure — -Saurabh Garg: Secretary in the Ministry of Statistics and Program Implementation in the Government of India
S5
Democratizing AI Building Trustworthy Systems for Everyone — – Dr. Saurabh Garg- Natasha Crampton – Dr. Saurabh Garg- Natasha Crampton- Justin Carsten
S7
S8
WS #2 Bridging Gaps: AI & Ethics in Combating NCII Abuse — David Wright: Thank you both. Yeah, amazing kind of explanation from the two people leading this. Thank you. Next, we’re…
S9
Building Public Interest AI Catalytic Funding for Equitable Compute Access — -Shaun Seow- CEO of Philanthropy Asia Alliance, working to catalyze collaborative philanthropy across Asia, has expertis…
S10
Inclusive AI_ Why Linguistic Diversity Matters — -Sushant Kumar- Session moderator/host
S11
Building Public Interest AI Catalytic Funding for Equitable Compute Access — – Dr. Shikha Gitao- Andrew Sweet- Sushant Kumar
S12
Building Public Interest AI Catalytic Funding for Equitable Compute Access — thank you Dr. Garg for those inspiring remarks and for the framing insight and perspective that you bring to this conver…
S13
https://app.faicon.ai/ai-impact-summit-2026/building-public-interest-ai-catalytic-funding-for-equitable-compute-access — We have Vilas Dhar , president of the Patrick J. McGovern Foundation. Vilas serves on the UN Secretary General’s High -L…
S14
A Digital Future for All (afternoon sessions) — – Vilas Dhar – President and Trustee, Patrick J. McGovern Foundation Vilas Dhar: I mean, we assume that inertia is the…
S15
Building Public Interest AI Catalytic Funding for Equitable Compute Access — – Dr. Saurabh Garg- Martin Tisné – Martin Tisné- Dr. Shikha Gitao- Vilas Dhar
S16
Building Public Interest AI Catalytic Funding for Equitable Compute Access — Speakers:Dr. Saurabh Garg, Martin Tisné Speakers:Dr. Saurabh Garg, Martin Tisné, Shaun Seow, Vilas Dhar Speakers:Marti…
S17
Building Public Interest AI Catalytic Funding for Equitable Compute Access — -Andrew Sweet- VP at the Rockefeller Foundation, served as moderator for the panel discussion
S18
Building Public Interest AI Catalytic Funding for Equitable Compute Access — Agreed with:Andrew Sweet, Sushant Kumar — Need for practical, quantified approaches rather than abstract discussions Ag…
S19
India’s AI Future Sovereign Infrastructure and Innovation at Scale — Absolutely, Ankit, just trying to, this is something which I know two years back when we said that I’m putting 8000 GPUs…
S20
AI-Driven Enforcement_ Better Governance through Effective Compliance & Services — Respected Honorable Chairman, Distinguished Speakers, Eminent Guests, Colleagues and Participants. It is my privilege to…
S21
Sovereign AI for India – Building Indigenous Capabilities for National and Global Impact — India possesses many essential ingredients for AI success: a robust software services industry, thriving startup ecosyst…
S22
Connecting open code with policymakers to development | IGF 2023 WS #500 — Conversely, the potential negative effects of open source were also discussed. The speakers raised concerns regarding th…
S23
WS #257 Data for Impact Equitable Sustainable DPI Data Governance — Malik Payal: Thank you, Priya. And yes, as you mentioned about that T20 policy brief, which we did, it was really great …
S24
How AI Drives Innovation and Economic Growth — Summary:The speakers show broad agreement on AI’s transformative potential for development but significant disagreements…
S25
Leaders’ Plenary | Global Vision for AI Impact and Governance Morning Session Part 1 — A breakthrough with immense promise but serious risks in the wrong hands. Democracy rests not on the rule of the most le…
S26
How AI Drives Innovation and Economic Growth — The speakers show broad agreement on AI’s transformative potential for development but significant disagreements on impl…
S27
Keynote-Jeet Adani — Overall Tone:The tone was consistently aspirational, patriotic, and strategic throughout. Jeet Adani maintained a confid…
S28
India’s AI Future Sovereign Infrastructure and Innovation at Scale — Summary:Sunil emphasizes GPU compute infrastructure as the primary bottleneck, while Kalyan argues that data platforms a…
S29
The Foundation of AI Democratizing Compute Data Infrastructure — Compute intensity, model scaling and hardware outlook LeCun notes that current CMOS‑based hardware is a bottleneck and …
S30
Sovereign AI for India – Building Indigenous Capabilities for National and Global Impact — Sunil emphasizes GPU compute infrastructure as the primary bottleneck, while Kalyan argues that data platforms and vecto…
S31
WS #462 Bridging the Compute Divide a Global Alliance for AI — Fabro Steibel: So, hello everyone, welcome to the panel. If you are online, welcome. If you are here in front of us, wel…
S32
Building Public Interest AI Catalytic Funding for Equitable Compute Access — All speakers agree that focusing solely on compute infrastructure without addressing the broader ecosystem (talent, gove…
S33
Building Public Interest AI Catalytic Funding for Equitable Compute Access — Investment readiness (talent, power, governance, use cases) is as important as compute infrastructure to avoid creating …
S34
The Foundation of AI Democratizing Compute Data Infrastructure — Combination of technological and policy-based mechanisms to prevent new dependencies while enabling collaboration Inves…
S35
Building Climate-Resilient Systems with AI — Consensus level:Very high level of consensus with no significant disagreements identified. This strong alignment suggest…
S36
Building Scalable AI Through Global South Partnerships — Summary:The speakers demonstrated strong consensus on the need for government partnership, South-South collaboration, di…
S37
WS #484 Innovative Regulatory Strategies to Digital Inclusion — The disagreements are substantial enough to potentially impact policy coordination and resource allocation, particularly…
S38
A digital public infrastructure strategy for sustainable development – Exploring effective possibilities for regional cooperation (University of Western Australia) — On a positive note, the potential for South-South cooperation and shared learning experiences in the field of DPI was hi…
S39
Panel Discussion Data Sovereignty India AI Impact Summit — “So I think the takeaway is that as far as the infrastructure layer is concerned, as in sovereignty in compute is not on…
S40
Building Indias Digital and Industrial Future with AI — Deepak Maheshwari from the Centre for Social and Economic Progress provided historical context, tracing India’s digital …
S41
Inclusive AI For A Better World, Through Cross-Cultural And Multi-Generational Dialogue — Demands on policy exist without the building blocks to support its implementation Factors such as restricted access to …
S42
AI Impact Summit 2026: Global Ministerial Discussions on Inclusive AI Development — However, Excellencies, we recognize that policy alone is insufficient. For our journey to succeed, we are focusing on th…
S43
How nonprofits are using AI-based innovations to scale their impact — This comment helped establish a key takeaway for the nonprofit audience and shifted the conversation toward practical im…
S44
Indias AI Leap Policy to Practice with AIP2 — Voluntary ethical frameworks alone are insufficient without clear governance and regulatory deliverables
S45
Democratizing AI Building Trustworthy Systems for Everyone — “of course see there would be a number of challenges but i think as i mentioned that one doesn’t need to really control …
S46
HETEROGENEOUS COMPUTE FOR DEMOCRATIZING ACCESS TO AI — Summary:There is unanimous agreement that power and energy constraints represent fundamental challenges that must be add…
S47
HETEROGENEOUS COMPUTE FOR DEMOCRATIZING ACCESS TO AI — Energy management is crucial as energy resources are finite, with strong environmental implications India faces physica…
S48
Building Public Interest AI Catalytic Funding for Equitable Compute Access — Evidence:References to access to GPUs, cloud capacity, and scalable compute as the key constraints
S49
Building Public Interest AI Catalytic Funding for Equitable Compute Access — India is building one of the world’s most ambitious public interest compute ecosystems with 38,000 GPUs as public infras…
S50
Sovereign AI for India – Building Indigenous Capabilities for National and Global Impact — India possesses many essential ingredients for AI success: a robust software services industry, thriving startup ecosyst…
S51
India’s AI Future Sovereign Infrastructure and Innovation at Scale — Infrastructure and Compute Requirements for Sovereign AI: The panel extensively discussed India’s need for massive GPU i…
S52
Digital Public Infrastructure, Policy Harmonization, and Digital Cooperation — – Nasir Shinkafi- Salisu Kaka Marie Ndé Sene Ahouantchede explains that ECOWAS views public digital infrastructure as b…
S53
To share or not to share: the dilemma of open source vs. proprietary Large Language Models — Chris Albon:Yeah. So Wikipedia right now, since I’m in charge of this at Wikipedia, we have over 400 machine learning mo…
S54
Main Session 1: Global Access, Global Progress: Managing the Challenges of Global Digital Adoption — Shivnath Thukra: Thanks to you and thanks for inviting me, Meta from India on this panel. I will, in the spirit of bein…
S55
https://dig.watch/event/india-ai-impact-summit-2026/building-public-interest-ai-catalytic-funding-for-equitable-compute-access — India is proving that you can design AI ecosystems that are both globally competitive and globally competitive. And loca…
S56
Leaders’ Plenary | Global Vision for AI Impact and Governance Morning Session Part 1 — A breakthrough with immense promise but serious risks in the wrong hands. Democracy rests not on the rule of the most le…
S57
Comprehensive Summary: AI Governance and Societal Transformation – A Keynote Discussion — The tone begins confrontational and personal as Hunter-Torricke distances himself from his tech industry past, then shif…
S58
Responsible AI for Children Safe Playful and Empowering Learning — This comment provides a counterintuitive response to the pressure of rapid AI adoption, advocating for deliberate pause …
S59
AI and the future of work: Global forum highlights risks, promise, and urgent choices — At the20th Internet Governance Forum held in Lillestrøm, Norway, global leaders, industry experts, and creatives gathere…
S60
Keynote-Jeet Adani — Overall Tone:The tone was consistently aspirational, patriotic, and strategic throughout. Jeet Adani maintained a confid…
S61
Panel 4 – Resilient Subsea Infrastructure for Underserved Regions  — These key comments fundamentally shaped the discussion by progressively expanding its scope and depth. The conversation …
S62
The Dawn of Artificial General Intelligence? / DAVOS 2025 — The tone of the discussion was primarily intellectual and analytical, with panelists presenting reasoned arguments for t…
S63
WS #257 Emerging Norms for Digital Public Infrastructure — The tone of the discussion was largely analytical and academic, with panelists offering nuanced views based on their exp…
S64
WS #106 Promoting Responsible Internet Practices in Infrastructure — Both speakers address development needs, but Lawrence focuses on the practical infrastructure hardware challenges and cr…
S65
WS #43 States and Digital Sovereignty: Infrastructural Challenges — The tone of the discussion was largely analytical and informative, with speakers presenting research and case studies fr…
S66
WS #226 Strengthening Multistakeholder Participation — The discussion maintained a collaborative and constructive tone throughout, with participants openly acknowledging chall…
S67
Charting New Horizons: Gender Equality in Supply Chains – Challenges and Opportunities — A conscientious request for clarity and specificity was also apparent, underlining the need for concrete, actionable pla…
S68
WS #305 Financing Self Sustaining Community Connectivity Solutions — The tone was consistently professional, collaborative, and optimistic throughout. Speakers demonstrated deep expertise w…
S69
WS #302 Upgrading Digital Governance at the Local Level — The discussion maintained a consistently professional and collaborative tone throughout. It began with formal introducti…
S70
AI for equality: Bridging the innovation gap — The conversation maintained a consistently optimistic yet realistic tone throughout. Both speakers demonstrated enthusia…
S71
Welcome remarks | 30 May — Disparities exist in access to data, algorithms, computing power, and expertise.
S72
Business Engagement Session: Sustainable Leadership in the Digital Age – Shaping the Future of Business — As a closing message, Moret emphasizes that leaders must use their positions of privilege to ensure not just representat…
S73
The International Observatory on Information and Democracy | IGF 2023 Town Hall #128 — She emphasizes the need for different approaches based on the varying cultural, governance, regulatory capacity contexts
S74
Opening Ceremony — Nandini Chami argued that the current digital order has democratic deficits that actively harm marginalized communities …
S75
IGF Intersessional Work Session: DC — Esfandiari argued that core internet values serve as anchors during disruption, requiring active updating and applicatio…
S76
WS #103 Aligning strategies, protecting critical infrastructure — – The need to move from high-level discussions to concrete, actionable measures
S77
Knowledge Café: WSIS+20 Consultation: Strengthening Multistakeholderism — This observation grounded the discussion in practical realities and influenced subsequent conversations about the need f…
S78
Saturday Closing Ceremony: Summit of the Future Action Days — It set an inspiring and action-oriented tone for the discussion, emphasizing the need for concrete steps rather than jus…
S79
Agentic AI in Focus Opportunities Risks and Governance — Regulators should supply concrete, operational guidance rather than high‑level theoretical frameworks.
S80
Collaborative AI Network – Strengthening Skills Research and Innovation — Thank you. Good afternoon and great to be here on this session. We’re talking of diffusion, AI diffusion. I’ll just spea…
S81
AI for Democracy: Reimagining Governance in the Age of Intelligence — Chunggong acknowledges the significant positive potential of AI for social good, including improvements in healthcare de…
S82
Bridging the AI innovation gap — The tone is consistently inspirational and collaborative throughout. The speaker maintains an optimistic, forward-lookin…
S83
AI Governance Dialogue: Steering the future of AI — Martin argues that this transformative moment demands inclusive, forward-looking governance that drives innovation while…
S84
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Amb Thomas Schneider — Thomas Schneider delivered a keynote address at the AI Impact Summit in Delhi, announcing Switzerland’s role as host of …
S85
Transforming Agriculture: AI for Resilient and Inclusive Food Systems — We are committed to work together on this through knowledge sharing, co-operation and collaboration, creation and capac…
S86
Scaling Enterprise-Grade Responsible AI Across the Global South — I think it has been a fantastic week here in Delhi participating in the AI Impact Summit. And I’ll just go back to the t…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Deepali Khanna
3 arguments · 148 words per minute · 649 words · 262 seconds
Argument 1
Compute divide & India’s public‑interest GPU ecosystem
EXPLANATION
Deepali argues that the digital divide is evolving into a compute divide, where access to GPUs and cloud capacity determines who can shape AI’s future. She highlights India’s ambitious public‑interest effort to mobilise over 38,000 GPUs, creating a sovereign yet open compute ecosystem for the Global South.
EVIDENCE
She notes that AI is constrained by infrastructure and access to GPUs, describing the compute divide and its impact on future AI leadership [4-6]. She emphasizes India’s proactive stance, stating that India is not waiting for permission and is building one of the most ambitious public-interest compute ecosystems with more than 38,000 GPUs as public infrastructure [13-15][14]. She frames this as large-scale, sovereign capability combined with openness for global competitiveness [16-18].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The discussion frames the digital divide as a compute divide and cites India’s public‑interest GPU rollout of 38,000 units (S1) and plans for up to 60,000 GPUs (S19), confirming the scale of the effort; the compute‑divide framing is also highlighted in S7.
MAJOR DISCUSSION POINT
Democratizing Access to Compute Infrastructure
AGREED WITH
Dr. Saurabh Garg
DISAGREED WITH
Dr. Saurabh Garg, Martin Tisné
Argument 2
Philanthropy’s catalytic role to reduce risk, unlock capital, and convene partnerships
EXPLANATION
Deepali describes philanthropy as a catalyst that can lower risk, unlock capital, and bring together unlikely partners to accelerate equitable AI progress. She positions this role as essential for moving from diagnosis to action in democratizing AI.
EVIDENCE
She states, “Philanthropy’s role is to be catalytic, to reduce risk, unlock capital, and convene unlikely partnerships that accelerate progress” [23].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Philanthropy’s catalytic function—reducing risk, unlocking capital and convening partners—is explicitly described in S1 and reiterated in S7.
MAJOR DISCUSSION POINT
Institutional and Financing Mechanisms for Public‑Interest AI
AGREED WITH
Martin Tisné, Shaun Seow, Vilas Dhar
Argument 3
Democratization of AI is operational, scalable, and already underway
EXPLANATION
Deepali stresses that democratizing AI is not a theoretical concept but a concrete, operational effort that is already being implemented at scale, moving beyond diagnosis to action.
EVIDENCE
She states that democratization is operational, scalable, and already underway, framing it as a practical reality rather than a future ideal [30-32].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The shift from diagnosis to action, indicating that democratization is already operational and scalable, is noted in S1; S7 also emphasizes moving beyond theory to concrete implementation.
MAJOR DISCUSSION POINT
Democratizing Access to Compute Infrastructure
Dr. Saurabh Garg
6 arguments · 126 words per minute · 1172 words · 555 seconds
Argument 1
Intelligent prioritization over rationing of compute
EXPLANATION
Garg argues that the focus should be on intelligent prioritisation rather than strict rationing, treating compute as an enabling platform and a digital public good to ensure equitable access for all innovators.
EVIDENCE
He says, “the focus is not on rationing but on intelligent prioritization… that the compute capacity is an enabling platform, and as a digital public good” [109-110].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Garg’s call for intelligent prioritisation rather than strict rationing of compute resources is documented in S7 and echoed in S1.
MAJOR DISCUSSION POINT
Democratizing Access to Compute Infrastructure
Argument 2
Maitri platform as a modular digital public good for shared compute
EXPLANATION
Garg introduces the collaborative platform “Maitri” (Multi‑Stakeholder AI for Trusted and Resilient Infrastructure) as a voluntary, modular digital public good that countries can adopt, customise and build upon to expand shared access to compute, data and models.
EVIDENCE
He describes the platform, its acronym and its design as a digital public good that can be customised by countries, noting it is non-binding, voluntary and modular [76-79].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The Maitri platform is presented as a voluntary, modular digital public good for shared compute in both S7 and S1.
MAJOR DISCUSSION POINT
Democratizing Access to Compute Infrastructure
Argument 3
Governance frameworks must be robust yet flexible for diverse social contexts
EXPLANATION
Garg stresses that AI governance frameworks need to be strong enough to build trust while remaining adaptable to varied cultural and social settings, and that open‑source modular stacks can enable localisation without creating dependency.
EVIDENCE
He notes that “The governance framework needs to be robust enough to build trust, yet flexible enough to adapt to diverse social and cultural contexts” and mentions open-source modular AI stacks to support localisation [72-74].
MAJOR DISCUSSION POINT
Data, Open‑Source, and Sovereignty in AI Development
Argument 4
Future AI models may become smaller, domain‑specific, lowering compute demand
EXPLANATION
Garg suggests that AI may shift from large, compute‑intensive models to smaller, domain‑specific models, which could alleviate the current compute bottleneck and change the focus of democratisation efforts.
EVIDENCE
He references Vishal Sikka’s remark comparing compute measured in gigawatts to human energy needs and argues that focusing on smaller, domain-specific models could solve many challenges, indicating a possible shift away from massive compute requirements [84-89].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Garg’s suggestion to focus on smaller, domain‑specific models that require less power is supported by the discussion in S3 and the recommendation in S4.
MAJOR DISCUSSION POINT
Future Directions – Model Efficiency, Energy, and the Role of Compute
Argument 5
Six foundational pillars (compute, capability, collaboration, connectivity, compliance, context) underpin the roadmap for democratizing AI resources
EXPLANATION
Garg outlines a structured framework consisting of six inter‑related pillars that together address the technical, institutional, and societal dimensions needed to make AI resources widely accessible.
EVIDENCE
He describes the six pillars that emerged from working-group discussions (compute, capability, collaboration, connectivity, compliance and context) and briefly explains each as essential to the collective roadmap [68-71].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The six‑pillar framework is outlined in S1 as the core roadmap for democratising AI resources.
MAJOR DISCUSSION POINT
Democratizing Access to Compute Infrastructure
Argument 6
Improving utilization rates and reducing transaction costs can lower barriers to affordable compute
EXPLANATION
Garg argues that making existing compute assets more efficiently used and simplifying the transaction process are key mechanisms to make compute resources cheaper and more widely available.
EVIDENCE
He notes that mechanisms to facilitate accessible and affordable computing resources include improving utilization rates and reducing transaction costs, thereby lowering access barriers regardless of geography [75-76].
MAJOR DISCUSSION POINT
Democratizing Access to Compute Infrastructure
Dr. Shikha Gitao
3 arguments · 174 words per minute · 1259 words · 432 seconds
Argument 1
Compute demand and investment‑readiness indices to quantify needs
EXPLANATION
Shikha explains that her team has created a Compute Demand Index and an AI Investment Readiness Index to provide concrete, quantitative data on GPU hour requirements and a country’s capacity to deploy compute, enabling practical discussions with donors and partners.
EVIDENCE
She outlines the two indices, noting that Africa needs 2.5 million GPU hours a year (7.5 million over three years) but currently has only 5% of that capacity, roughly 125,000 GPU hours per year, and highlighting conversations with UNDP and Cassava about donating GPU hours [221-246][233-246].
MAJOR DISCUSSION POINT
Democratizing Access to Compute Infrastructure
Argument 2
Compute must be tied to concrete local use‑cases, not just hardware
EXPLANATION
She argues that providing GPUs alone is insufficient; compute must serve specific sectors—health, education, agriculture—and be matched with talent, data, and clear use‑cases to avoid wasted investments.
EVIDENCE
She states that without talent, power, data, models and use-cases, giving GPUs “wastes money,” and illustrates the need for clear local applications before GPU demand can be articulated, citing examples from health, education and agriculture [276-298].
MAJOR DISCUSSION POINT
Data, Open‑Source, and Sovereignty in AI Development
AGREED WITH
Shaun Seow
Argument 3
South‑South collaboration models, such as India‑Africa partnerships, are essential to meet compute demand in the Global South
EXPLANATION
Shikha highlights that coordinated partnerships between Global South countries can pool resources, align use‑cases, and collectively address the massive GPU‑hour shortfall identified for Africa.
EVIDENCE
She discusses the need for South-South collaboration, referencing India’s capacity and asking how Africa can obtain GPU hours, and notes ongoing dialogues with UNDP and private donors to bridge the gap [252-256].
MAJOR DISCUSSION POINT
Institutional and Financing Mechanisms for Public‑Interest AI
AGREED WITH
Vilas Dhar, Andrew Sweet, Shaun Seow
Shaun Seow
3 arguments · 159 words per minute · 576 words · 217 seconds
Argument 1
Physical latency limits hinder direct cross‑country compute sharing
EXPLANATION
Seow points out that geographic distance creates latency (50‑100 ms for 10,000 km) that makes real‑time sharing of compute between countries such as India and Indonesia impractical, despite lower energy costs and data‑residency constraints.
EVIDENCE
He explains that a distance of 10,000 km introduces 50-100 ms of latency, which “doesn’t work” for sharing compute, and adds that data sovereignty and privacy further limit such arrangements [325-328].
MAJOR DISCUSSION POINT
Democratizing Access to Compute Infrastructure
DISAGREED WITH
Andrew Sweet
Argument 2
Energy consumption is the deeper bottleneck; compute ownership less critical than access
EXPLANATION
Seow argues that the primary limitation is energy availability rather than who owns the GPUs, and that making compute more accessible and affordable for startups and impact organisations is the key to unlocking AI’s potential.
EVIDENCE
He notes “the stumbling block is probably energy at a bottom level” and highlights the skewed ownership of GPUs (75% US, 15% China, 0.1% elsewhere) while emphasizing the need for access, demand aggregation and philanthropic subsidies to lower costs [314-321][322-329].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The session highlights energy as a fundamental bottleneck, comparing AI’s gigawatt needs to human cognition’s 100‑watt scale, underscoring the point made in S7.
MAJOR DISCUSSION POINT
Future Directions – Model Efficiency, Energy, and the Role of Compute
AGREED WITH
Dr. Shikha Gitao
DISAGREED WITH
Deepali Khanna, Dr. Saurabh Garg
Argument 3
Aggregating demand and using philanthropic subsidies can lower compute costs for startups and impact organisations
EXPLANATION
Shaun proposes that collective demand aggregation enables better negotiation with cloud providers, while philanthropy can subsidise remaining expenses, thereby expanding affordable compute access for mission‑driven actors.
EVIDENCE
He suggests that aggregating demand to negotiate cheaper pricing with cloud providers and involving philanthropy to subsidise compute costs would make the technology more accessible for startups and impact organisations [332-334].
MAJOR DISCUSSION POINT
Institutional and Financing Mechanisms for Public‑Interest AI
Martin Tisné
4 arguments · 183 words per minute · 1162 words · 379 seconds
Argument 1
Data bottleneck and need for scalable data‑stewardship mechanisms
EXPLANATION
Martin stresses that without contextual, language‑specific data, compute capacity cannot produce locally relevant AI, and calls for scalable data‑stewardship mechanisms such as data trusts to enable responsible data sharing.
EVIDENCE
He highlights the need for contextual data in local languages and mentions discussions about data stewardship, data trusts and other mechanisms to manage data responsibly [128-132][151-152].
MAJOR DISCUSSION POINT
Data, Open‑Source, and Sovereignty in AI Development
AGREED WITH
Dr. Shikha Gitao, Vilas Dhar, Shaun Seow
Argument 2
Open‑source ecosystem under‑funded; philanthropic support needed for critical dependencies
EXPLANATION
Martin observes that while top‑tier open‑source projects receive funding from large corporations, the critical lower‑tier dependencies rely on volunteers and are under‑funded, requiring philanthropic investment to sustain them.
EVIDENCE
He describes the funding landscape where “the top tier of open software is funded by large companies… the bottom tier… run on a shoestring by volunteers” and notes that only a few organisations, including one from Current AI, fund critical dependencies, calling for more philanthropic support [134-138].
MAJOR DISCUSSION POINT
Data, Open‑Source, and Sovereignty in AI Development
AGREED WITH
Deepali Khanna, Shaun Seow, Vilas Dhar
Argument 3
Overemphasis on compute; focus should shift to data and open‑source ecosystems
EXPLANATION
Martin warns that the community’s focus on compute overlooks the larger challenges of data availability and under‑investment in open‑source tools, urging a shift of resources toward these areas to achieve meaningful AI democratization.
EVIDENCE
He states that while compute is critical, “we need to talk about the data piece and the open source one,” and calls the lack of innovation in data a tragedy, emphasizing the need for more resources and thought on data and open-source [122-131][147-150].
MAJOR DISCUSSION POINT
Future Directions – Model Efficiency, Energy, and the Role of Compute
AGREED WITH
Dr. Shikha Gitao, Vilas Dhar, Shaun Seow
DISAGREED WITH
Deepali Khanna, Dr. Saurabh Garg
Argument 4
Data centres risk becoming under‑utilised ‘white‑elephant’ assets if compute is not paired with contextual data and use‑cases
EXPLANATION
Martin warns that without sufficient local data and relevant applications, newly built data centres may remain largely idle, undermining the intended benefits of compute investments.
EVIDENCE
He observes that many countries have data centres that are effectively ‘white elephants’ because they are not used close to full capacity, emphasizing the need for contextual data and AI applications [126-128].
MAJOR DISCUSSION POINT
Data, Open‑Source, and Sovereignty in AI Development
Sushant Kumar
1 argument · 78 words per minute · 278 words · 212 seconds
Argument 1
Report release to catalyze collaborative work and gather feedback
EXPLANATION
Sushant announces the publication of a working version of the report on opening computational resources, inviting inputs, feedback and suggestions over the coming months to refine the collaborative effort.
EVIDENCE
He says, “today is an opportunity when we release a working version of that report and invite inputs, feedback, comments, and suggestions, which we will work through over the next few months” [44-45].
MAJOR DISCUSSION POINT
Institutional and Financing Mechanisms for Public‑Interest AI
Vilas Dhar
3 arguments · 204 words per minute · 1556 words · 456 seconds
Argument 1
Need for new institutional intermediaries linking government, philanthropy, and innovators
EXPLANATION
Vilas calls for the creation of new institutional frameworks and intermediaries—such as Culpa Impact—that can connect governments, philanthropies and innovators, providing technical, policy and funding support to scale public‑interest AI initiatives.
EVIDENCE
He mentions “new institutional set of intermediaries… groups like Culpa Impact… combine technical sophistication, policy impact, support for government” as examples of the needed intermediaries [188-190] and references broader calls for participatory institutional frameworks [163-170].
MAJOR DISCUSSION POINT
Institutional and Financing Mechanisms for Public‑Interest AI
AGREED WITH
Deepali Khanna, Martin Tisné, Shaun Seow
Argument 2
Active impact model vs. trickle‑down; interdependence over sovereignty
EXPLANATION
Vilas critiques the trickle‑down view of AI diffusion, advocating instead for an active impact model where compute is deliberately directed toward local problems, fostering interdependence rather than isolated sovereign capacity.
EVIDENCE
He describes AI diffusion as “a passive concept… like trickle-down economics” and argues for an active impact model that requires institutions to connect compute to concrete outcomes, emphasizing interdependence over sovereignty [179-186][349-353].
MAJOR DISCUSSION POINT
Institutional and Financing Mechanisms for Public‑Interest AI
AGREED WITH
Dr. Shikha Gitao, Andrew Sweet, Shaun Seow
Argument 3
AI diffusion should be framed as an active impact model based on interdependence rather than a passive, Westphalian sovereignty approach
EXPLANATION
Vilas critiques the traditional notion of sovereign, isolated compute capacity and advocates for a model where countries cooperate, sharing resources and outcomes to achieve mutual development goals.
EVIDENCE
He critiques the Westphalian concept of sovereignty and argues for an interdependence model where AI diffusion is an active, impact-driven process rather than a passive trickle-down effect [165-170].
MAJOR DISCUSSION POINT
Institutional and Financing Mechanisms for Public‑Interest AI
DISAGREED WITH
Deepali Khanna
Andrew Sweet
4 arguments · 108 words per minute · 1001 words · 551 seconds
Argument 1
Nations should transition from AI consumers to genuine co‑creators
EXPLANATION
Andrew emphasizes that countries need to move beyond merely using AI technologies to actively participating in their development and governance, fostering local innovation and ownership.
EVIDENCE
During his opening of the panel, he asks Martin how to move nations from being consumers to genuine co-creators and raises the issue of unlocking data sets without compromising privacy [113-117].
MAJOR DISCUSSION POINT
Institutional and Financing Mechanisms for Public‑Interest AI
Argument 2
India’s AI mission can serve as a public‑interest compute playbook, akin to the IPL model
EXPLANATION
Andrew suggests that the Indian AI mission, with its large‑scale GPU deployment, could provide a template for building world‑class public‑interest compute institutions elsewhere, similar to how the Indian Premier League transformed sports governance.
EVIDENCE
He asks Vilas whether there is a comparable IPL playbook for public-interest compute, referencing the Indian AI mission’s 38,000 GPUs as a potential model [154-158].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
India’s ambitious GPU deployment (38,000‑60,000 GPUs) is presented as a potential public‑interest compute playbook in S1 and S19, aligning with the IPL analogy.
MAJOR DISCUSSION POINT
Institutional and Financing Mechanisms for Public‑Interest AI
AGREED WITH
Dr. Shikha Gitao, Vilas Dhar, Shaun Seow
Argument 3
Asian philanthropic networks should coordinate shared compute infrastructure rather than compete
EXPLANATION
Andrew proposes that philanthropy across Asia could pool resources and jointly manage compute capacity, creating collaborative platforms that avoid duplication and foster regional solidarity.
EVIDENCE
He frames a question to Shaun about whether Asia’s philanthropic networks can coordinate shared compute and infrastructure, pulling resources from India, Indonesia and other nations instead of competing [305-307].
MAJOR DISCUSSION POINT
Institutional and Financing Mechanisms for Public‑Interest AI
AGREED WITH
Dr. Shikha Gitao, Vilas Dhar, Shaun Seow
DISAGREED WITH
Shaun Seow
Argument 4
Aggregating demand and leveraging philanthropic subsidies can make compute affordable for startups and impact organisations
EXPLANATION
Andrew highlights that by consolidating demand across multiple actors, the sector can negotiate better pricing with cloud providers, while philanthropic funding can subsidise the remaining costs, expanding access for mission‑driven entities.
EVIDENCE
He asks how to make compute more accessible for startups and impact organisations by aggregating demand and involving philanthropy to subsidise costs [332-334].
MAJOR DISCUSSION POINT
Institutional and Financing Mechanisms for Public‑Interest AI
AGREED WITH
Deepali Khanna, Martin Tisné, Shaun Seow, Vilas Dhar
Agreements
Agreement Points
Democratizing access to compute infrastructure is essential and should focus on intelligent prioritisation rather than strict rationing
Speakers: Deepali Khanna, Dr. Saurabh Garg
Compute divide & India’s public‑interest GPU ecosystem
Intelligent prioritisation over rationing of compute
Both speakers stress that the emerging compute divide must be addressed by expanding equitable access to GPUs and other resources, with Garg emphasizing intelligent prioritisation instead of rationing, echoing Deepali’s call for operational democratisation of AI compute [4-6][13-15][109-110].
POLICY CONTEXT (KNOWLEDGE BASE)
This aligns with the equitable compute access frameworks advocated in catalytic funding initiatives, which stress intelligent prioritisation over rigid rationing to maximise impact [S32][S33][S34].
Philanthropy should play a catalytic role, reducing risk, unlocking capital and linking stakeholders to accelerate public‑interest AI
Speakers: Deepali Khanna, Martin Tisné, Shaun Seow, Vilas Dhar
Philanthropy’s catalytic role to reduce risk, unlock capital, and convene partnerships
Open‑source ecosystem under‑funded; philanthropic support needed for critical dependencies
Aggregating demand and leveraging philanthropic subsidies can make compute affordable for startups and impact organisations
Need for new institutional intermediaries linking government, philanthropy, and innovators
All four speakers highlight philanthropy as a key enabler: Deepali describes its catalytic function, Martin points to under-funded open-source layers needing philanthropic backing, Shaun proposes demand aggregation and subsidies, and Vilas calls for new intermediaries to connect governments, philanthropies and innovators [23][134-138][332-334][188-190].
POLICY CONTEXT (KNOWLEDGE BASE)
The role of philanthropic catalytic funding for public-interest AI is highlighted in recent policy discussions on equitable compute access and risk mitigation [S32].
Compute alone is insufficient; data, contextual relevance and clear use‑cases are required for meaningful AI impact
Speakers: Martin Tisné, Dr. Shikha Gitao, Vilas Dhar, Shaun Seow
Data bottleneck and need for scalable data‑stewardship mechanisms
Compute must be tied to concrete local use‑cases, not just hardware
Active impact model vs. trickle‑down; interdependence over sovereignty
Overemphasis on compute; focus should shift to data and open‑source ecosystems
The panelists converge on the view that without contextual data and defined applications, compute resources become under-utilised; Martin stresses data stewardship, Shikha links GPUs to sectoral use-cases, Vilas critiques passive AI diffusion, and Shaun warns against over-focusing on hardware alone [128-132][147-150][276-298][179-186][122-131].
POLICY CONTEXT (KNOWLEDGE BASE)
Consensus that data platforms, talent, and concrete use-cases are as critical as GPU compute appears in Indian AI strategy debates and public-interest AI panels [S28][S32][S42].
South‑South collaboration and inter‑regional partnerships are crucial to meet compute demand and build capacity in the Global South
Speakers: Dr. Shikha Gitao, Vilas Dhar, Andrew Sweet, Shaun Seow
South‑South collaboration models, such as India‑Africa partnerships, are essential to meet compute demand in the Global South
Active impact model vs. trickle‑down; interdependence over sovereignty
India’s AI mission can serve as a public‑interest compute playbook, akin to the IPL model
Asian philanthropic networks should coordinate shared compute infrastructure rather than compete
All four emphasize collaborative models across Global South nations: Shikha outlines concrete South-South demand-matching, Vilas advocates interdependence, Andrew proposes using India’s AI mission as a template, and Shaun suggests regional philanthropic coordination to pool compute resources [252-256][349-353][154-158][305-307].
POLICY CONTEXT (KNOWLEDGE BASE)
Strong consensus on South-South partnerships for AI scaling and digital public infrastructure is documented in global alliance reports and regional cooperation studies [S36][S38][S31].
Energy availability and power infrastructure are fundamental constraints that must be addressed alongside compute provision
Speakers: Shaun Seow, Dr. Shikha Gitao
Energy consumption is the deeper bottleneck; compute ownership less critical than access
Compute must be tied to concrete local use‑cases, not just hardware
Shaun highlights energy and latency as primary limits to compute sharing, while Shikha stresses that without reliable power and talent, GPU donations are wasted, underscoring the need to consider energy infrastructure in compute strategies [314-321][274-283].
POLICY CONTEXT (KNOWLEDGE BASE)
Policy analyses on heterogeneous compute emphasize that power and energy constraints are core considerations for democratizing AI access [S46][S47].
Similar Viewpoints
Both argue that democratising AI is a concrete, ongoing effort that should prioritize intelligent allocation of compute resources rather than restrictive rationing [30-32][109-110].
Speakers: Deepali Khanna, Dr. Saurabh Garg
Democratization of AI is operational, scalable, and already underway
Intelligent prioritisation over rationing of compute
Both stress that compute investments must be coupled with contextual data and clear application domains to avoid under‑utilisation [128-132][276-298].
Speakers: Martin Tisné, Dr. Shikha Gitao
Data bottleneck and need for scalable data‑stewardship mechanisms
Compute must be tied to concrete local use‑cases, not just hardware
Both caution that focusing solely on hardware overlooks deeper constraints—data availability and energy—necessary for effective AI deployment [122-131][314-321].
Speakers: Martin Tisné, Shaun Seow
Overemphasis on compute; focus should shift to data and open‑source ecosystems
Energy consumption is the deeper bottleneck; compute ownership less critical than access
Both promote an active, interdependent model of AI diffusion, using India’s mission as a template for building collaborative institutions rather than isolated sovereign capacity [179-186][154-158].
Speakers: Vilas Dhar, Andrew Sweet
Active impact model vs. trickle‑down; interdependence over sovereignty
India’s AI mission can serve as a public‑interest compute playbook, akin to the IPL model
Both see the creation of intermediary structures and demand aggregation as essential mechanisms to lower compute costs and broaden access for mission‑driven actors [332-334][188-190].
Speakers: Shaun Seow, Vilas Dhar
Aggregating demand and leveraging philanthropic subsidies can make compute affordable for startups and impact organisations
Need for new institutional intermediaries linking government, philanthropy, and innovators
Unexpected Consensus
Agreement that large‑scale compute infrastructure risks becoming ‘white‑elephant’ assets unless paired with data, talent and use‑cases
Speakers: Martin Tisné, Dr. Shikha Gitao, Vilas Dhar, Shaun Seow
Data centres risk becoming under‑utilised ‘white‑elephant’ assets if compute is not paired with contextual data and use‑cases
Compute must be tied to concrete local use‑cases, not just hardware
Active impact model vs. trickle‑down; interdependence over sovereignty
Overemphasis on compute; focus should shift to data and open‑source ecosystems
While many participants initially emphasized hardware provision, they unexpectedly converged on the risk of idle infrastructure and the necessity of coupling compute with data, talent and clear applications, a point not foregrounded in the opening remarks [126-128][276-298][179-186][122-131].
POLICY CONTEXT (KNOWLEDGE BASE)
Public-interest AI funding discussions warn that compute-only investments can become under-utilised “white-elephant” assets without supporting data, talent, and use-case ecosystems [S32][S33][S45].
Consensus that South‑South partnerships can operationalise compute demand indices and investment readiness metrics
Speakers: Dr. Shikha Gitau, Vilas Dhar, Andrew Sweet, Shaun Seow
South‑South collaboration models, such as India‑Africa partnerships, are essential to meet compute demand in the Global South
Active impact model vs. trickle‑down; interdependence over sovereignty
India’s AI mission can serve as a public‑interest compute playbook, akin to the IPL model
Asian philanthropic networks should coordinate shared compute infrastructure rather than compete
The alignment of Shikha’s quantitative demand/readiness indices with Vilas’s call for interdependence, Andrew’s IPL analogy, and Shaun’s regional coordination proposal reveals an unexpected unified vision for using metrics to drive South-South resource sharing [252-256][349-353][154-158][305-307].
POLICY CONTEXT (KNOWLEDGE BASE)
Frameworks for Global South collaboration link compute demand indices with investment readiness and talent development metrics, reinforcing this consensus [S36][S33].
Overall Assessment

The panel shows strong convergence on four core themes: (1) the need to democratise compute access through intelligent prioritisation; (2) the pivotal catalytic role of philanthropy and new intermediary institutions; (3) the necessity of pairing compute with data, talent and concrete use‑cases to avoid under‑utilised infrastructure; and (4) the importance of South‑South collaboration and coordinated metrics to scale public‑interest AI.

High consensus across speakers, cutting across geographic and sectoral lines, indicating a shared understanding that technical resources must be embedded within institutional, data‑centric and collaborative frameworks to achieve equitable AI outcomes.

Differences
Different Viewpoints
Priority of compute infrastructure versus data and open‑source ecosystems
Speakers: Deepali Khanna, Dr. Saurabh Garg, Martin Tisné
Compute divide & India’s public‑interest GPU ecosystem
Overemphasis on compute; focus should shift to data and open‑source ecosystems
Deepali stresses that the digital divide has become a compute divide and that scaling public-interest GPU infrastructure (38,000 GPUs) is essential to democratise AI [4-6][13-15]. Dr. Garg also frames compute as the defining barrier and proposes intelligent prioritisation of shared resources [68-71]. Martin counters that focusing mainly on compute risks creating “white-elephant” data centres and argues that without contextual data and robust open-source tools, compute investments will not yield local AI outcomes [122-131][147-150].
POLICY CONTEXT (KNOWLEDGE BASE)
Debate mirrors findings that data platforms and open-source tools are equally critical to compute hardware in AI development strategies [S28][S32].
Feasibility of cross‑border compute sharing versus latency and energy constraints
Speakers: Andrew Sweet, Shaun Seow
Asian philanthropic networks should coordinate shared compute infrastructure rather than compete
Physical latency limits hinder direct cross‑country compute sharing
Energy consumption is the deeper bottleneck; compute ownership less critical than access
Andrew proposes that Asian philanthropy pool resources to create shared compute capacity across countries such as India and Indonesia [305-307]. Shaun argues that geographic distance creates latency of 50-100 ms for 10,000 km, making real-time sharing impractical, and that energy availability, not ownership, is the core limitation [325-329][314-321].
POLICY CONTEXT (KNOWLEDGE BASE)
Challenges of cross-regional compute sharing due to energy limits and latency are highlighted in discussions on foundational compute resource sharing [S45][S46].
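Shaun’s 50-100 ms figure for a 10,000 km link is consistent with a back-of-envelope propagation-delay calculation. The sketch below is an illustrative check, assuming light in optical fibre travels at roughly c/1.5 (refractive index ~1.5); real routes are longer than the great-circle distance, so actual latency would be higher.

```python
# Back-of-envelope check of the ~50-100 ms latency cited for cross-country
# compute sharing over ~10,000 km of fibre.

C_VACUUM_KM_S = 300_000   # speed of light in vacuum, km/s
FIBRE_FACTOR = 1.5        # typical refractive index of optical fibre

def propagation_delay_ms(distance_km: float, round_trip: bool = True) -> float:
    """Propagation delay over fibre, in milliseconds."""
    one_way_s = distance_km * FIBRE_FACTOR / C_VACUUM_KM_S
    return one_way_s * (2 if round_trip else 1) * 1000

print(f"{propagation_delay_ms(10_000, round_trip=False):.0f} ms one-way")  # ~50 ms
print(f"{propagation_delay_ms(10_000):.0f} ms round-trip")                 # ~100 ms
```

This matches the panel’s range: ~50 ms one-way and ~100 ms round-trip, before adding routing, queuing, and processing overheads.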
Sovereign‑centric compute model versus interdependent, collaborative approach
Speakers: Deepali Khanna, Vilas Dhar
This is sovereign capability combined with openness
AI diffusion should be framed as an active impact model based on interdependence rather than a passive, Westphalian sovereignty approach
Deepali frames India’s AI mission as a sovereign capability that remains open, positioning public-interest compute as a national asset [16-18]. Vilas critiques the Westphalian notion of sovereignty, arguing that AI diffusion must be an active, interdependent process that avoids trickle-down economics and instead builds shared institutions [165-170][349-353].
POLICY CONTEXT (KNOWLEDGE BASE)
Tension between national compute sovereignty and global collaborative alliances is evident in policy dialogues on data sovereignty and AI infrastructure [S39][S31][S36].
Primary bottleneck: compute hardware versus energy supply
Speakers: Deepali Khanna, Dr. Saurabh Garg, Shaun Seow
AI today is constrained by infrastructure, by who has access to GPUs
Compute is today’s defining barrier
Energy consumption is the deeper bottleneck; compute ownership less critical than access
Both Deepali and Dr. Garg identify lack of scalable GPU compute as the main obstacle to AI democratisation [4-6][68-71]. Shaun counters that the fundamental limitation is energy availability, noting that the bulk of compute capacity is concentrated in a few regions but that access, not ownership, is the key issue, and that energy constraints may be more decisive [314-321].
POLICY CONTEXT (KNOWLEDGE BASE)
National strategies differ, with some emphasizing GPU compute as the main bottleneck while others point to energy supply constraints as equally limiting [S28][S46].
Approach to AI diffusion: passive trickle‑down versus active impact‑driven model
Speakers: Vilas Dhar, Deepali Khanna
AI diffusion should be framed as an active impact model based on interdependence rather than a passive, Westphalian sovereignty approach
Democratization is operational, scalable, and already underway
Vilas warns that treating AI diffusion as a passive trickle-down process is ineffective and advocates an active model that directly links compute to local problems and outcomes [179-186]. Deepali emphasizes that democratisation is already operational and scalable, focusing on infrastructure rollout rather than explicitly detailing impact pathways [30-32].
Unexpected Differences
Energy versus compute as the core bottleneck for AI democratisation
Speakers: Deepali Khanna, Shaun Seow
AI today is constrained by infrastructure, by who has access to GPUs
Energy consumption is the deeper bottleneck; compute ownership less critical than access
While most participants frame the compute shortage as the primary barrier, Shaun’s emphasis on energy availability and the claim that compute ownership is secondary was not anticipated given the strong focus on GPU deployment throughout the session. This shift challenges the prevailing narrative that scaling GPU infrastructure alone will resolve the divide [4-6][314-321].
POLICY CONTEXT (KNOWLEDGE BASE)
Consensus that energy constraints are fundamental and must be addressed alongside compute hardware to achieve AI democratization goals [S46][S47][S28].
Latency making direct cross‑country compute sharing impractical despite calls for regional collaboration
Speakers: Andrew Sweet, Shaun Seow
Asian philanthropic networks should coordinate shared compute infrastructure rather than compete
Physical latency limits hinder direct cross‑country compute sharing
Andrew’s proposal for Asian philanthropy to pool compute resources assumes technical feasibility of shared access, yet Shaun’s technical assessment of latency (50-100 ms over 10,000 km) reveals a practical limitation that was not foreseen in the collaborative vision. This technical constraint introduces an unexpected friction between policy-level coordination and engineering realities [305-307][325-329].
Overall Assessment

The panel broadly concurs on the urgency of democratising AI and the need for public‑interest infrastructure, but diverges on where to focus resources—whether on scaling compute hardware, strengthening data and open‑source ecosystems, addressing energy constraints, or redefining sovereignty and institutional models. These disagreements reflect differing assumptions about the primary bottlenecks and the most effective levers for equitable AI diffusion.

Moderate to high: while there is shared commitment to AI democratisation, the contrasting views on compute versus data, energy versus hardware, and sovereign versus interdependent governance create substantive strategic gaps. If unresolved, these gaps could lead to fragmented investments and limit the impact of public‑interest AI initiatives.

Partial Agreements
All speakers share the goal of democratising AI access and building public‑interest AI capacity, but propose different mechanisms: Deepali stresses large‑scale sovereign GPU deployment; Garg suggests a modular digital public good (Maitri) and intelligent prioritisation; Shikha provides quantitative demand and readiness indices to guide allocation; Andrew calls for regional philanthropic coordination and demand aggregation; Vilas advocates creating new intermediaries (e.g., Kalpa Impact) to connect stakeholders. Their convergence on the overall aim contrasts with divergent pathways [30-32][109-110][221-246][305-307][188-190].
Speakers: Deepali Khanna, Dr. Saurabh Garg, Shikha Gitau, Andrew Sweet, Vilas Dhar
Democratization of AI is operational, scalable, and already underway
Intelligent prioritisation over rationing of compute
Compute demand and investment‑readiness indices to quantify needs
Asian philanthropic networks should coordinate shared compute infrastructure rather than compete
Need for new institutional intermediaries linking government, philanthropy, and innovators
These speakers agree that simply scaling compute hardware is insufficient for equitable AI outcomes. Martin highlights the need for contextual data and funded open‑source tools; Shaun points to energy constraints and the importance of access over ownership; Vilas stresses an active, interdependent diffusion model that ties compute to concrete use‑cases. All converge on the need for broader ecosystem considerations beyond raw compute capacity [122-131][147-150][314-321][179-186].
Speakers: Martin Tisné, Shaun Seow, Vilas Dhar
Overemphasis on compute; focus should shift to data and open‑source ecosystems
Energy consumption is the deeper bottleneck; compute ownership less critical than access
AI diffusion should be framed as an active impact model based on interdependence rather than a passive, Westphalian sovereignty approach
Takeaways
Key takeaways
The AI compute divide is a critical barrier; India’s public‑interest GPU initiative (38,000 GPUs) is presented as a model for democratizing access.
Compute should be allocated through intelligent prioritization for public‑interest projects rather than strict rationing.
The proposed Maitri platform (Multi‑Stakeholder AI for Trusted and Resilient Infrastructure) is envisioned as a modular digital public good to enable shared, customizable compute resources.
Quantitative tools – a Compute Demand Index and an AI Investment‑Readiness Index – are needed to match hardware supply with the actual needs and capacity of countries.
Data availability and open‑source software are equally, if not more, limiting than hardware; scalable data‑stewardship mechanisms and sustained funding for critical open‑source components are required.
Governance frameworks must be robust yet adaptable to diverse cultural, linguistic, and regulatory contexts, linking compute to concrete local use‑cases.
Philanthropy can act as a catalyst by de‑risking investments, unlocking capital, and convening cross‑sector partnerships; the released report is intended to spur such collaboration.
New institutional intermediaries (e.g., Kalpa Impact) are essential to bridge governments, philanthropies, and innovators and to translate compute into public‑interest outcomes.
Future AI models may become smaller and domain‑specific, potentially reducing compute demand; energy consumption and latency are deeper systemic constraints than ownership of GPUs.
Interdependence (mutual value exchange) is advocated over a narrow sovereignty model, emphasizing collaborative, shared prosperity.
Resolutions and action items
Release of the “Opening up computational resources for new AI futures” report with a feedback deadline of 31 March 2024.
Development of the Maitri platform as a non‑binding, modular digital public good for shared compute, data, and governance resources.
Creation and dissemination of the Compute Demand Index and AI Investment‑Readiness Index to inform resource allocation across Global South countries.
Philanthropic entities (e.g., Rockefeller Foundation, Kalpa Impact) to explore catalytic financing for open‑source critical dependencies and for building institutional intermediaries.
Aggregation of compute demand among Asian philanthropies to negotiate better pricing with cloud providers and to subsidize access for startups and impact organizations.
Commitment to convene further South‑South partnership dialogues (e.g., India‑Africa) to define reciprocal compute‑for‑data arrangements.
Unresolved issues
Specific criteria and processes for the “intelligent prioritization” of compute access remain undefined.
Concrete mechanisms for scalable data‑stewardship (data trusts, privacy‑preserving sharing) have not been agreed upon.
How to operationalize South‑South reciprocal agreements on compute and data, especially given latency, sovereignty, and regulatory constraints.
Sustainable funding models for the broader open‑source AI ecosystem beyond large corporate contributions.
Ensuring that newly built compute infrastructure does not become under‑utilized “white‑elephant” facilities.
Balancing the push for local compute sovereignty with the need for interdependence and shared global resources.
Suggested compromises
Adopt intelligent prioritization rather than outright rationing of compute resources (Dr. Garg).
Blend sovereignty with interdependence: allow countries to retain control over local use‑cases while sharing surplus capacity through collaborative frameworks (Vilas Dhar).
Combine hardware provision with parallel investment in talent, data readiness, and governance to avoid wasted GPU deployments (Dr. Shikha Gitau).
Allocate philanthropic funding to both critical open‑source dependencies and to building institutional intermediaries that connect supply and demand (Martin Tisné).
Aggregate demand across multiple nations/organizations to achieve economies of scale in cloud pricing, mitigating the high cost of individual compute access (Shaun Seow).
Thought Provoking Comments
The digital divide is rapidly becoming a compute divide… India is mobilising more than 38,000 GPUs as public infrastructure, building a sovereign, open, public‑interest AI compute ecosystem for the Global South.
Frames the whole session around compute as the new axis of inequality and positions India’s public‑sector effort as a concrete, scalable model, moving the conversation from abstract ‘democratisation’ to tangible infrastructure.
Set the agenda for the panel, prompting speakers to discuss governance, access models and the role of public‑interest compute. It shifted the tone from a generic discussion of AI to a focus on concrete hardware resources and national policy.
Speaker: Deepali Khanna
Most countries are not just seeking access to AI, but also seeking agency… we identified six foundational pillars – compute, capability, collaboration, connectivity, compliance and context – that must underpin a collective roadmap.
Introduces a structured framework that moves the debate from a single‑issue (compute) to a multi‑dimensional roadmap, highlighting that access alone is insufficient without skills, governance and contextual relevance.
Provided a reference point for later speakers (e.g., Martin, Vilas, Shikha) to align their points on data, open‑source, and institutional capacity, deepening the analysis and steering the conversation toward systemic solutions.
Speaker: Dr. Saurabh Garg
While compute is the biggest constraint today, future models might become much smaller and niche. Vishal Sikka noted that a human needs only ~2,000 calories a day – roughly the power of a 100‑watt bulb – suggesting we may be missing a more efficient path.
Challenges the prevailing assumption that massive GPU farms are the only way forward, opening space to consider model efficiency and energy‑wise alternatives.
Prompted other panelists (especially Martin and Shaun) to question the primacy of raw compute and to explore data, open‑source, and energy considerations, shifting the discussion toward model optimisation rather than just hardware scaling.
Speaker: Dr. Saurabh Garg
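The calories-to-watts comparison quoted above checks out arithmetically. The sketch below is an illustrative sanity check, assuming dietary “calories” are kilocalories (1 kcal = 4184 J) spread evenly over a day.

```python
# Sanity check of the "~2,000 calories a day ≈ a 100-watt bulb" comparison.
# Dietary calories are kilocalories: 1 kcal = 4184 J.

KCAL_TO_J = 4184
SECONDS_PER_DAY = 24 * 60 * 60          # 86,400 s

daily_intake_j = 2_000 * KCAL_TO_J      # ~8.37 MJ per day
average_power_w = daily_intake_j / SECONDS_PER_DAY

print(f"{average_power_w:.0f} W")       # ~97 W, close to a 100 W bulb
```

The human brain and body together run on roughly 97 W of average power, which is the basis of the efficiency argument: biological intelligence operates orders of magnitude below GPU-farm energy budgets.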
We risk ending up with ‘white‑elephant’ data centres that sit idle. Sovereignty and diffusion require contextual data, open‑source ecosystems, and funding for the critical low‑tier dependencies that keep the stack alive.
Highlights the danger of over‑investing in hardware without parallel investment in data and open‑source sustainability, thereby expanding the conversation beyond compute to the health of the entire AI stack.
Redirected the panel to address data bottlenecks and the fragility of open‑source funding, leading Vilas and Shikha to discuss data stewardship, investment readiness, and concrete metrics for demand.
Speaker: Martin Tisné
The IPL analogy is useful but misleading – sovereignty is a Westphalian, territorial notion that doesn’t translate to AI. True diffusion requires active, participatory institutions, not passive trickle‑down economics.
Critiques the conventional ‘sovereignty‑first’ narrative and reframes AI diffusion as an active, institution‑building exercise, introducing a political‑economy lens to the technical debate.
Spurred deeper reflection on institutional design, prompting Martin’s later scribbles on relational sovereignty and encouraging the panel to think about new intermediary bodies (e.g., Kalpa Impact) that can bridge public‑interest and private capital.
Speaker: Vilas Dhar
We have built a Compute Demand Index and an AI Investment Readiness Index for Africa – 2.5 million GPU‑hours a year are needed, yet the continent currently has only 5 % of that capacity. Demand must be matched with talent, power, data and use‑cases.
Introduces quantitative tools to move the discussion from rhetoric to measurable targets, exposing the gap between raw demand and actual readiness.
Grounded the conversation in data, leading other participants (e.g., Shaun, Martin) to discuss practical mechanisms for aggregating demand, financing gaps, and the importance of readiness beyond hardware.
Speaker: Dr. Shikha Gitau
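The demand-versus-capacity gap implied by these figures can be made explicit. The sketch below is illustrative only, using the panel’s own numbers (2.5 million GPU-hours/year needed, ~5 % currently available), which are as stated in the session and not independently verified.

```python
# Shortfall implied by the Compute Demand Index figures quoted in the session.
# Both inputs are panel figures, not independently verified.

demand_gpu_hours = 2_500_000   # annual GPU-hours needed for Africa (panel figure)
capacity_share = 0.05          # fraction of that demand currently met (panel figure)

available = demand_gpu_hours * capacity_share
shortfall = demand_gpu_hours - available

print(f"available: {available:,.0f} GPU-hours/yr")   # 125,000
print(f"shortfall: {shortfall:,.0f} GPU-hours/yr")   # 2,375,000
```

On these numbers the continent would need to grow usable capacity twentyfold to meet stated demand, which is why the index pairs demand with readiness (talent, power, data, use-cases) rather than hardware alone.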
Compute is actually overrated. The real bottlenecks are energy, latency, and skills. Sharing GPU capacity across 10,000 km (India‑Indonesia) isn’t feasible due to latency; instead we should aggregate demand to negotiate better cloud pricing and invest in talent.
Challenges the assumption that geographic sharing of compute is a viable solution, shifting focus to energy, latency, and human capital, and proposing a market‑based aggregation approach.
Redirected the dialogue from infrastructure sharing to market mechanisms and capacity‑building, reinforcing the earlier points about skills gaps and prompting Martin to elaborate on relational sovereignty.
Speaker: Shaun Seow
Sovereignty can be relational – indigenous data sovereignty is about authority over a people’s data, not just territorial control of servers. This suggests a global, open‑resilient collaborative stack rather than a strictly national one.
Introduces a nuanced concept of sovereignty that blends legal, cultural, and relational dimensions, expanding the debate beyond nation‑state hardware ownership.
Enriched the theoretical underpinnings of the discussion, influencing Vilas’s later remarks on interdependence and prompting the panel to consider governance models that respect both territorial and relational claims.
Speaker: Martin Tisné (later scribble)
Overall Assessment

The discussion evolved from an initial focus on compute as the new frontier of inequality to a multi‑layered exploration of what true AI democratization requires. Deepali’s opening set the compute‑centric premise, but Dr. Garg’s six‑pillar framework and his challenge to the compute‑only narrative opened space for broader considerations. Martin’s warning about idle data centres and the need for sustainable open‑source funding, Vilas’s critique of territorial sovereignty, Shikha’s demand‑readiness metrics, and Shaun’s emphasis on energy, latency and skills collectively shifted the conversation from hardware provision to systemic capacity‑building, governance, and market mechanisms. These pivotal comments redirected the panel toward concrete, data‑driven solutions and a re‑imagined notion of sovereignty, ultimately shaping a richer, more actionable dialogue about building inclusive AI ecosystems in the Global South.

Follow-up Questions
What governance model is India envisioning for treating compute as a public utility, and should compute access be rationed or priced differently for public‑interest applications?
Clarifying policy mechanisms is essential to ensure equitable, affordable access to compute resources for socially beneficial AI projects.
Speaker: Andrew Sweet (question to Dr. Saurabh Garg)
How can nations transition from being mere consumers of AI to genuine co‑creators, and what mechanisms can unlock data sets for training without compromising privacy?
Addressing both capacity building and privacy‑preserving data sharing is critical for inclusive AI development across countries.
Speaker: Andrew Sweet (question to Martin Tisné)
Why are many newly built data centres in the Global South under‑utilised (white‑elephant effect), and what strategies can ensure compute capacity is effectively used?
Understanding and preventing wasted infrastructure is needed to maximize the impact of compute investments.
Speaker: Martin Tisné
What scalable models or resources are needed to address the data bottleneck, including the development of data trusts or other stewardship mechanisms?
Effective data governance frameworks are required to make data accessible for AI while respecting privacy and sovereignty.
Speaker: Martin Tisné
Is there an ‘IPL‑style’ playbook for building public‑interest compute institutions, and does rapid commercial consolidation threaten the window for creating such public institutions?
Identifying successful institutional templates and timing is important to counteract market concentration and foster public‑good compute ecosystems.
Speaker: Andrew Sweet (question to Vilas Dhar)
How can reciprocal agreements between India and African countries be formalised to ensure compute infrastructure is exchanged for data or other value, creating a true South‑South partnership rather than a North‑South model?
Structured partnerships are needed to operationalise equitable compute sharing and mutual benefit across regions.
Speaker: Andrew Sweet (question to Dr. Shikha Gitau)
Given latency, data‑sovereignty, and physical constraints, how feasible is sharing compute across distant countries (e.g., India and Indonesia), and what mechanisms (e.g., demand aggregation, philanthropy subsidies) can make compute more accessible for startups and impact organisations?
Practical and financial solutions are required to overcome technical barriers and lower cost barriers for smaller actors.
Speaker: Shaun Seow
What new institutional intermediaries should be built in the next 12 months to connect compute, data, talent, and governance, enabling scalable public‑interest AI outcomes?
Defining and establishing these institutions is a concrete step toward operationalising compute democratization at scale.
Speaker: Vilas Dhar
Will future AI models continue to demand massive compute, or will a shift toward smaller, domain‑specific models reduce the compute barrier, and how does this affect the compute‑democratization agenda?
Anticipating model evolution informs long‑term compute infrastructure planning and resource allocation.
Speaker: Dr. Saurabh Garg
What comprehensive public‑interest frameworks are needed beyond compute—covering models, talent, data, and interoperability—to ensure AI serves human welfare and sustainable development?
A holistic governance approach is required to align AI development with public‑good objectives across multiple dimensions.
Speaker: Dr. Saurabh Garg
How should sovereignty be reconceptualised for AI—from a territorial, Westphalian view to a relational, agency‑based model (e.g., indigenous data sovereignty), and what implications does this have for global AI governance?
Rethinking sovereignty can enable more inclusive, community‑driven AI governance structures.
Speaker: Martin Tisné

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

AI for Social Empowerment_ Driving Change and Inclusion

AI for Social Empowerment_ Driving Change and Inclusion

Session at a glanceSummary, keypoints, and speakers overview

Summary

The panel opened by highlighting that the impact of AI on employment is still unfolding and that companies publicly downplay potential job disruptions while privately acknowledging 30-40 % productivity gains that could translate into workforce cuts [1-2][5-8]. They argued that AI is already amplifying inequality, concentrating capital in a few tech giants and shrinking labor’s share of income, which makes the question of job impact central to any discussion of social empowerment [10-13]. Sabina warned that waiting for definitive evidence would be too late and called for immediate regulatory and institutional action to manage the inevitable evolution of AI [15-16][21].


Anurag asked whether AI investment will be monetized through labor reduction or new products and what kinds of jobs will be lost or created [31-38]. Sandhya responded that while coding can be automated, the remaining work still requires human oversight of design, architecture and security, so junior developers become “managers of AI” rather than being displaced [74-88]. She added that in marketing, finance and healthcare AI handles routine processing, but strategic planning, interpretation and decision-making remain human tasks, suggesting a shift rather than wholesale job loss [93-98][104-106].


Julie emphasized that effective AI governance depends on strong labor and regulatory institutions, co-creation with workers, and robust research to track real-world labor impacts [129-138]. She pointed to the Global Index on Responsible AI, which provides country-level data on labor rights and helps policymakers design evidence-based regulations, skills programs and social protections [233-242].


Sabina presented concrete evidence of recent layoffs in large tech firms and warned that efficiency gains are already causing job cuts, especially in the gig economy where algorithmic management lacks redress mechanisms [152-160][166-170]. She argued that in India, where only about 10 % of workers hold formal jobs, AI-driven precarity threatens a large share of the workforce, and she called for urgent reforms in competition policy, antitrust, taxation, labor law, social protection and skill development [197-206][320-334]. She stressed that without swift action, the combined pressures of AI, climate change and economic shocks could deepen inequality and destabilize economies [211-218][176-184].


Sandhya concurred that waiting is not an option and called for proactive policy, leadership and continuous reskilling to adapt to AI’s rapid evolution [355-363]. The panel concluded that AI will reshape work profoundly, and coordinated, human-centric regulation and investment in skills and social safety nets are essential to mitigate risks while harnessing benefits [339-346][400-408].


Keypoints

Major discussion points


AI will generate significant productivity gains that are already translating into workforce reductions and broader inequality.


Sabina notes that companies privately admit “30 % to 40 % time-saving…which then translates into significant workforce cuts” [8] and that AI “enables surveillance…exacerbating inequality” [9-10]. She points to concrete evidence of layoffs in large tech firms [152-154] and highlights the gig-economy’s algorithmic management as a new labor-rights threat [160-164]. Anurag frames the investment-to-productivity link as “productivity…comes from labor reduction or new products” [31-36].


There is an urgent need for proactive regulation, strong institutions, and evidence-based policy to mitigate labor market disruption.


Julie stresses that “without strong institutions…regulation of what’s happening in the labor market” is impossible [129-131] and that a “human-centric” approach requires co-creation with workers [132-135]. She cites the AI4D research program and the Global Index on Responsible AI as tools that provide the evidence governments need [233-242]. Sabina adds that competition policy, antitrust, tax, labor law reforms, and universal social protection must be acted on now, not after more data [320-326].


Reskilling and redesign of work are central, but current education systems are ill-prepared.


Sandhya describes how Wipro has built “role personas” and specific learning modules for AI-augmented roles, with “COEs inside engineering colleges” [56-58][90-91]. She also notes that junior developers will become “managers of AI” rather than being displaced [84-87]. Sabina counters that only 4.1 % of India’s labor force reports formal skills, making large-scale AI training unrealistic [326-332].


The panel reflects divergent perspectives: tech optimism versus labor-market caution, compounded by disclosed conflicts of interest.


Sandhya argues that “we are not seeing a displacement” because most work is consultative and AI merely improves efficiency [60-63]. Sabina challenges this, calling the contention “largely untrue” and warning of “precariat” growth [1-4][300-306]. Anurag discloses his personal conflict of interest (his foundation owns 70 % of Wipro), highlighting the tension between corporate and social-justice agendas [254-257].


Broader societal implications extend beyond jobs to health, precarity, and democratic governance.


Sabina links AI-driven gig-platform control to loss of redress [162-164] and warns of rising “precariat” (58 % self-employment, no safety nets) [305-307]. She also raises emerging cognitive-decline trends among youth and the risk of “outsourcing thinking” in education [313-317][380-383]. Sandhya echoes the need for human-centric policy to keep humanity at the centre of technological change [281-286].


Overall purpose / goal of the discussion


The panel was convened to assess how rapidly advancing AI will reshape labor markets, to contrast industry optimism with labor-market research, and to identify concrete policy, regulatory, and educational actions that can harness AI’s benefits while preventing widening inequality and job displacement.


Overall tone and its evolution


The conversation begins with a cautious, alarmist tone (Sabina’s warning that the impact “is yet to unfold” [1-2] and that companies hide job-loss figures [5-8]). It then shifts to a more optimistic, technocratic tone as Sandhya describes reskilling initiatives and the re-definition of developer roles [84-87]. Julie introduces a balanced, evidence-driven tone, emphasizing institutional capacity and research tools [129-135][233-242]. As the dialogue progresses, the tone becomes urgent and prescriptive, with repeated calls for immediate regulatory reforms and social protection [320-326][355-363]. The discussion closes on a reflective yet resolute tone, acknowledging the seriousness of AI’s societal impact while affirming that human-centric policies can steer outcomes [281-286][398-406].


Speakers

Sabina Dewan – Expertise: Labor market impacts of AI; Role/Title: Researcher on labor markets, associated with the Just Jobs Network (panelist) [S1]


Julie Delahanty – Expertise: AI governance, development policy; Role/Title: President, IDRC Canada [S2]


Sandhya Ramachandran Arun – Expertise: Technology and AI implementation; Role/Title: Chief Technology Officer, Wipro Limited [S5]


Anurag Behar – Expertise: Education, labor market research; Role/Title: Chief Executive Officer, Azim Premji Foundation; Moderator of the discussion; oversees three universities and works with more than 100,000 teachers [S7]


Additional speakers:


– None identified beyond the listed panelists.


Full session report – Comprehensive analysis and detailed insights

1. Opening – labour-market uncertainty – Sabina Dewan opens by warning that the impact of artificial intelligence on jobs “is yet to unfold” and that firms publicly deny any threat while privately admitting 30 %-40 % time-saving that translates into workforce cuts [5-8]. She stresses that AI is not merely a set of algorithms but a socio-political system used for social, political and economic engineering [5-9]. Sabina cites AI-driven surveillance, hiring influence, and the concentration of capital in a few tech giants such as NVIDIA, whose market cap now exceeds $5 trillion [9-13]. She asks whether we can afford to wait for more evidence or must act now with regulation and new social institutions [15-21].


2. Framing the investment debate – Moderator Anurag Behar notes the massive flow of capital into AI and asks how that investment will be monetised – through productivity-driven labour reduction, new products and services, or a mix of both [31-36]. He then directs his first substantive question to Sandhya Ramachandran Arun about which jobs are likely to be displaced, which will be created, and the dynamics that drive those changes [40-44].


3. Technology-industry view – Sandhya Ramachandran Arun explains that AI is having “a very huge impact … as a disruptor,” forcing a rethink of job creation, talent reskilling, and hiring criteria toward learnability, communication and adaptability [48-54]. Wipro has built role-personas, specific learning modules, and Centres of Excellence inside engineering colleges to upskill every employee, from the board to the newest hire, through calibrated AI-augmented learning [84-86][90-91]. She notes that AI can now generate 50 %-70 % of code, but success still depends on human oversight of design, architecture and security, turning junior developers into “AI-managers” rather than eliminating their positions [84-88].


Sector-specific impacts – In marketing, AI produces high-quality visual, audio and video content, while strategic planning and ROI assessment remain human responsibilities [93-99]. In finance, AI handles transaction processing, but humans provide the wisdom needed to interpret data and align outcomes with human values [104-106]. In healthcare, AI augments clinicians and improves fraud detection [111-115]. Because Wipro’s model is consultative, it has not yet seen large-scale displacement [60-63]. Sandhya likens the technological trajectory to earlier transitions such as the horse carriage and carbon emissions, arguing that just as societies introduced guardrails for carbon, AI will require human-centred guardrails akin to those for nuclear energy [200-202].


4. Governance perspective – Julie Delahanty stresses that effective AI governance cannot rely on technology alone; it needs strong labour-market institutions, regulatory bodies and a vibrant research ecosystem [129-131]. She highlights the AI4D programme’s human-centric co-creation with workers, employers and communities and its sub-Saharan Africa research programme that collects household, firm-level and worker-level data to understand real-world AI impacts [132-135]. Julie also points to the Global Index on Responsible AI (covering 138 countries with a dedicated labour-rights dimension) as a tool for evidence-based policy [233-242]. She warns against codifying regulations without sufficient evidence, urging a balance between innovation and safety [241-246].


5. Labour-market evidence and policy urgency – Sabina returns with concrete evidence: major tech firms have already laid off thousands of workers, publicly attributing cuts to macro-economic factors while AI-driven efficiency is a hidden driver [152-158]. She flags algorithmic management in the gig economy as a new labour-rights problem because workers can be removed from platforms with no avenue for redress [160-166]. Sabina argues that waiting for more data would be “way too late” and calls for urgent reforms of competition policy, antitrust law, and tax policy (including wealth and transaction taxes), alongside labour law, universal social protection and massive investment in skill-development systems [320-334][326-332][173-176][180-184].


India-specific context – Only about 10 % of Indian employment is formal, so loss of formal jobs would cascade into the informal economy [197-206]; 58 % of Indian workers are self-employed with no health insurance or safety net [305-311]. Emerging research shows cognitive decline, depression and anxiety among the current generation of young people, which could increase their replaceability by machines [313-317].


6. Conflict of interest disclosure – Anurag reveals that the Azim Premji Foundation, which he leads, owns roughly 70 % of Wipro, creating a personal conflict between tech-sector interests and his mandate to protect the most vulnerable [254-257].


7. Education-sector alarm – Anurag warns that AI is “outsourcing thinking” for teachers and students, leading to cognitive decline and forcing universities to revert to paper-and-pencil examinations because AI-generated work is hard to assess [378-387][393-398]. He likens AI’s societal risk to nuclear technology, emphasizing that unlike nuclear hazards, AI permeates every individual’s daily life, making governance far more complex [401-406].


8. Consensus and concluding remarks – All panelists agree that AI will reshape work rather than simply eliminate it, creating new “AI-manager” roles while preserving functions that require human creativity, empathy and wisdom [281-286][300-306][355-363]. Immediate, evidence-based regulation is essential; waiting risks deepening inequality and labour-market precarity [15-16][173-176][355-363]. Julie’s Future of Work project and the Future Works Collective (funded by IDRC) are presented as platforms for re-thinking ways of working [447-452]. Sandhya stresses that “watching and waiting is not an option,” calling for proactive leadership, policy embedded in platforms, and continuous re-imagining of work and training [355-363]. The panel closes with a shared commitment to a human-centred, proactive governance framework supported by strong institutions, robust data and coordinated global action.


Session transcript – Complete transcript of the session
Sabina Dewan

say, you know, it’s yet to unfold. We don’t know what the impact is and it’s yet to unfold. I believe that that contention is actually largely untrue. And let me tell you why. When you talk to companies privately, publicly they will not own up to the potential job disruptions as a result of AI. And partly that is because many of the big companies actually are known to be formal job creators, right? And that is a very important part of their image and their contribution to economies and societies. But when you talk to them privately, in India especially, our research shows that they will own up to anywhere between 30 % to 40 % time saving, right, productivity gains, which then translates into significant workforce cuts.

We already have plenty of empirical evidence that suggests that… that AI systems are enabling surveillance, they’re influencing decisions about who gets work, when, and what entitlements people have access to. We also know that AI systems are grossly exacerbating inequality. If you just look at the market caps of some of the top technology companies, you know, NVIDIA’s $5 trillion market cap, right? So there’s a massive accumulation of capital that really, you know, capital share is growing and labor share of income is getting smaller and smaller. So I guess, you know, this discussion that talks about social empowerment, a key question in that is the question of the impact on jobs. And the question that I, you know, put out there is, so if you even buy the idea that we don’t know, that we don’t know what the impact is, what the impact is going to be.

Can we afford to just wait, right? Or do we need to take every action possible in terms of regulations, in terms of building social institutions, in terms of really working to build systems that can manage this inevitable evolution of AI, whether we like it or not. The last thing I’ll say is just, you know, yes, there have been technologies before. Yes, they’ve had their own forms of inclusion and exclusion. But at the end of the day, this is the first time where you have the very pioneers of that technology, Geoffrey Hinton, Stuart Russell, Dario Amodei, the very pioneers of the technology themselves are ringing alarm bells. And would we not be wise to heed them?

So with that, I hope, provocative context setting, I am really grateful, on behalf of the Just Jobs Network, again, with support from IDRC and FCDO, to welcome our really esteemed panelists. Mr. Anurag Behar, who is the chief executive officer of the Azim Premji Foundation, has very graciously agreed to chair this conversation, moderate the discussion. We have Dr. Julie Delahanty, who is the president of IDRC Canada. Thank you, Julie. And Ms. Sandhya Ramachandran Arun, who is the chief technology officer of Wipro Limited. Thank you so much for being here, Sandhya. So, Anurag, over to you.

Anurag Behar

Thank you. Thank you, Sabina. Good evening, everybody. There’s so much investment going into AI. Why is so much investment there in AI? We are in the fifth day of the AI summit, so this is like the 42nd kilometer of a marathon, right? At this stage, such investment has to be justified by some monetization. And where is that monetization going to come from? It’s either going to come from productivity, which comes from labor reduction, or it is going to come from new products and services, or a combination of both. That’s where it’s going to come from, right? We will talk more about that. At this moment, my job is easy.

I’m going to just ask Sandhya, because she’s the representative of the technology world here really, which way is this technology headed? And in very simple terms, what is she seeing as its implications on jobs? I mean, what kind of jobs are going to get displaced, destroyed? And what kind of jobs are going to get created? And what’s the underlying dynamic because of which these jobs will be created and destroyed? So how does she see it in the world of technology? Let’s start with that.

Sandhya Ramachandran Arun

Sure, thank you so much. Thanks, Anurag, for the question. So as far as the tech industry is concerned, we are really witnessing a very huge impact of the AI evolution as a disruptor. We’ve had to revisit how job roles are created. We’ve had to revisit how talent has to be reskilled. And we have also revisited the responsibility, not just in terms of security, safety, but also in terms of what does it mean to our colleagues and our hiring. I think initially there was a huge amount of fear that we would not hire from colleges, which is now dispelled, because Wipro continues to hire from colleges, and so do our competitors. But the criteria for hiring has shifted to a more nuanced, a more calibrated way of looking at learnability, looking at whether a person communicates technical ideas well, looking at whether a person is adaptable.

Because AI is a technology that is changing as we speak. So no one can claim to be an expert in AI and remain that way for the next five days, possibly, because things are changing every day. With regard to our own talent, we have created role personas, and we have created very specific learning modules on how the role changes with AI. And everybody from the board to the CEO down to the youngest employee is going through a very calibrated learning process. And there is also a very calibrated way in which services and ways of working are changing. So to that extent, we see a change. We are not seeing a displacement, because most of the work that we do is consultative in nature, in spite of the market valuation erosion that we saw some time back because of news from Anthropic and Palantir.

The insiders in the technology world were already aware of the transformative nature of these solutions coming up. And we have already been using these solutions significantly for over a year. So from a market sentiment point of view, possibly there was an erosion, but from a technology impact perspective, we have been bracing ourselves for the change and our journey of transformation continues.

Anurag Behar

I just have a follow-up on that, and then I’ll move to Julie. I’ll put it very, I mean, let’s say, a very, very simple, commonsensical question. Which is that we are hearing about these tools where coding has become so much easier, right? So, and this is not just about Wipro, it’s about the IT industry in general. So if coding is becoming so much easier, and 50 % or 70 % of coding can be done by these AI tools, then isn’t it inevitable that IT sector jobs will be lost? Or if there’s business or volume growth, much less hiring will happen. So that’s part one to my question. Part two is, if you move away from the IT world, and if you go to, let’s say, design and marketing, or, I mean, let’s say my world of the academy, the world of research, so many of research assistants and those of you who have used research assistants or work with research assistants, so much of that job is being done easily by AI.

So part one of my question, if coding is becoming so much more efficient, isn’t it inevitable jobs will be lost, or so much hiring will not happen, whichever way? And aside from that, in the outside world, in other industries, what is it that you’re seeing?

Sandhya Ramachandran Arun

Sure. Let me just address the coding part of it. I think for over 15 years, the industry has been trying to explain to the outside world and as well as to the talent aspiring for careers with us that we do not have coding roles primarily. Coding is a very small task in what a software engineer does or a software developer does. There is the need to understand business outcomes. There’s a need to understand customer experience. There’s a need to understand architecture and what is a well-engineered code, right? So this is not new today. This has been in existence. I mean, I’ve been doing digital transformation for the last 15 years, and we’ve been trying to change how the world thinks about these roles.

Yes, the day is here when coding can be completely handed off to an AI agent. And that is indeed a fact, right? But the fact that supports the success of this code in business is really the ability to have a human oversee the design, the engineering, the architecture, the security, as well as delegating the coding work to an agent. So the role of a junior developer really becomes that of a little manager of AI, as opposed to saying, you’re displacing my job. The person’s actually going up if the person really is aware and aligns to what the organization needs in terms of figuring out what is required. And those are the trainings that are happening.

That’s what’s happening in terms of selection. We now have COEs inside engineering colleges where we are talking to universities about this as well. And what about other industries, whatever you’re seeing? So in other industries we work with, there is a variation. So if you think about it, in marketing, there’s a lot of work that gets offloaded. The strategy, the planning, the oversight on execution, the ROI on marketing still remains a strategic thinking job that remains with humans. But you can generate a lot of good quality visual, audio, and video content using AI today. And probably it’s making marketing a whole lot more efficient. Now, if you take finance, for example, again, a lot of processing gets taken over by AI, but it still needs a human to bring in wisdom in terms of how the data gets interpreted, how decisions are being made, and also to make sure that the AI aligns to human values in some sense.

So those kind of changes are happening in these functions. Industry-wise, there is a lot happening positive, I would say, in, say, healthcare, for example, even in banking, for example, where we are able to fight financial crimes a whole lot better. In healthcare, we are augmenting technicians, clinicians, and doctors with more intelligent input for decision-making. And while AI can make the decision, you don’t allow it to make

Anurag Behar

So, Sandhya, just put a pin on something that you said, and I’ll come back in the second round. You used the words human and wisdom. So just put a pin on that, and I’m going to come back to that in my second round. Julie, if Sandhya was less optimistic than she is, she wouldn’t be representing the tech world, you know. So one should expect that she’s as optimistic as she is. But what I wanted to ask you was that, you know, eventually, and, you know, from your vantage point, you know, you’re seeing how governments are dealing with this evolving situation, and not just on AI safety and, you know, all the other things, but particularly on labor markets.

So how can governments and institutions govern AI responsibly, such that any disruption in labor markets is sort of minimized or handled well, or the transition happens well? So let’s assume this picture that Sandhya has painted, that, of course, there’s something disruptive going on, like she talked about with marketing and advertising. So some people are going to lose jobs there. So what should government institutions do? How does one govern this situation, such that the benefits are maximized? And I’m talking particularly about labor markets, not the other stuff, while harms are minimized.

Julie Delahanty

Yeah, thank you so much. I’m going to answer that question, but the last question just made me think about two things. One was, you know, I’m old enough to remember when computers first came around in the 70s and, you know, what we thought would happen with computers and the job losses that we anticipated. And, of course, we did lose jobs. There was a lot of labor disruption related to, you know, typing pools and different kinds of ways. But at the time, even home computers, nobody could even fathom what you would do with a home computer. The conversation then was that home computers would be used to develop recipes and that you’d have recipes because homes were only where homemakers were.

People couldn’t even, there’s such gendered ideas that people just could not understand what you would do with a home computer. So I think in the same way, some of what… is going to happen with AI in the labor market, we may not be able to anticipate just yet. So just as a reminder of where we came from with other important technologies. But when it comes to governance, I think the important issue is that it’s not really only about the technology, it’s really about institutions, it’s about workers, and it’s also about research. So when it comes to institutions, really without the kind of strong institutions in countries, regulatory institutions, labor institutions, strong research ecosystems that are able to really understand what’s happening in the labor market, I think it’s very difficult to end up having a strong regulation of what’s happening in the labor market.

So just those institutions are incredibly important to understanding where job losses might be, where biases might happen, and really investing in people and institutions is something that has to go hand in hand with our thinking around technologies. Another area is around making sure that when we’re thinking about new technologies, that we’re making it very human-centric. And one of the things that the AI4D program does when we think, what do we mean by human-centric? It’s really about making sure that we’re co-creating new technologies with the co-creation of workers, of communities, of employers, so that we can understand how to enhance job quality, how to enhance productivity, rather than increasing inequalities or changing who benefits.

So really understanding who benefits, who’s going to face the kinds of disruptions is really important so that we’re not thinking about that as an afterthought. That we’re really shaping AI systems using that knowledge. And similarly, I think the importance of research in, and I’ll just give an example from our AI4D work, is we’ve done a big research program with partners in sub-Saharan Africa that’s looking at, that’s collecting household data, firm-level data, worker data, to understand what the real world impacts of AI are on labor markets. And it’s that kind of tracking, who’s going to benefit, understanding who’s going to be displaced, and how the tasks and skills are really changing, that’s going to allow governments to better design and think about what kind of skills development they need, what kind of social protections they need, and how to support labor rights.

So really, I think growing AI responsibly doesn’t mean avoiding innovation or avoiding change, but it’s really about shaping AI so that it, it does strengthen labor markets and supports workers and creates more opportunities.

Anurag Behar

Thanks, Julie. Thank you so much. I’ll move to Sabina. Sabina, I mean, since you are the labor market expert here amongst us, and the researcher, what is it that you see? What is it that, I mean, there’s so much of news, we have had these five days of this grand summit. What is really going on? What do we understand, and what don’t we understand, in the context of the impact of AI on jobs? How do you stack up?

Sabina Dewan

So, just a little tongue-in-cheek: if we go back to the 1600s and we’d asked ChatGPT then if Galileo was correct, it would have said no way, right? So this technology, you know, for all the possibilities that it brings notwithstanding, it is not just a technology. We can’t just look at AI as machine learning, large language models. It is a system, it is an instrument that is being utilized for social, political, and economic engineering. And my job is to look at the impact of that in labor markets. So if we limit ourselves just to the question of how many jobs will be lost, how many jobs will be gained, that’s A, not even an appropriate question.

Two, I agree with my fellow panelists that we don’t necessarily know what sort of new possibilities there might be. But what we do know, what we already see, is also something that Sandhya talked about, which is the efficiency gains. And any time there are efficiency gains, there are layoffs. And please, you do the research, right? Like, I do my job. But look at the newspapers. Companies are laying off thousands of workers already. All the big tech companies have in recent years been laying off workers. Now, sure, they can say that this is a confluence of many factors. It’s not just AI, and most of them will not just ascribe it to AI. They might ascribe it to macroeconomic conditions, to the confluence of various other forces like the pandemic or trade shocks, all of which is true.

But AI is one really big disruption that comes on top of all the other disruptions, and there’s already plenty of evidence that is suggesting that these disruptions are not just changing the quantity of jobs in terms of how many companies are already laying off workers. Again, I mean, we’ve heard also projections from the tech companies themselves, right, of what the possible disruptions and layoffs are going to be. But we also already have evidence of people being laid off. But then on top of that, I would say let’s look beyond just how many jobs are lost and how many jobs are gained to actually look at, I mean, take the gig economy, for example, and algorithmic management of gig workers.

That is a labor market issue. If a gig worker is wronged, the platform just, you know, they just get kicked off the platform. There’s no mechanism for redressal because it’s an algorithm that’s managing the worker. So who do you talk to? I mean, I can go on and on and on. Now, we might be separating out platforms from AI, but actually the algorithms are AI, and it’s embedded in a platform economy that is increasingly becoming the architecture for transactions, and it’s deeply troubling. And then the last thing I’ll say is, so I’ve already said… like in terms of quantity of jobs, we are already seeing evidence of layoffs, right? We’re already seeing the evidence of layoffs.

It’s just that people aren’t necessarily able to pinpoint and ascribe it to AI. That’s point number one. Two, we need to go beyond the question of quantity of jobs and also look at the impact of this technology on quality of jobs. And third, we need to really deeply think about, again, to Julie’s point, the architectures that can help mitigate some of the potential adverse effects of this technology, both on the quantity and the quality of jobs. And we don’t have the luxury to sit and wait and say, hey, let’s get the empirical evidence and then we’ll figure out what to do. That will be way too late, right? So what do we need? We need countries to think about competition policy.

We need to look very closely at tax policy. We need to look very closely at how labor laws need to change. We need to look at social protection systems. We need to look at skill systems, everything that Julie just mentioned, right? But we have to start from an urgency about this is having a huge impact already. It is likely to be, you know, even bigger, and we don’t have the luxury of time to just sit back and wait and say, hey, we need more empirical evidence before we figure out how to mitigate the negative or potentially negative circumstances. So that is what I think is, you know, really, really urgent, that everyone get on that bandwagon and say we need to create these systems and ask for them and do it in our work and do it in our advocacy.

Anurag Behar

Yeah, thank you. I’ll just follow up with it. So, and Julie, please pardon me for saying this. I’m saying this tongue in cheek, and all my friends and colleagues here who are not from India, please pardon me for what I’m going to say. So, you know, we Indians, why should we care about all this? And the reason I’m saying that is because, you know, well, just about 9 or 10 % of our employment is in the formal sector. So even if there is huge disruption in labour markets, maybe 2 % of these people are going to lose their jobs, right? So why should we care about all this stuff? Do you have any comments?

Sabina Dewan

I do. You can be sure I do. You can be sure I have a comment about that. So if you look at the numbers, we are more than 90 % in India in informal employment. So Anurag’s exactly right. He knows his numbers. So, you know, essentially what you’re saying is 1 out of every 10 people stands to be potentially affected, right? That’s one way of looking at it. The other way of looking at it is we have such few good jobs, right? We have such few jobs in the formal labor market. Only one in 10 people get to have a formal sector job. And now you’re taking that away as well, right? That stands to be disrupted. So again, we’re moving to a world of work that is much more precarious, much more insecure, much more uncertain, where workers don’t, they’re not even called workers anymore.

We call them self-employed contractors. They have no health insurance. They have, you know, this is the precaritization of the labor market. So not only do you have, you know, pandemic, climate change, energy transition, trade shocks, and AI disruption, but you have a world of work that is much more precarious, disrupting everything, but you also now are moving to a place where work is becoming more and more informal. Formal jobs are being, you know, gotten rid of in the name of, please, I apologize, in the name of efficiency gains, right? And so, yeah, so that’s why in India we should be really scared, because we have such few formal jobs. And then imagine if you have these jobs in the IT sector in Bangalore disappearing, all the workers that used to go to bars and restaurants and get loans to buy houses and cars, that starts to disappear and it has cascading effects across the economy.

So, you know, so the impact of this is definitely in the global south. It is definitely beyond the few formal sector jobs. And it’s deeply disturbing. And we need to actually work to understand from technologists very clearly, you know, how these efficiency gains are going to happen, and how different governments and so on, and public architecture, can manage some of these changes. So we do need to care. Definitely need to care. We need to care urgently.

Anurag Behar

All right. So I’m going to come to Julie on this and come back to you, Sandhya, because I put a pin on something that you said, right? So, Julie, I mean, let’s assume that the alarm that Sabina is raising is at least half true, right? It’s more than half. You know, I have a deep conflict of interest, and I’ll tell you once I’m sort of done with this. So, Julie, how can, you know, what are the lessons that you’re seeing across countries, you know? You’re seeing the vast landscape, right, and IDRC has a view across the continents. So what lessons can be learned from across the continents, such that AI is able to create opportunities, right, part of what Sandhya talked about, and doesn’t really deepen inequality, or it minimizes it?

What are you seeing across the countries? Something, some good stuff.

Julie Delahanty

What is that? What is that regulation? And I think one of the – we have this AI – the Global Index on Responsible AI that some of you may have heard about. It’s been talked about a lot during the conference, or at least some of the sessions that I’ve been to. And really what that is, it’s the largest global rights-based data set on responsible AI. And what is distinctive about it is that it includes a dedicated focus on labor protection and the right to work. And by providing that country level, that sort of comparable data, it looks at 138 countries. So by providing that comparable data, it’s helping governments to understand what they might need to do better, what some of the issues are, how they can improve.

So really using that information to support governments in understanding what is the regulation, what is the solution that they need, not just – you know, it has to be based on some evidence. And I think the third big thing, which won’t be a surprise to anybody here that I’m saying this, is that we really need to have good evidence, and evidence really matters when it comes to these issues. So tools like the Global Index on Responsible AI really allow policymakers to move beyond kind of the abstract must-fix regulation to assess how governance of AI actually affects people’s rights, affects their jobs, affects their working conditions, and supports more proactive policymaking on labor regulations, again, skills, social protections, et cetera.

And I think equally important is that we’re still learning. There is no standardized, here is the regulation that you need codified. Through the kind of work that we’re doing, I think we’re learning what’s the balance between… supporting innovation… and still supporting regulation and safety. And I think working together across many countries to share that kind of information is what’s going to support us in finding the right tools.

Anurag Behar

Thanks, Julie. I’m going to come to you, Sandhya. But I just want to disclose something to all of you. That’s my conflict of interest. You know, Sabina is a labor market researcher, and naturally I would think she’s saying what she’s saying. Julie represents IDRC, and therefore she’s saying what she’s saying. Sandhya is the tech person here, so she’s saying what she’s saying. My problem is I’m responsible for this organization, Azeem Premji Foundation. And my problem is the following. My problem is that the foundation owns about 70% of Wipro. Okay. So whatever is good for a tech company is good for us, right? On the other hand, my job is not to take care of the technology world.

My job is to take care of the most vulnerable people in the country, right? The very poorest, the most marginalized, those who have no recourse to social protection. That’s my job. So I am a deeply conflicted person, right? Very deeply conflicted person. And I wanted to disclose that because I’m going to come to that towards the end. And it has a specific bearing on the question that I’m going to ask Sandhya, which is, you said something fascinating. And I want to put a pin on that. And I’m pulling your leg, you know, which is that rarely do you hear such words from a tech person. She talked about human care and wisdom, right? Didn’t she?

Okay. So, you know, really, my takeaway from what you were saying is that the tech stuff, you know, the coding and that kind of stuff, that can get automated. But something that is human – understanding people, understanding desires, how do you work with people – that’s what is hard to do. And that’s something that you’re already seeing, right? So would you want to sort of comment on that?

Sandhya Ramachandran Arun

Yeah, so the stereotype that techies aren’t human is a little unfair, I think, so don’t anchor it in your heads. But then, yeah, where do I start? At the end of the day, what do technology consulting and technology services try to do? They try to help our client businesses become more successful. And our client businesses in turn become more successful when they are innovative, when they are creative, when they are growing, and when they are doing their business profitably. Or, if they have already reached a state of maturity, they are trying to bring in a whole lot of efficiencies as well, right?

So it’s the S curve where you have an idea, you nail it, and then you kind of scale it, and then you kind of start sailing. And when you’re sailing, that’s when you become a big battleship and you have to focus on discipline and efficiency and ensure that you’re making profits just the same even while you’re running this big ship. But then the cycle doesn’t end there. It kind of keeps going. You keep coming up with new ideas, you keep scaling it, and you keep sailing it. And so profitability starts off with an investment, it grows, and then you have to become super efficient to remain profitable. And I’m saying this to my boss, because every dollar that we earn funds, to the tune of about 66 cents, whatever efforts the Azeem Premji Foundation uses for welfare, right?

And I think it’s a beautiful model, and I don’t think an AI could have thought of it. So therefore I do believe very strongly that creativity, wisdom, vision, foresight, human centricity is core to any technology disruptor that comes about. So if you imagine the days when there were horse carriages: all the horses would have been kind of crowding the roads, and people would have been going from place to place, and at the end of the day you would have had a whole lot of methane, which would have kind of ended the earth a long time back because of global warming. But yes, vehicles did come, and you did have carbon fuel, and the evolution continues.

So I don’t think technology is going to stop. So human ingenuity is going to keep bringing technology disruptors. These technology disruptors are going to be more and more exponential in terms of what they can do. And it is up to humans to figure out how to create policy, how to create a governance mechanism, and how to ensure that we derive benefits, mitigate the risks, and at the same time ensure that humanity is at the center of all of this. Right? Now, this is easier said than done, but we’ve done it with nuclear energy. Despite the disasters, the fact that you and I are still alive today and thriving and living a better life than we ever lived in the last 100 years is an example that, yes, you could have accidents that are preventable, but accidents are created by humans.

And it’s up to the leadership to ensure that they put the required guardrails. It could be policy. It could be governance. It could be guidelines, whatever you call it. And you can even hire a leader…

Anurag Behar

Yeah, it’s good to hear that, you know. I’m just going to come to one round and then perhaps have the last word, if I may. Yeah, okay. So, Sabina, what’s your take? What should we do? What should we do, really?

Sabina Dewan

So I’ve already kind of said what we should do, but first, Sandhya, everything you said really resonated with me, right? And I fully agree that, you know, the humans have to take responsibility. I can think of a few very worrying scenarios where there are leaders in the world that have access to, you know, nuclear weapons that perhaps… shouldn’t have access to nuclear weapons, right? So how much confidence do we have in people, and particularly when you look at the overall trend of growing precarity? Again, take India alone. Fifty-eight percent of our employment is now self-employment. And these are people, workers, that have no coverage of health insurance or any kind of safety net.

Add to that the fact that, like, there’s all these different forces coming that we don’t know, you know, if AI disrupts jobs or pandemics happen. We all saw what happened with migrant workers walking back to their villages, hundreds and thousands of migrant workers, right? There is a lot more precarity in the labor market than there ever has been in the past in modern history. And the problem is that regulation, and the regulation of the labor market, across the globe is getting weaker and weaker in this respect. And then we don’t have precedent, as Julie said. Like, we’re still trying to figure out exactly what we should do, right?

But I will say, I mean, I’ve said many things, and I will say that, you know, in the meantime, AI is different, because this is also the first time – research is now showing – the first time that the current generation of young people have shown cognitive decline, right? So, I mean, rates of depression, rates of anxiety, cognitive decline. How does cognitive decline affect your ability to operate at work and then be replaced by machines that are more efficient because you’re getting stupider? Like, right? Sorry, but this is a really worrying scenario. So what should we do? I think I’ve said this multiple times. Regulation and building of social institutions. But I’ll take Julie’s challenge and say, okay, let’s go a level deeper.

I think we need to look at competition policy very closely. We need to look at antitrust. We need to look at tax, and within tax, we need to look at, you know, the full gamut from, you know, certain kinds of transaction taxes to a wealth tax, you know, corporate tax rates, the whole gamut of tax tools that we have at our disposal. We certainly, in an area that I know well, need to look at labor regulations, right? There’s a lot of discussion now about what should happen in the gig economy. But, you know, how do you distinguish, if two people have lost their job, how do you distinguish, you know, between them?

You can’t say, okay, this person lost their job to AI, so we’re going to give them health care and, you know, other kinds of support, but… this person, we’re not, right? Like, you need to have universal systems of support for workers, of health care, of other forms of social security, that enable consumption smoothing as well, so the economies keep functioning. We need to invest heavily in our skill systems. I can talk about Indian numbers till I’m blue in the face: for all the investment and talk about skills training in India, only 4.1 percent of respondents in our labor force survey, you know, identify as having any kind of formal skills. Only 4.1 percent, despite, you know, us saying Skill India and talking about investments and skills for the last, you know, well over a decade and a half. Skill systems.

There’s also well -documented research about how education, you know, the quality of education is so poor. So how do you take a young person in a remote part of India who can barely read and write, might say that I’ve graduated, I’ve done eighth grade or tenth grade, eighth class, tenth class, you know, even twelfth class, but can barely do foundational reading or math, right? How do you take them and say, I’m going to train you for AI. Yeah, that’s what I’m going to do. Like, it doesn’t work. It doesn’t work. So we need to actually fundamentally think about regulations. We need to very urgently work on our education and skill systems that meet people where they are.

We need to definitely think about universal social protection systems that enable workers to transition between occupations, from one sector to another, from one occupation to another. And I can go into much more detail, because this is something that my organization has worked a great deal on: what kind of systems we need to enable workers to be better protected and be able…

Anurag Behar

Thanks, Sabina. We’ve got, I think, five minutes or so, so I’m going to try and wrap up. Julie, would you want to comment?

Julie Delahanty

Yeah, I just want to make a fairly random point, I think. And that is, in addition to the Artificial Intelligence for Development program that we have, we also have a Future of Work project. And I think one of the interesting things there that we don’t talk about as much: everybody is very worried about job loss. That’s kind of the big one, it’s job loss. But actually, one of the bigger issues that’s happening is rethinking how to work, and ways of working, and the disruption that’s happening within jobs and within the workplace. And so I think that, within institutions and organizations, that’s not necessarily about job losses. It’s about a complete shift in the way that we do our work and how workers are going to adapt to that fundamental shift in the way that they work.

So it was just a random thought.

Anurag Behar

I don’t think it’s a random thought at all. I think it’s a salient foundational thought, you know, for this discussion. You want to comment on that one line? Because that’s such an important point.

Sabina Dewan

Yeah, no, I mean, just to say that, you know, the Future Works Collective is a global consortium of researchers that IDRC funds, that JustJobs is part of, that focuses exactly on that. So I agree 100% that that is a foundational and very important issue.

Anurag Behar

Sandhya, what about you? How would you want to respond to everything Sabina has said?

Sandhya Ramachandran Arun

Look, I think… Watching and waiting is certainly not an option. I mean, we don’t want to be in a Game of Thrones situation when you’re saying winter is coming for some 22 seasons and then it comes. Nobody’s going to wait for it. So we know what’s coming, and we know what’s coming is also capable of evolving and changing tremendously. So we need to learn to change. And yes, we do need to elect good leaders. We do need to have policy at all levels. We need to have policy embedded in platforms. And of course, we need to have a lot of reimagining work and training of workforce. So yes, I think to some extent, painting doom and gloom is good.

Then we start acting, right? But to some extent, I think it also shouldn’t make you so paranoid that you become a deer in headlights. So yes, we should act, and we should move forward on all of that that all of us agree on.

Anurag Behar

It seems so. It seems so, absolutely. No, but, you know, I think that’s, in some senses, a very good summary, what you just now said, right? What I wanted to say was that this phrase that’s used, boomer and doomer, boomer and doomer. So in a sense, my head is the boomer and my heart is the doomer, given my role. I want to take you just for a minute, which is, my job is more to do with education. So we run three universities. We work with, at any point in time, we are working with more than 100,000 teachers, right? And so I’m an education person. I’m not the labor market or the tech person here, right? And I am deeply concerned by the effect of AI on education, deeply, deeply concerned.

In fact, I feel that AI is attacking the very foundation of education. The very foundation of education. What AI is doing is saying, the phrase artificial intelligence, it suggests what it does, which means you essentially outsource your thinking. So teachers are outsourcing their thinking and students are outsourcing their thinking. So essentially, and that’s what Sabina was referring to, but she was referring to in the context of social media, that for the first time in this round of sort of assessments, we are seeing cognitive declines, or on test measures we are seeing declines in student performance. I cannot tell you how serious the issue is. And it’s impossible to regulate this. It’s impossible to regulate this because it’s everywhere.

So the only way we are able to deal with this, in the universities at least, is that all assessment, examination, is now returning to the old-world paper-and-pencil, in-class test. No home assignments, no project work, nothing. Just come here, sit, and write the examination. It is truly serious. I mean, we don’t know how to tackle this right now. And the reason I talk about that is I want to go back to the analogy that Sandhya used. And I’m so glad that she did that, which is that it is as serious as the nuclear technology. And in one very deep way, it is far more serious than the nuclear technology, because nuclear technology did not reach out and affect every individual human being.

The possibility of policies and governance to be able to circumscribe, to put boundaries, to manage – those possibilities were far greater. And here, this highly disruptive – no, perhaps the most disruptive – of technologies is in retail form, right? This is retail transformation of humanity. It is so hard to do this. But I’m really glad that with the three of you here, we have this sort of reasonable conclusion, if I may say so, that we are really facing something as serious as the nuclear technology. And you can’t run away from it. It’s happening. Job losses will happen. We’ve got to figure a way out of it. And I would want to close on this human note: that eventually, perhaps, those jobs that require wisdom, empathy, care, human understanding, they are going to be the hardest to replace, if at all.

And they will stay. And that’s what one can see in the tech world. So with that, I want to thank all three of you. Thank you so much. I want to thank all of you for coming here. Thank you very much. Thank you.

Related Resources: Knowledge base sources related to the discussion topics (21)
Factual Notes: Claims verified against the Diplo knowledge base (6)
Correction (high)

“NVIDIA’s market cap now exceeds $5 trillion”

Industry data show Nvidia’s market value was about $3.3 trillion at the end of 2024, not over $5 trillion, and peaked around $3.5 trillion in a brief period [S78] and [S79].

Confirmed (high)

“Effective AI governance requires input from diverse stakeholders including the scientific community, innovators, and civil society organizations.”

The knowledge base states that comprehensive AI governance must involve a broad range of actors beyond government, such as scientists, innovators and civil society [S74].

Additional Context (medium)

“AI4D programme’s human‑centric co‑creation with workers, employers and other stakeholders.”

While the report mentions the AI4D programme, the knowledge base discusses inclusive AI initiatives that emphasize co-creation with workers and other stakeholders, aligning with the programme’s approach [S76].

Additional Context (medium)

“Wipro has built role‑personas, specific learning modules, and Centres of Excellence inside engineering colleges to upskill every employee.”

The knowledge base describes similar skilling initiatives in engineering colleges involving national platforms and multiple tech firms, indicating that such models exist though not specifically attributed to Wipro [S86] and [S87].

Additional Context (medium)

“Labour‑market uncertainty about AI’s impact on jobs; firms publicly deny threat while privately admitting 30‑40 % time‑saving that translates into workforce cuts.”

Research shows that despite rapid AI adoption, the labour market has remained stable and fears of large-scale job loss have not materialised, providing nuance to the claim of imminent workforce cuts [S73].

Additional Context (low)

“AI can now generate 50‑70 % of code, turning junior developers into “AI‑managers” rather than eliminating their positions.”

Broader discussions in the knowledge base note that AI augments developer tasks and changes job roles, though they do not provide the specific 50-70 % figure, highlighting the shift toward AI-assisted development [S83].

External Sources (87)
S1
AI for Social Empowerment_ Driving Change and Inclusion — – Sabina Dewan- Julie Delahanty- Sandhya Ramachandran Arun – Sabina Dewan- Sandhya Ramachandran Arun – Sabina Dewan- S…
S2
Responsible AI for Shared Prosperity — -Co-Moderator- Role/title not specified -Julie Delahanty- President of Canada’s International Development Research Cent…
S3
https://app.faicon.ai/ai-impact-summit-2026/responsible-ai-for-shared-prosperity — you. We’ll now have a small… Changeover in our panellists. If I could ask, if we could have another big round of appla…
S4
https://dig.watch/event/india-ai-impact-summit-2026/responsible-ai-for-shared-prosperity — And I hope the idea is spreading and growing. Thank you. Thank Co-Moderator: you. We’ll now have a small… Changeover…
S5
AI for Social Empowerment_ Driving Change and Inclusion — say, you know, it’s yet to unfold. We don’t know what the impact is and it’s yet to unfold. I believe that that contenti…
S6
AI for Social Empowerment_ Driving Change and Inclusion — So with that, I hope, provocative context setting, I am really grateful. On behalf of the Just Jobs Network, again, with…
S7
AI for Social Empowerment_ Driving Change and Inclusion — -Anurag Behar- Chief Executive Officer of the Azeem Premji Foundation, moderator of the discussion, works in education s…
S8
https://dig.watch/event/india-ai-impact-summit-2026/ai-for-social-empowerment_-driving-change-and-inclusion — Add to that the fact that, like, there’s all these different forces coming that we don’t know, you know, if AI disrupts …
S9
Flexibility 2.0 / Davos 2025 — Kumiko Seto: I’m Kumisato from Forbes Japan, and I’m honored to be your moderator. So now let me introduce our disting…
S10
(Day 1) General Debate – General Assembly, 79th session: morning session — Julius Maada Bio – Sierra Leone: I congratulate His Excellency Philemon Yang on his election as president of the 79th s…
S11
WSIS Action Lines C4 and C7:E-employment: Emerging technologies in the world of work: Addressing challenges through digital skills — This comment challenges the assumption that we can effectively plan for AI’s impact when we don’t yet understand its ful…
S12
World Economic Forum Panel: Sovereignty and Interconnectedness in the Modern Economy — Champagne argues that the speed, scope, and scale of current changes require immediate action rather than waiting. He co…
S13
Future-proofing global tech governance: a bottom-up approach | IGF 2023 Open Forum #44 — Moreover, the analysis delves into the regulation of AI. It argues that human beings are relatively stable over time, wh…
S14
Responsible AI for Shared Prosperity — Thank you. Thanks, everybody, for being here and for the Deputy Prime Minister for welcoming us. We’re incredibly proud …
S15
United Nations High-Level Leaders’ Dialogue — Drake argues that while some jobs may be lost to AI, about 25% of jobs will be transformed rather than eliminated, requi…
S16
Harnessing Collective AI for India’s Social and Economic Development — Kushe Bahl believes that AI will fundamentally reshape jobs rather than just replacing them outright. He suggests this t…
S17
Opening — Balance needed between innovation and regulation
S18
Tokenisation and the Future of Global Finance: A World Economic Forum 2026 Panel Discussion — – Brian Armstrong- François Villaroy de Galhau Legal and regulatory | Economic Regulation and innovation must work tog…
S19
Comprehensive Report: Preventing Jobless Growth in the Age of AI — Economic | Future of work Study of LLMs in call centers showing 14% average increase in productivity, up to 35%. Studie…
S20
AI/Gen AI for the Global Goals — Shea Gopaul: So thank you, Sanda. And like Sandra, I’d like to thank the African Union, as well as Global Compact. i…
S21
Ethical AI_ Keeping Humanity in the Loop While Innovating — Sometimes proactive regulation is necessary to prevent unchangeable negative consequences, rather than only acting after…
S22
Education meets AI — Teaching critical thinking and discerning facts from misinformation is crucial in the digital age. The traditional educa…
S23
AI & Child Rights: Implementing UNICEF Policy Guidance | IGF 2023 WS #469 — In conclusion, global education systems are currently grappling with a learning crisis, with literacy and numeracy level…
S24
How AI Drives Innovation and Economic Growth — The tone was notably optimistic yet pragmatic, described as representing “hope” rather than the “fear” that characterize…
S25
How AI Drives Innovation and Economic Growth — Summary:The speakers show broad agreement on AI’s transformative potential for development but significant disagreements…
S26
How AI Drives Innovation and Economic Growth — The speakers show broad agreement on AI’s transformative potential for development but significant disagreements on impl…
S27
UNSC meeting: Regional arrangements for peace — It can assist decision-making in Security Council with analysis that takes into account divergent perspectives
S28
WS #103 Aligning strategies, protecting critical infrastructure — These key comments shaped the discussion by emphasizing the complex, interconnected nature of cybersecurity challenges. …
S29
7th edition — The digital divide is not an independent phenomenon. It reflects existing broad socio-economic inequalities i…
S30
Laying the foundations for AI governance — ## Societal and Democratic Implications
S31
Figure I: The Global Risks Landscape 2019 — Beyond the economic risks, there are potential political and societal implications. For example, a world of increasingly…
S32
WS #288 An AI Policy Research Roadmap for Evidence-Based AI Policy — Eltjo Poort, Vice President Consulting at CGI in the Netherlands, supported this view: “Regulation does not hamper innov…
S33
Laying the foundations for AI governance — Companies want clear regulation but need to avoid unpredictability and fragmentation across jurisdictions Legal and reg…
S34
Generative AI: Steam Engine of the Fourth Industrial Revolution? — Technology is moving at an incredibly fast pace, and this rapid advancement is seen in various sectors such as AI, semic…
S35
WSIS Action Lines C4 and C7:E-employment: Emerging technologies in the world of work: Addressing challenges through digital skills — This comment challenges the assumption that we can effectively plan for AI’s impact when we don’t yet understand its ful…
S36
HIGH LEVEL LEADERS SESSION IV — This indicates the recognition that companies have a role to play in shaping policies and providing examples of good pra…
S37
(Day 1) General Debate – General Assembly, 79th session: morning session — Julius Maada Bio – Sierra Leone: I congratulate His Excellency Philemon Yang on his election as president of the 79th s…
S38
Economists and Climate Change – Homework Comes First — While waiting for better science – and this better happen soon, lest it become discredited – there is much to be done be…
S39
AI for Social Empowerment_ Driving Change and Inclusion — “We need countries to think about competition policy.”[48]. “We need to definitely think about universal social protecti…
S40
How AI Drives Innovation and Economic Growth — Arguments:Labor market disruption is the biggest concern, especially for entry-level jobs that drive economic developmen…
S41
AI for Social Empowerment_ Driving Change and Inclusion — Effective governance of AI’s labor market effects requires robust institutional infrastructure including regulatory bodi…
S42
Comprehensive Report: AI’s Impact on the Future of Work – Davos 2026 Panel Discussion — Bhan argues that AI’s impact on jobs cannot be viewed in isolation but must be considered alongside broader economic dis…
S43
Comprehensive Report: China’s AI Plus Economy Initiative – A Strategic Discussion on Artificial Intelligence Development and Implementation — The discussion highlighted AI’s integration across multiple business functions and industries. Dowson Tong described how…
S44
Trade regulations in the digital environment: Is there a gender component? (UNCTAD) — In conclusion, the analysis reinforces the potential of digitalisation and emerging technologies, such as artificial int…
S45
Empowering Workers in the Age of AI — Focus on augmentation and transformation of existing roles rather than wholesale job replacement
S46
Shaping the Future AI Strategies for Jobs and Economic Development — This discussion focused on AI-driven strategies for workforce and economic growth, examining how artificial intelligence…
S47
Shaping the Future AI Strategies for Jobs and Economic Development — A central theme emerged around collaboration rather than displacement of human workers. Panelists emphasized that AI sho…
S48
AI (and) education: Convergences between Chinese and European pedagogical practices — **Norman Sze** (former Chair of Deloitte China) provided industry perspective on AI’s impact on professional work, notin…
S49
Comprehensive Report: Preventing Jobless Growth in the Age of AI — Historically speaking, again, the technology oftentimes will eliminate or reduce the number of jobs. It might increase p…
S50
From India to the Global South_ Advancing Social Impact with AI — This comment directly addresses one of the most anxiety-provoking aspects of AI adoption – job displacement. By framing …
S51
AI for Social Empowerment_ Driving Change and Inclusion — Sabina points out that AI is causing major disruptions that are already leading companies to lay off workers. Private re…
S52
Comprehensive Report: Preventing Jobless Growth in the Age of AI — Economic | Future of work Study of LLMs in call centers showing 14% average increase in productivity, up to 35%. Studie…
S53
Reinventing Digital Inclusion / DAVOS 2025 — Generative AI is expected to create exponential returns in productivity, particularly in enterprise systems. However, th…
S54
AI/Gen AI for the Global Goals — Shea Gopaul: So thank you, Sanda. And like Sandra, I’d like to thank the African Union, as well as Global Compact. i…
S55
Engineering Accountable AI Agents in a Global Arms Race: A Panel Discussion Report — An audience member articulated what they described as “overwhelming pessimism” among young people about career prospects…
S56
AI for Social Empowerment_ Driving Change and Inclusion — Effective governance of AI’s labor market effects requires robust institutional infrastructure including regulatory bodi…
S57
Ethical AI_ Keeping Humanity in the Loop While Innovating — Sometimes proactive regulation is necessary to prevent unchangeable negative consequences, rather than only acting after…
S58
Global Data Partnership Against Forced Labour: A Comprehensive Discussion Summary — Understanding what policy changes are needed will help drive systemic prevention rather than just remediation of forced …
S59
Education meets AI — Teaching critical thinking and discerning facts from misinformation is crucial in the digital age. The traditional educa…
S60
AI & Child Rights: Implementing UNICEF Policy Guidance | IGF 2023 WS #469 — Global education systems are currently facing a learning crisis, with many schools falling short of literacy and numerac…
S61
Discussion Report: AI Implementation and Global Accessibility — -Educational transformation is essential: Current educational systems must change to prepare people for unknown future j…
S62
How AI Drives Innovation and Economic Growth — The tone was notably optimistic yet pragmatic, described as representing “hope” rather than the “fear” that characterize…
S63
How AI Drives Innovation and Economic Growth — Summary:The speakers show broad agreement on AI’s transformative potential for development but significant disagreements…
S64
How AI Drives Innovation and Economic Growth — The speakers show broad agreement on AI’s transformative potential for development but significant disagreements on impl…
S65
Redrawing the Geography of Jobs / Davos 2025 — Idekoba offered a more cautious perspective, noting that while remote work enables access to global talent, it is not ap…
S66
When Code and Creativity Collide: AI’s Transformation of Music and Creative Expression — Moderate to significant disagreements with important implications. The speakers’ different perspectives on AI’s current …
S67
Laying the foundations for AI governance — ## Societal and Democratic Implications
S68
From principles to practice: Governing advanced AI in action — **Systemic Societal Risks**: Broader societal impacts, particularly profound labor market disruption that could create s…
S69
Disinformation and Misinformation in Online Content and its Impact on Digital Trust — The implications of this shift extend beyond individual decision-making to broader societal trust in information systems…
S70
Report by the Commission on the Measurement of Economic Performance and Social Progress — 81. The challenges posed by this variety of health measures are not confined to crosscountry comparisons b…
S71
review article — There are many working definitions of global health. Some emphasize certain types of health problems (e.g., co…
S72
Impact & the Role of AI How Artificial Intelligence Is Changing Everything — Long-term employment impacts remain uncertain despite current stability in hiring patterns
S73
Labour market remains stable despite rapid AI adoption — Surveys show persistent anxiety aboutAI-driven job losses. Nearly three years after ChatGPT’s launch, labour data indica…
S74
Closing remarks – Charting the path forward — Bouverot argues for comprehensive inclusion in AI governance discussions, extending beyond just governmental participati…
S75
AI for Democracy_ Reimagining Governance in the Age of Intelligence — at the AI Summit here in Delhi. I am deeply honored to be here today in the presence of the honorable speaker to address…
S76
Inclusive AI Starts with People Not Just Algorithms — Artificial intelligence | Social and economic development
S77
GEO-politics/economics/emotions in the AI era — First is the role of tech companies, powerful actors with unprecedented influence across all three realms: geopolitics, ge…
S78
Nvidia leads tech companies in record-breaking market achievements in 2024 as AI fuels growth — Nvidia has emerged as the standout performer in the global market capitalisation race for 2024, driven by a surge in deman…
S79
Apple loses top spot as Nvidia takes market lead — Nvidia overtook Apple on Friday to become the world’s most valuable company, driven by soaring demand for its AI chips. Th…
S80
(Interactive Dialogue 4) Summit of the Future – General Assembly, 79th session — Tanzania: Thank you, Mr. Chair, all protocol observed. We gather at a crucial moment in history, faced with the chall…
S81
Planetary Limits of AI: Governance for Just Digitalisation? | IGF 2023 Open Forum #37 — A lot of investment is going into the development of technologies
S82
The Innovation Beneath AI: The US-India Partnership powering the AI Era — Thank you for having us and putting this together. I know that we have more people here than Sundar has at his keynote. …
S83
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Panel Discussion Moderator Sidharth Madaan — This comment is exceptionally insightful because it cuts through the doomsday rhetoric with concrete data, reframing the…
S84
https://app.faicon.ai/ai-impact-summit-2026/ai-for-social-empowerment_-driving-change-and-inclusion — And you can’t run away from it. It’s happening. You can’t run away from it. Job losses will happen. We’ve got to figure …
S85
How AI Is Transforming India's Workforce for Global Competitivene — Are we having the same conversations? Are we facing the same kind of issues? I think what I’ve just heard from my fellow…
S86
Skilling and Education in AI — So, good morning. AI is an opportunity and an enabler. So let me begin with a few words about NSDC itself. So this is a …
S87
Skilling and Education in AI — We have a lot of different things going on. We have a lot of different things going is to create close to around 22 comp…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Sabina Dewan
5 arguments · 146 words per minute · 2588 words · 1062 seconds
Argument 1
Evidence of AI‑driven layoffs and gig‑economy harms
EXPLANATION
Sabina points out that AI is already causing large‑scale layoffs in major tech firms and that algorithmic management in gig platforms removes workers’ ability to seek redress. She stresses that these trends are visible now rather than speculative future scenarios.
EVIDENCE
She cites recent mass layoffs across big-tech companies, noting that while firms attribute cuts to multiple factors, AI is a significant driver ([152-158]). She also describes how gig workers are managed by AI algorithms that can instantly remove them from platforms without any grievance mechanism, highlighting a new form of labor precarity ([160-166]).
MAJOR DISCUSSION POINT
Evidence of AI‑driven layoffs and gig‑economy harms
DISAGREED WITH
Sandhya Ramachandran Arun
Argument 2
Need for competition policy, antitrust, tax reform, universal social protection, and skill systems
EXPLANATION
Sabina argues that to mitigate AI‑induced labor disruptions, governments must overhaul competition and antitrust rules, redesign tax structures, and expand universal social protection and skill‑development programmes. These measures are presented as urgent, systemic responses rather than optional tweaks.
EVIDENCE
She outlines a suite of policy levers, including competition policy, antitrust, various tax tools, labor regulations, universal social protection, and comprehensive skill systems, required to address AI’s impact on work ([320-334]).
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
External sources stress the urgency of comprehensive policy measures, including competition, antitrust, tax, and labor-law reforms and universal social protection, to mitigate AI’s labor impacts [S1] and [S5].
MAJOR DISCUSSION POINT
Need for competition policy, antitrust, tax reform, universal social protection, and skill systems
AGREED WITH
Julie Delahanty, Sandhya Ramachandran Arun
DISAGREED WITH
Julie Delahanty
Argument 3
Platform‑based algorithmic management in the gig economy alters labor conditions
EXPLANATION
Sabina highlights that AI‑powered platforms now control gig workers through algorithmic management, eliminating traditional avenues for redress and creating a new, precarious form of employment. This shift raises concerns about workers’ rights and protections.
EVIDENCE
She explains that gig workers are subject to AI algorithms that can instantly dismiss them from platforms without any mechanism for redress, underscoring a structural change in labor relations ([160-166]).
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The gig-platform algorithmic management model and its lack of grievance mechanisms are documented in the literature, confirming the shift in labor conditions [S1] and [S5].
MAJOR DISCUSSION POINT
Platform‑based algorithmic management in the gig economy alters labor conditions
Argument 4
Cognitive decline and mental‑health trends risk reducing human work capacity, heightening replacement risk
EXPLANATION
Sabina warns that emerging evidence of cognitive decline, depression, and anxiety among younger populations may diminish their ability to compete with AI, increasing the likelihood of job displacement. She links these health trends directly to labor market vulnerability.
EVIDENCE
She references recent research showing the first generation experiencing measurable cognitive decline, higher rates of depression and anxiety, and argues that such declines could make workers more replaceable by efficient machines ([313-317]).
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Recent research showing measurable cognitive decline, higher depression and anxiety rates among the current generation is cited as a factor that may increase vulnerability to AI displacement [S5].
MAJOR DISCUSSION POINT
Cognitive decline and mental‑health trends risk reducing human work capacity, heightening replacement risk
Argument 5
Immediate policy action is required; waiting for more evidence will be too late
EXPLANATION
Sabina stresses that the AI‑driven labor shock is already unfolding, so postponing regulation until more data are gathered would miss the window for effective intervention. She calls for swift, decisive policy measures now.
EVIDENCE
She notes that waiting for empirical evidence would be “way too late” given the already observable impacts of AI on jobs and urges immediate action ([173-176]).
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Analyses argue that the speed and scale of AI-driven change demand swift policy responses, warning that postponing action until more data are available could miss the window for effective intervention [S11] and [S12].
MAJOR DISCUSSION POINT
Immediate policy action is required; waiting for more evidence will be too late
DISAGREED WITH
Julie Delahanty
Julie Delahanty
4 arguments · 150 words per minute · 1060 words · 422 seconds
Argument 1
Historical tech disruptions show outcomes uncertain and require institutional insight
EXPLANATION
Julie draws a parallel between the early days of personal computers and today’s AI, noting that the social and labor impacts of new technologies are hard to predict. She argues that strong institutions are essential to navigate such uncertainty.
EVIDENCE
She recounts how early expectations about home computers were wildly inaccurate, leading to unforeseen job losses, and emphasizes that without robust regulatory, labor, and research institutions, effective governance of AI-driven labor changes is difficult ([121-128]; [129-132]).
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Historical examinations of past technological shocks highlight the unpredictability of outcomes and the need for strong institutions to navigate such uncertainty [S13].
MAJOR DISCUSSION POINT
Historical tech disruptions show outcomes uncertain and require institutional insight
Argument 2
Strong regulatory and labor institutions, human‑centric co‑creation, and evidence from AI4D research are essential
EXPLANATION
Julie asserts that effective AI governance depends on strong labor and regulatory bodies, co‑creating technologies with workers, and gathering solid evidence on AI’s labor impacts. She cites the AI4D program’s data‑collection efforts as a model for informing policy.
EVIDENCE
She highlights the need for strong institutions to understand job losses and biases, and describes AI4D’s large-scale research program that collects household, firm, and worker data across sub-Saharan Africa to track AI’s real-world labor effects ([130-138]).
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Calls for robust regulatory and labor bodies, as well as the AI4D programme’s large-scale data collection to inform policy, are documented in the sources [S5] and [S14].
MAJOR DISCUSSION POINT
Strong regulatory and labor institutions, human‑centric co‑creation, and evidence from AI4D research are essential
AGREED WITH
Sabina Dewan, Sandhya Ramachandran Arun
DISAGREED WITH
Sabina Dewan
Argument 3
AI reshapes ways of working across sectors, not just causing job loss
EXPLANATION
Julie emphasizes that AI is prompting a broader transformation in work practices, requiring new organisational models and skill sets, rather than merely eliminating jobs. She points to the need for rethinking work itself.
EVIDENCE
She notes that the biggest issue is rethinking how we work, with AI disrupting not only employment numbers but also the nature of work and workplace practices ([341-345]).
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Analyses indicate that AI will transform roughly a quarter of jobs rather than eliminate them, reshaping work practices across multiple sectors [S15] and [S16].
MAJOR DISCUSSION POINT
AI reshapes ways of working across sectors, not just causing job loss
AGREED WITH
Sandhya Ramachandran Arun
Argument 4
Ongoing evidence gathering and learning are vital to balance innovation with regulation
EXPLANATION
Julie stresses that continuous data collection and learning are crucial for crafting regulations that protect workers while still fostering AI innovation. She underscores the role of evidence‑based policymaking.
EVIDENCE
She states that good evidence matters for policy, referencing tools like the Global Index on Responsible AI that provide comparable data for 138 countries, and notes that ongoing learning helps balance innovation with safety ([241-246]).
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The importance of continuous data collection tools such as the Global Index on Responsible AI for evidence-based policymaking is highlighted, alongside the need to balance innovation with safety [S5] and [S17].
MAJOR DISCUSSION POINT
Ongoing evidence gathering and learning are vital to balance innovation with regulation
AGREED WITH
Sabina Dewan, Sandhya Ramachandran Arun
DISAGREED WITH
Sabina Dewan
Sandhya Ramachandran Arun
5 arguments · 158 words per minute · 1684 words · 636 seconds
Argument 1
Automation of coding creates “AI‑manager” roles rather than pure displacement
EXPLANATION
Sandhya argues that while AI can automate the act of coding, human oversight of design, architecture, and security remains essential, turning junior developers into managers of AI tools rather than eliminating their jobs. This reframes displacement as role evolution.
EVIDENCE
She explains that coding can be fully handed to an AI agent, but success depends on human oversight of design, engineering, architecture, and security, turning junior developers into AI managers ([84-88]).
MAJOR DISCUSSION POINT
Automation of coding creates “AI‑manager” roles rather than pure displacement
AGREED WITH
Julie Delahanty, Anurag Behar
DISAGREED WITH
Anurag Behar
Argument 2
Policy guardrails, leadership, and governance mechanisms must accompany AI development
EXPLANATION
Sandhya stresses that AI’s rapid evolution requires clear policy guardrails, strong leadership, and embedded governance mechanisms at all levels to ensure responsible deployment. She likens this to managing other high‑risk technologies like nuclear energy.
EVIDENCE
She outlines the need for policy, leadership, and governance mechanisms, comparing AI governance to nuclear safety and emphasizing that policy, guidelines, and leadership are essential to steer AI responsibly ([269-293]).
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The literature stresses the necessity of clear policy guardrails, strong leadership and embedded governance mechanisms for AI, likening them to nuclear-safety frameworks [S5].
MAJOR DISCUSSION POINT
Policy guardrails, leadership, and governance mechanisms must accompany AI development
AGREED WITH
Julie Delahanty, Sabina Dewan
Argument 3
AI augments marketing, finance, and healthcare while humans retain strategic and oversight functions
EXPLANATION
Sandhya describes how AI can take over routine processing in marketing, finance, and healthcare, yet strategic planning, execution oversight, and decision‑making remain human responsibilities. This illustrates a collaborative rather than purely substitutive role for AI.
EVIDENCE
She notes that AI offloads much of the operational work in marketing while strategic planning and ROI analysis stay with humans ([93-99]), similarly in finance AI handles processing but humans provide wisdom for interpretation and alignment with values ([97-98]), and in healthcare AI augments clinicians with intelligent decision-making support ([104-106]).
MAJOR DISCUSSION POINT
AI augments marketing, finance, and healthcare while humans retain strategic and oversight functions
AGREED WITH
Julie Delahanty
Argument 4
Human creativity, wisdom, and vision are irreplaceable and must guide AI deployment
EXPLANATION
Sandhya asserts that core human attributes—creativity, wisdom, foresight—cannot be replicated by AI and should steer technology adoption. She links these qualities to the need for human‑centric policy and governance.
EVIDENCE
She emphasizes that creativity, wisdom, and vision are central to technology disruption and that human ingenuity will continue to drive AI advances, arguing that these traits are essential for responsible AI deployment ([280-285]).
MAJOR DISCUSSION POINT
Human creativity, wisdom, and vision are irreplaceable and must guide AI deployment
Argument 5
Watching and waiting is not an option; proactive leadership and re‑imagined training are needed
EXPLANATION
Sandhya argues that passive observation of AI’s impact is unacceptable; instead, decisive leadership, policy integration, and revamped workforce training are required to stay ahead of rapid change. She calls for immediate action rather than doom‑laden paralysis.
EVIDENCE
She states that waiting is not viable, urging the need for good leaders, policy at all levels, platform-embedded policy, and extensive re-imagining of work and training ([355-360]).
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Commentary underscores that the rapid pace of AI change requires immediate, proactive leadership and training reforms, warning that passive observation will be ineffective [S12] and [S11].
MAJOR DISCUSSION POINT
Watching and waiting is not an option; proactive leadership and re‑imagined training are needed
AGREED WITH
Sabina Dewan, Julie Delahanty
Anurag Behar
5 arguments · 152 words per minute · 2047 words · 807 seconds
Argument 1
Coding efficiency could lead to inevitable IT job losses
EXPLANATION
Anurag questions whether AI tools that automate 50‑70 % of coding will inevitably result in reduced hiring and job losses in the IT sector. He frames this as a straightforward, commonsense concern about labor displacement.
EVIDENCE
He asks whether the ease of coding through AI, which can handle a large share of programming tasks, will inevitably cause IT job losses and reduced hiring ([66-70]).
MAJOR DISCUSSION POINT
Coding efficiency could lead to inevitable IT job losses
DISAGREED WITH
Sandhya Ramachandran Arun
Argument 2
Governments must design labor‑market safeguards and learn from cross‑country experiences
EXPLANATION
Anurag calls on governments to create labor‑market protections that mitigate AI‑driven disruptions, drawing on lessons from multiple countries. He stresses the need for policies that balance AI benefits with worker security.
EVIDENCE
He asks Julie how governments can responsibly govern AI to minimise labour-market disruption, seeking cross-country lessons and safeguards ([112-118]; [227-232]).
MAJOR DISCUSSION POINT
Governments must design labor‑market safeguards and learn from cross‑country experiences
Argument 3
Automation threatens design, marketing, and research‑assistant roles beyond IT
EXPLANATION
Anurag expands the concern beyond coding, noting that AI is also making design, marketing, and research‑assistant tasks easier, potentially displacing workers in those fields as well. He asks whether job loss is inevitable across these sectors.
EVIDENCE
He raises the point that AI tools are making coding easier and questions if this will lead to IT job loss, then extends the question to design, marketing, and research-assistant roles ([68-73]).
MAJOR DISCUSSION POINT
Automation threatens design, marketing, and research‑assistant roles beyond IT
Argument 4
Human empathy and understanding are the differentiators that technology cannot replicate
EXPLANATION
Anurag highlights that qualities such as empathy, care, and human understanding are uniquely human and will remain essential even as AI advances. He suggests these traits will protect certain jobs from automation.
EVIDENCE
He references Sandhya’s comment about the importance of human care and wisdom, and later reiterates that jobs requiring wisdom, empathy, and human understanding are the hardest to replace ([265-268]; [398-409]).
MAJOR DISCUSSION POINT
Human empathy and understanding are the differentiators that technology cannot replicate
AGREED WITH
Sandhya Ramachandran Arun, Julie Delahanty
Argument 5
Balancing optimism (boomer) with caution (doomer) underscores the need for swift, decisive measures
EXPLANATION
Anurag describes his internal conflict between optimism about technology (boomer) and caution about its risks (doomer), concluding that this tension calls for rapid, decisive policy action to manage AI’s impact on work.
EVIDENCE
He explicitly labels himself as a “boomer” and a “doomer,” explaining that his role requires both optimism and caution, which together point to the need for swift measures ([370-373]).
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Discussions on the tension between fostering innovation and ensuring regulation stress the need for rapid, decisive policy action to manage AI’s impact [S12] and [S17].
MAJOR DISCUSSION POINT
Balancing optimism (boomer) with caution (doomer) underscores the need for swift, decisive measures
Agreements
Agreement Points
AI will transform work, creating new roles such as AI‑manager and retaining human strategic, creative, and empathetic functions rather than causing wholesale displacement
Speakers: Sandhya Ramachandran Arun, Julie Delahanty, Anurag Behar
Automation of coding creates “AI‑manager” roles rather than pure displacement
AI reshapes ways of working across sectors, not just causing job loss
Human empathy and understanding are the differentiators that technology cannot replicate
All three speakers agree that AI will change job roles – coding can be handed to AI but requires human oversight (AI-manager) [84-88], AI will augment marketing, finance and healthcare while strategic planning stays human [93-99][104-106], and uniquely human qualities such as empathy and wisdom will remain essential [265-268].
POLICY CONTEXT (KNOWLEDGE BASE)
This view aligns with research emphasizing AI as a tool for augmenting existing roles rather than wholesale replacement, as highlighted in discussions on worker empowerment and collaboration-focused AI strategies [S45][S47][S48][S43].
Immediate policy action is required; waiting for more evidence will be too late
Speakers: Sabina Dewan, Sandhya Ramachandran Arun, Julie Delahanty
Immediate policy action is required; waiting for more evidence will be too late
Watching and waiting is not an option; proactive leadership and re‑imagined training are needed
Ongoing evidence gathering and learning are vital to balance innovation with regulation
Sabina stresses that waiting for empirical evidence would be “way too late” [173-176], Sandhya says watching and waiting is not viable and calls for proactive leadership and training [355-360], while Julie emphasizes the need for continuous evidence to inform regulation [241-246].
POLICY CONTEXT (KNOWLEDGE BASE)
The call for immediate policy action reflects the recognized tension between rapid AI advances and the need for timely regulation, a theme noted in multiple policy briefs urging guardrails despite limited evidence [S35][S34][S32][S33][S36].
Strong institutions, research, and evidence are essential for responsible AI governance
Speakers: Julie Delahanty, Sabina Dewan, Sandhya Ramachandran Arun
Strong regulatory and labor institutions, human‑centric co‑creation, and evidence from AI4D research are essential
Need for competition policy, antitrust, tax reform, universal social protection, and skill systems
Policy guardrails, leadership, and governance mechanisms must accompany AI development
Julie highlights the need for robust regulatory and labor institutions and AI4D research data [130-138], Sabina calls for competition policy, antitrust, tax and social protection reforms [320-334], and Sandhya stresses policy guardrails and leadership for AI [269-293].
POLICY CONTEXT (KNOWLEDGE BASE)
Emphasis on strong institutions and evidence-based governance mirrors calls for robust regulatory bodies and research capacity to oversee AI impacts, as outlined in governance frameworks for AI labor effects [S41][S36].
Comprehensive skill development and capacity building are crucial to address AI‑driven labour changes
Speakers: Sabina Dewan, Julie Delahanty, Sandhya Ramachandran Arun
Need for competition policy, antitrust, tax reform, universal social protection, and skill systems
Ongoing evidence gathering and learning are vital to balance innovation with regulation
Watching and waiting is not an option; proactive leadership and re‑imagined training are needed
Sabina emphasizes the urgency of skill systems alongside other policy levers [324-334], Julie notes that evidence-based policy includes skill development as part of labour safeguards [322-326], and Sandhya calls for re-imagined training and workforce up-skilling [363-364].
POLICY CONTEXT (KNOWLEDGE BASE)
The importance of skill development is echoed in panels on AI-driven workforce strategies that stress upskilling and capacity building to harness AI benefits [S46][S45].
AI augments sectors such as marketing, finance and healthcare while humans retain strategic and oversight roles
Speakers: Sandhya Ramachandran Arun, Julie Delahanty
AI augments marketing, finance, and healthcare while humans retain strategic and oversight functions
AI reshapes ways of working across sectors, not just causing job loss
Sandhya explains that AI takes over routine processing in marketing, finance and healthcare but strategic planning, execution oversight and wisdom remain human responsibilities [93-99][104-106], and Julie stresses that AI is prompting a broader transformation in work practices across sectors [341-345].
POLICY CONTEXT (KNOWLEDGE BASE)
Examples of AI augmenting marketing, finance and healthcare while humans keep strategic oversight are documented in case studies of AI integration across business functions and augmentation-focused policy recommendations [S45][S47][S43].
Similar Viewpoints
Both argue that robust institutional frameworks and evidence‑based policy are essential to manage AI’s labour impacts [320-334][130-138].
Speakers: Sabina Dewan, Julie Delahanty
Need for competition policy, antitrust, tax reform, universal social protection, and skill systems
Strong regulatory and labor institutions, human‑centric co‑creation, and evidence from AI4D research are essential
Both emphasize that uniquely human qualities such as creativity, wisdom and empathy cannot be replaced by AI and should guide its deployment [280-285][265-268].
Speakers: Sandhya Ramachandran Arun, Anurag Behar
Human creativity, wisdom, and vision are irreplaceable and must guide AI deployment
Human empathy and understanding are the differentiators that technology cannot replicate
Both see AI as reshaping work practices and augmenting existing roles rather than simply eliminating jobs [93-99][104-106][341-345].
Speakers: Sandhya Ramachandran Arun, Julie Delahanty
AI augments marketing, finance, and healthcare while humans retain strategic and oversight functions
AI reshapes ways of working across sectors, not just causing job loss
Unexpected Consensus
Urgent need for regulation and policy guardrails despite differing professional backgrounds (tech vs labour research)
Speakers: Sabina Dewan, Sandhya Ramachandran Arun
Immediate policy action is required
Watching and waiting is not an option; proactive leadership and re‑imagined training are needed
Although Sandhya, as a technology executive, might be expected to adopt a more optimistic stance, she aligns with Sabina’s call for immediate regulatory action, highlighting a shared urgency across sectors [173-176][355-360].
POLICY CONTEXT (KNOWLEDGE BASE)
Urgent need for clear regulatory guardrails is supported by industry and policy analyses that argue regulation can accelerate innovation and reduce uncertainty, underscoring the demand for coordinated policy frameworks [S32][S33][S34][S36][S39].
Overall Assessment

The panel broadly concurs that AI will significantly reshape labour markets, creating new roles that require human oversight, creativity and empathy, while also posing risks of displacement and precarity. There is strong consensus on the necessity of immediate, evidence‑based policy action, robust institutions, and extensive skill development to manage these changes.

High consensus on the need for proactive governance, institutional strength and capacity building; moderate consensus on the extent of job displacement versus augmentation.

Differences
Different Viewpoints
Extent of AI‑driven job losses versus AI‑augmented role evolution
Speakers: Sabina Dewan, Sandhya Ramachandran Arun
Evidence of AI‑driven layoffs and gig‑economy harms
Automation of coding creates “AI‑manager” roles rather than pure displacement
Sabina points to current mass layoffs in big-tech firms and to algorithmic management of gig workers as clear evidence that AI is already causing large-scale job losses and new forms of precarity ([152-158][160-166]). Sandhya counters that in her experience the technology is not displacing workers; instead it creates new oversight roles (e.g., junior developers become managers of AI tools) and augments functions in marketing, finance and healthcare while strategic and decision-making tasks remain with humans ([60-63][84-88][93-99]).
POLICY CONTEXT (KNOWLEDGE BASE)
Debate over the scale of AI-induced job losses versus role evolution is reflected in literature on labor market disruption, which notes both potential displacement of entry-level jobs and historical patterns of job transformation [S40][S49][S45][S47].
Urgency of regulatory and policy action on AI‑induced labor disruption
Speakers: Sabina Dewan, Julie Delahanty
Immediate policy action is required; waiting for more evidence will be too late
Ongoing evidence gathering and learning are vital to balance innovation with regulation
Sabina argues that waiting for more empirical data would miss the window for effective intervention, calling for swift, decisive policy measures now ([173-176]). Julie stresses that the field is still learning, that robust evidence and tools like the Global Index on Responsible AI are needed before codifying regulations, and that a balance between innovation and safety must be continuously refined ([241-246]).
POLICY CONTEXT (KNOWLEDGE BASE)
Disagreement on regulatory urgency corresponds to the broader policy discussion about acting now versus waiting for more data, a tension highlighted in recent AI policy roadmaps [S35][S34][S32].
Whether AI‑driven coding efficiency will inevitably eliminate IT jobs
Speakers: Anurag Behar, Sandhya Ramachandran Arun
Coding efficiency could lead to inevitable IT job losses
Automation of coding creates “AI‑manager” roles rather than pure displacement
Anurag asks if the ability of AI tools to perform 50-70 % of coding will inevitably reduce hiring and lead to job losses in the IT sector ([66-70]). Sandhya replies that while AI can write code, success still depends on human oversight of design, architecture and security, turning junior developers into managers of AI rather than eliminating their roles ([84-88]).
POLICY CONTEXT (KNOWLEDGE BASE)
The question of coding efficiency eliminating IT roles is addressed by industry observations that automation reshapes rather than eradicates technical work, shifting focus to higher-level strategic tasks [S48][S49].
Scope and focus of policy levers to mitigate AI’s labor impact
Speakers: Sabina Dewan, Julie Delahanty
Need for competition policy, antitrust, tax reform, universal social protection, and skill systems
Strong regulatory and labor institutions, human‑centric co‑creation, and evidence from AI4D research are essential
Sabina calls for a broad suite of levers, including competition policy, antitrust, varied tax tools, universal social protection, and comprehensive skill systems, to address AI-driven disruptions ([320-334]). Julie emphasizes the necessity of strong regulatory and labor institutions, co-creation with workers, and large-scale evidence-gathering (e.g., AI4D research) to inform policy, without specifying the same set of fiscal or competition measures ([130-138]).
POLICY CONTEXT (KNOWLEDGE BASE)
Discussion of policy levers aligns with proposals to adapt competition, labor, tax and social protection policies to AI’s labor impact, as outlined in comprehensive AI governance recommendations [S39][S41].
Unexpected Differences
Anurag’s concern about inevitable IT job losses versus Sandhya’s optimistic view of no displacement
Speakers: Anurag Behar, Sandhya Ramachandran Arun
Coding efficiency could lead to inevitable IT job losses
Automation of coding creates “AI‑manager” roles rather than pure displacement
Given Anurag’s role as a foundation leader with a financial stake in a major tech company, his expectation that AI will inevitably cut IT jobs ([66-70]) contrasts sharply with Sandhya’s claim that the sector is not experiencing displacement and that new AI-manager roles are emerging ([84-88]). This tension was not anticipated based on their institutional positions.
POLICY CONTEXT (KNOWLEDGE BASE)
The contrasting views mirror ongoing debates where some leaders frame job displacement as a misperception and emphasize augmentation, while others warn of potential losses, as seen in recent policy dialogues [S50][S45][S49].
Overall Assessment

The panel shows clear divisions on how severe AI‑driven job losses are and how quickly policy should respond. Sabina stresses immediate, wide‑ranging reforms backed by observable layoffs, while Sandhya and Julie adopt a more nuanced view that AI primarily augments work and that robust institutions and evidence‑gathering should guide policy. Anurag’s questions highlight practical concerns about coding automation, further exposing the split between urgency and optimism.

High – The speakers diverge on both the magnitude of labor disruption and the pace and nature of policy response, which could hinder coordinated action on AI governance and labour protection. Consensus exists on the need for institutions and human‑centric safeguards, but disagreement on urgency and specific levers may delay effective interventions.

Partial Agreements
All three agree that AI’s impact on labour markets requires strong institutions, governance frameworks and proactive policy. Sabina stresses urgent systemic reforms, Julie highlights the need for robust institutions and evidence‑based policy, and Sandhya calls for policy guardrails, leadership and embedded governance mechanisms ([130-138][269-293][173-176]). The divergence lies in the emphasis on urgency versus a more measured, evidence‑driven approach.
Speakers: Sabina Dewan, Julie Delahanty, Sandhya Ramachandran Arun
Sabina Dewan: evidence of AI‑driven layoffs and gig‑economy harms. Julie Delahanty: strong regulatory and labor institutions, human‑centric co‑creation, and evidence from AI4D research are essential. Sandhya Ramachandran Arun: policy guardrails, leadership, and governance mechanisms must accompany AI development.
Takeaways
Key takeaways
AI is already causing efficiency gains that translate into workforce reductions, especially in the gig economy and large tech firms.
Automation of coding and other tasks will reshape, not eliminate, IT roles, creating new ‘AI‑manager’ or oversight positions.
Across sectors (marketing, finance, healthcare) AI augments human work; strategic, creative, and supervisory functions remain human‑centric.
The impact on labor markets is urgent; waiting for more evidence will likely be too late.
Effective governance requires strong labor, competition, antitrust, and tax institutions, universal social protection, and robust skill‑development systems.
Human creativity, wisdom, empathy, and values must guide AI deployment; co‑creation with workers and communities is essential.
Evidence‑based tools such as the Global Index on Responsible AI can help governments design appropriate policies.
The challenges are especially acute in the Global South, where formal employment is scarce and precarity is high.
Resolutions and action items
Develop and implement policy guardrails (competition policy, antitrust, tax reforms, universal social protection) to mitigate AI‑driven labor disruptions.
Invest in and strengthen labor‑market institutions and research ecosystems to monitor AI’s impact in real time.
Adopt human‑centric co‑creation processes for AI systems, involving workers, employers, and communities.
Create and scale up skill‑development and reskilling programs that align with the evolving ‘AI‑manager’ roles, especially for low‑skill workers.
Utilize the Global Index on Responsible AI to benchmark and guide country‑level regulatory actions.
Encourage companies (e.g., Wipro) to continue internal learning modules and role‑persona redesigns that incorporate AI oversight.
Unresolved issues
Specific design of tax and competition policies that can effectively capture AI‑generated gains and redistribute them.
How to provide universal social protection and health coverage for the large informal and self‑employed workforce in India and similar contexts.
Concrete mechanisms for upskilling workers with minimal formal education or literacy to participate in an AI‑driven economy.
Detailed regulatory frameworks for algorithmic management in the gig economy and mechanisms for worker redress.
Balancing rapid AI innovation with the need for evidence‑based regulation without stifling growth.
Addressing the reported cognitive decline and mental‑health impacts linked to AI‑mediated work and education.
Suggested compromises
Recognize that AI will both displace certain tasks and create new oversight/creative roles, balancing optimism about new opportunities with caution about job losses.
Adopt a proactive but evidence‑informed policy approach: act now on known risks while continuing to gather data to refine regulations.
Encourage companies to redesign hiring criteria (learnability, adaptability) rather than eliminating hiring, thereby mitigating displacement while leveraging AI efficiencies.
Promote a ‘boomer‑doomer’ balance: combine forward‑looking technological optimism with realistic preparation for social and labor impacts.
Thought Provoking Comments
When you talk to companies privately, they will own up to anywhere between 30 % to 40 % time‑saving, which then translates into significant workforce cuts. AI systems are enabling surveillance, influencing who gets work, and grossly exacerbating inequality.
She reveals a hidden corporate narrative that contradicts public statements, highlighting the scale of efficiency gains and their direct link to job losses and inequality, thereby framing AI as an urgent labor‑market threat.
Set the tone for the whole panel, prompting the moderator to ask about regulation and prompting other speakers (especially Julie and Sandhya) to address the need for proactive policy rather than waiting for evidence.
Speaker: Sabina Dewan
Coding can be completely handed off to an AI agent, but the success of that code depends on a human overseeing design, architecture, security and delegating the work – turning junior developers into managers of AI rather than being displaced.
She reframes the narrative of automation from outright job loss to a transformation of job roles, introducing the concept of ‘AI‑managed’ work and emphasizing the continued need for human judgment.
Shifted the conversation from doom‑laden predictions to a more nuanced view of job redesign, leading Anurag to probe the ‘human and wisdom’ aspect and prompting Julie to discuss skill development and governance.
Speaker: Sandhya Ramachandran Arun
When we think about new technologies, we must make it very human‑centric – co‑creating AI with workers, communities and employers so we can enhance job quality and productivity rather than increase inequality.
She introduces the principle of participatory design and human‑centric AI, moving the debate from technical possibilities to concrete governance and inclusion strategies.
Guided the discussion toward institutional responsibilities, influencing Sabina’s later call for competition policy, tax reforms and universal social protection, and setting up the later mention of the Global Index on Responsible AI.
Speaker: Julie Delahanty
We need to look beyond the quantity of jobs lost or created and examine the impact on the quality of work – for example, algorithmic management in the gig economy leaves workers with no redress when platforms make decisions.
She expands the analysis from headcount to job quality, highlighting a structural issue in platform economies that is often overlooked in AI debates.
Prompted deeper discussion on labor rights and regulatory gaps, reinforcing Julie’s point about the need for strong labor institutions and influencing Sandhya’s later remarks about policy and governance.
Speaker: Sabina Dewan
In India only about 10 % of employment is formal, so a disruption to formal jobs affects a huge proportion of the economy because those jobs support informal sectors, housing, consumption, etc.
She turns a seemingly dismissive statistic into a powerful argument about systemic vulnerability and cascading economic effects, underscoring why AI impacts matter even in largely informal economies.
Created a turning point that deepened the conversation about regional disparities, leading Anurag to ask for cross‑country lessons and prompting Julie to cite the Global Index as a tool for emerging economies.
Speaker: Sabina Dewan (response to Anurag’s tongue‑in‑cheek remark)
We have the AI‑Global Index on Responsible AI that covers 138 countries and includes a dedicated focus on labor protection and the right to work, providing comparable data for policymakers.
She introduces a concrete, evidence‑based instrument that can bridge the gap between abstract concerns and actionable policy, highlighting the role of data in responsible AI governance.
Shifted the dialogue from problem‑identification to solution‑orientation, giving participants a tangible resource to reference and reinforcing Sabina’s call for evidence‑based regulation.
Speaker: Julie Delahanty
AI is attacking the very foundation of education – teachers and students are outsourcing their thinking, leading to cognitive decline and forcing a return to paper‑and‑pencil exams.
He broadens the scope of the discussion beyond labor markets to the core of human cognition and learning, presenting a stark, personal‑impact perspective on AI’s societal reach.
Prompted a reflective closing from multiple panelists, reinforcing the urgency expressed earlier and linking the need for human‑centric policies to the preservation of critical societal functions like education.
Speaker: Anurag Behar
Re‑thinking how we work – not just job loss but a complete shift in ways of working – is perhaps the bigger issue we face with AI.
She reframes the debate from a narrow focus on employment numbers to a broader transformation of work practices, highlighting the need for organizational and cultural adaptation.
Served as a concluding pivot that synthesized earlier points about skill development, governance, and human‑centric design, and was affirmed by Sabina as a foundational issue.
Speaker: Julie Delahanty
Overall Assessment

The discussion was driven by a series of pivotal remarks that moved the conversation from alarmist predictions of job loss to a layered analysis of job quality, systemic vulnerability, and the necessity of human‑centric governance. Sabina’s early disclosure of corporate admissions and the inequality angle forced the panel to confront the urgency of regulation. Sandhya’s nuanced view of AI‑augmented roles and Julie’s emphasis on participatory design and evidence‑based tools introduced constructive pathways forward. Subsequent interventions—especially Sabina’s focus on gig‑economy redress, the clarification about India’s formal sector, and the introduction of the Global Index—deepened the debate, linking macro‑policy, institutional capacity, and cross‑country learning. Anurag’s final remarks on education broadened the stakes, underscoring AI’s pervasive societal impact. Collectively, these comments reshaped the tone from speculative dread to a pragmatic call for coordinated policy, skill development, and safeguards that keep humanity at the centre of AI’s evolution.

Follow-up Questions
Can we afford to wait for more evidence on AI’s impact on jobs, or should we act now with regulations and social institutions?
Addresses the urgency of policy response to AI‑driven labor market changes.
Speaker: Sabina Dewan
What is the trajectory of AI technology and which specific jobs will be displaced versus created, and what dynamics drive these changes?
Seeks understanding of how AI will reshape employment across sectors.
Speaker: Anurag Behar
Given that AI can automate a large portion of coding, will IT sector jobs inevitably be lost, and what is the impact on other industries like design, marketing, and research?
Probes sector‑specific job displacement and broader cross‑industry effects.
Speaker: Anurag Behar
How can governments and institutions govern AI responsibly to minimize labor market disruption and ensure a smooth transition?
Looks for policy frameworks that protect workers while allowing innovation.
Speaker: Anurag Behar
What lessons from different countries show how AI can create opportunities without deepening inequality?
Seeks cross‑country best practices for inclusive AI deployment.
Speaker: Anurag Behar
Can you elaborate on the role of ‘human wisdom’ in AI‑augmented work and how it changes job design?
Explores the human‑centric aspects of AI and how they affect future job roles.
Speaker: Anurag Behar
What concrete actions should be taken now to address AI’s impact on labor markets?
Requests actionable recommendations for immediate policy and institutional interventions.
Speaker: Anurag Behar
How should we rethink ways of working and workplace structures in light of AI‑driven disruption?
Addresses broader transformation of work practices beyond simple job loss.
Speaker: Anurag Behar
How would you respond to Sabina’s concerns about precarity, regulation, and social protection in the AI era?
Seeks the technology sector’s perspective on policy and social safety‑net measures.
Speaker: Anurag Behar
Empirical measurement of AI‑driven efficiency gains and associated layoffs across sectors, especially in the gig economy and algorithmic management.
Needed to quantify AI’s impact on employment quantity and quality for evidence‑based policy.
Speaker: Sabina Dewan
Impact of AI on job quality, including worker rights, gig platform governance, and algorithmic bias.
Highlights the need to study how AI affects working conditions, not just headcount.
Speaker: Sabina Dewan
Effectiveness of competition policy, antitrust, tax policy, and labor‑law reforms in mitigating AI‑induced labor market shocks.
Calls for research on which policy levers can best cushion AI disruption.
Speaker: Sabina Dewan
Research on cognitive decline among youth linked to AI usage and its implications for employability.
Emerging health/education issue that could affect future labor supply.
Speaker: Sabina Dewan
Development of robust institutional capacity (regulatory, labor, research) to monitor AI’s labor market effects.
Institutional readiness is crucial for timely and effective governance.
Speaker: Julie Delahanty
Human‑centric co‑creation processes with workers, employers, and communities to shape AI systems.
Ensures AI design aligns with worker needs and reduces inequality.
Speaker: Julie Delahanty
Large‑scale data collection (household, firm, worker) to track AI’s real‑world labor impacts, as done in sub‑Saharan Africa.
Provides the evidence base needed for informed policy decisions.
Speaker: Julie Delahanty
Evaluation of the Global Index on Responsible AI as a tool for comparative policy analysis across 138 countries.
Helps governments benchmark and improve AI governance related to labor.
Speaker: Julie Delahanty
Future of Work research on how AI reshapes work practices, workplace design, and employee adaptation.
Explores the broader shift in how work is organized and performed.
Speaker: Julie Delahanty
Effectiveness of universal social protection systems in cushioning AI‑induced job transitions.
Investigates safety‑net mechanisms needed for displaced workers.
Speaker: Sabina Dewan, Julie Delahanty
Training and reskilling models that align with AI‑augmented roles, especially for low‑skill populations in India.
Addresses the skill gap and feasibility of upskilling large informal workforce.
Speaker: Sabina Dewan
Assessment of AI’s role in healthcare, finance, and other sectors for augmenting professionals versus replacing them.
Sector‑specific analysis to understand where AI adds value versus displaces jobs.
Speaker: Sandhya Ramachandran Arun
Understanding how AI‑driven content generation in marketing affects strategic versus execution tasks.
Clarifies which marketing functions are likely to be automated and which remain human‑centric.
Speaker: Sandhya Ramachandran Arun
Governance mechanisms for AI in platform economies to ensure redressal for gig workers affected by algorithmic management.
Seeks policy solutions for algorithmic accountability and worker protection in the gig sector.
Speaker: Sabina Dewan

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Building Population-Scale Digital Public Infrastructure for AI


Session at a glance
Summary, keypoints, and speakers overview

Summary

The panel discussed how to build and scale digital public infrastructure (DPI) for AI diffusion pathways, emphasizing the need for rapid, safe, and inclusive deployment across societies [38-41]. Nandan Nilekani illustrated this by describing a farmer-focused app that took nine months to launch in Maharashtra, was replicated in Ethiopia in three months, and adapted for dairy farmers in three weeks, showing how lived experience can dramatically shorten implementation timelines [4-12]. He announced an ambition to create 100 diffusion pathways by 2030, backed by a global coalition that includes Anthropic, Google, the Gates Foundation, UNDP and welcomes any participant [15-27]. Shankar Maruwada defined diffusion pathways as shared “rails” that compress learning curves, costs and risks, enabling safe, large-scale AI impact across sectors and countries [42-47]. Irina Ghose stressed that successful diffusion requires contextual language, integration into everyday workflows, and iterative refinement, citing Anthropic’s work on multilingual models for ten Indian languages as a concrete example [60-62][66-71]. She also introduced Anthropic’s Model Context Protocol (MCP), a universal adapter that lets AI tools be built once and reused across domains, likening it to UPI for payments [250-254]. Trevor Mundeli warned that fragmented pilots hinder scaling and proposed “scaling hubs” in India and Africa to aggregate funding and expertise, arguing that such hubs can overcome barriers to population-scale deployment [84-99]. He noted that without coordinated diffusion many well-intentioned pilots remain isolated and fail to scale [90-96]. Esther Dweck described Brazil’s Ministry of Management and Innovation, which is reforming procurement, digital infrastructure, and data governance to enable AI, including a new AI R&D program called INSPIRE that brings together state-owned and private actors [122-130][196-204]. 
She emphasized outcome-oriented procurement, sovereign digital identity platforms (gov.br), and the appointment of chief data officers to break data silos and support AI services [128-143][147-161]. On political and economic challenges, she highlighted digital sovereignty and the need to secure data and services domestically while addressing wealth distribution from automation [286-295][300-307]. The panel agreed that safety, auditability, and transparent governance are essential, with Anthropic’s research on model explainability cited as a step toward trustworthy health applications [274-282]. The discussion concluded that by 2030 the network of diffusion pathways should transform DPI into “digital public intelligence,” making AI an invisible, ubiquitous public good [315-317].


Keypoints


Major discussion points


Diffusion pathways as the strategic framework for scaling AI for public good – Nandan announced an ambition to create 100 diffusion pathways by 2030 and described them as “ways of reaching the goal faster” that can be reused across countries [15-20]. Shankar clarified that diffusion pathways are not just awareness but the spread of know-how, trust and institutional capability that enable safe, large-scale AI impact [41-46]. The coalition of governments, foundations and companies (Anthropic, Google, Gates Foundation, UNDP) is meant to develop and share these pathways [22-27].


Institutional reforms needed inside governments – Esther Dweck explained that existing procurement practices (focus on lowest price and risk) hinder innovation; the ministry is shifting to a policy-oriented, outcome-focused procurement mindset and encouraging “innovation procurement” that accepts failure [124-133][138-144]. She also highlighted the need for robust digital infrastructure (digital ID, gov.br platform) and strong data-governance, including chief data officers and a new data-governance decree [145-151][158-162].


Coordinated “scaling hubs” to overcome fragmentation – Trevor described the creation of scaling hubs in India and Africa that pool funding, aggregate pilots and provide a government-level point of contact, thereby reducing the fragmentation that currently blocks population-scale deployment [84-99].


Design principles for AI diffusion: localisation, workflow integration and reusable interfaces – Irina emphasized three prerequisites for diffusion: (1) local language support, (2) embedding AI into existing daily workflows, and (3) an iterative, continuously-improved approach [60-62]. She also introduced Anthropic’s Model Context Protocol (MCP) as a universal “language” that lets developers build once and deploy across sectors, similar to how UPI standardized payments [250-254].


Safety, auditability and transparency as non-negotiable for high-stakes applications – Trevor warned that AI systems used in health must be auditable and transparent; black-box recommendations are insufficient, and mechanisms to trace model reasoning are essential for trust and regulatory compliance [274-282].


Overall purpose / goal of the discussion


The panel convened to define how the global community can build, share and scale digital public infrastructure (DPI) powered by AI, turning isolated pilots into durable public services. By establishing reusable diffusion pathways, aligning procurement and governance reforms, and ensuring safety and localisation, the participants aim to achieve the collective target of 100 AI diffusion pathways by 2030, thereby delivering inclusive, positive impact across agriculture, health, education, and other public sectors.


Tone of the discussion


The conversation began with an upbeat, visionary tone, celebrating rapid rollout successes (nine months → three weeks) and the ambitious 100-pathway target. As the dialogue progressed, it shifted to a more analytical and problem-solving tone, addressing concrete challenges in procurement, data governance, and fragmentation. When safety and political-economic concerns were raised, the tone became cautiously serious, emphasizing the need for auditability and digital sovereignty. Throughout, the tone remained collaborative and forward-looking, ending on a hopeful note about collective action and the eventual “boring” ubiquity of AI.


Speakers

Nandan Nilekani – Co-founder and Chairman of Infosys Technologies Ltd; Founder of Aadhaar (UIDAI); AI thought leader speaking on diffusion pathways for AI. [S16][S17]


Speaker 1 – Event host/moderator who introduced the panelists and managed the session flow.


Shankar Maruwada – Moderator of the panel discussion on building and scaling digital public infrastructure for AI.


Irina Ghose – Managing Director, Anthropic India; over three decades in IT and AI deployment; expertise in model building and AI diffusion. [S10][S12]


Trevor Mundeli – President, Global Health, Bill & Melinda Gates Foundation; expertise in scaling health and agricultural AI pilots. [S4][S5]


Esther Dweck – Minister of Management and Innovation in Public Services, Brazil; focuses on digital public infrastructure, procurement reform, and data governance. [S1][S2]


Additional speakers:


Mr. Om Birla – Chief Guest; Speaker of the Lok Sabha, Parliament of India.


Mr. Martin Chungong – Secretary General, Inter-Parliamentary Union (IPU).


Mr. Laszlo Z – Deputy Speaker, Parliament of Hungary.


Dr. Chinmay Pandya – Representative, All World Gayatri Parivar.


Ms. Jimena – Participant (no further details provided).


Full session report
Comprehensive analysis and detailed insights

Nandan Nilekani opened the session by describing a farmer-focused mobile application that now serves 2.5 million users, giving them real-time price and weather information and even monitoring dairy cows for lactation status [1-12]. He noted that the first rollout in Maharashtra took nine months, followed by a three-month replication in Ethiopia and a three-week adaptation for Amul’s dairy-farmer programme [4-12]. From this experience he coined the term “pathways” – reusable, experience-based routes that let others reach the same goal far more quickly – and announced an ambitious global target of creating 100 diffusion pathways by 2030, backed by a newly formed coalition that includes Anthropic, Google, the Gates Foundation, UNDP and other partners, and is open to any additional member, announced “yesterday or day before yesterday” [15-27][S58].


Shankar Maruwada placed the discussion in a broader historical context, noting that the decisive factor in past industrial revolutions was not superior invention but the diffusion of know-how, trust and institutional capability [38-43]. He defined diffusion pathways as “shared rails” that compress learning curves, costs and risks, enabling safe, large-scale AI impact across sectors and countries rather than being a single platform or app [44-47]. This framing set the stage for the panel’s deeper exploration of how such rails can be built and operationalised.


Irina Ghose stressed that successful diffusion hinges on three practical prerequisites: (1) localisation into the user’s language, (2) seamless integration into existing daily workflows, and (3) an iterative, continuously-improved deployment model [60-62]. She illustrated these points with Anthropic’s work on multilingual models for ten Indian languages, arguing that language support directly expands the set of viable use-cases [66-71]. To avoid rebuilding AI components for each domain, Irina described the Model Context Protocol (MCP), a universal “adapter” that allows developers to create a model once and then plug it into any downstream application, likening it to UPI’s role in standardising digital payments [250-254].
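The “build once, plug in anywhere” idea behind MCP can be illustrated with a minimal conceptual sketch. This is plain Python, not the actual MCP SDK or wire protocol, and the `ToolRegistry` class, the `crop_price` tool and its price figures are invented for illustration only:

```python
# Conceptual sketch of the "universal adapter" idea behind a tool protocol
# like MCP: a tool is registered once against a common interface, and any
# client (chat app, workflow engine, agent) can then discover and call it.
# Illustrative plain Python only; NOT the real MCP SDK or wire format.

from typing import Any, Callable, Dict, List


class ToolRegistry:
    """A shared registry: build a tool once, reuse it from any client."""

    def __init__(self) -> None:
        self._tools: Dict[str, Callable[..., Any]] = {}

    def register(self, name: str) -> Callable[[Callable[..., Any]], Callable[..., Any]]:
        # Decorator that records the tool under a stable, discoverable name.
        def decorator(fn: Callable[..., Any]) -> Callable[..., Any]:
            self._tools[name] = fn
            return fn
        return decorator

    def list_tools(self) -> List[str]:
        # Discovery: clients ask what tools exist before calling them.
        return sorted(self._tools)

    def call(self, name: str, **kwargs: Any) -> Any:
        # Invocation through the common interface, not a bespoke API.
        return self._tools[name](**kwargs)


registry = ToolRegistry()


@registry.register("crop_price")
def crop_price(crop: str) -> str:
    # Stub data source; a real deployment would query a market-price API.
    prices = {"wheat": 2275, "rice": 2300}
    return f"{crop}: Rs {prices.get(crop, 0)}/quintal"


# Any downstream application can now discover and invoke the same tool:
print(registry.list_tools())                      # ['crop_price']
print(registry.call("crop_price", crop="wheat"))  # wheat: Rs 2275/quintal
```

The design choice mirrored here is the one the panel attributed to MCP: the tool author and the tool consumer agree only on the registry interface, so a tool written for one domain (agriculture) needs no changes to be reused by a different client in another workflow.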


Trevor Mundeli identified fragmentation of pilots as a major barrier to population-scale impact. He described the creation of “scaling hubs” in India and several African nations (Rwanda, Nigeria, Senegal, soon Kenya) that pool funding, aggregate disparate pilots and provide a single government-level point of contact, thereby turning a chaotic landscape of small projects into coordinated, fundable programmes [84-99]. He argued that without such hubs, the multitude of isolated pilots cannot achieve the critical mass needed for national rollout [90-96].


Esther Dweck outlined the institutional reforms required within governments to make diffusion pathways work. Her Ministry of Management and Innovation in Public Service (MGI) in Brazil is shifting procurement from a “lowest-price, lowest-risk” mindset to an outcome-oriented, policy-focused approach that tolerates managed risk and encourages “innovation procurement” where failure is accepted as part of learning [124-133][138-144]. She highlighted the importance of robust digital infrastructure, specifically a national digital-ID system and the gov.br service platform, as the backbone for AI-enabled personalised services [145-149]. A new AI R&D programme, INSPIRE (AI for Public Service with Innovation, Responsibility and Ethics), creates a joint institutional arrangement among state-owned firms, private companies and the government to develop AI platforms [196-204]. A forthcoming data-governance decree will appoint chief data officers in every ministry to break data silos and ensure sovereign data handling [207-218]. Brazil’s strategy for data localisation is reinforced by two federal, state-owned companies that run resident clouds, supporting digital sovereignty [292-298]. Moreover, Brazil passed a child-online-protection law requiring age verification for internet users, and the government is piloting a verifiable-credential approach to implement this requirement [290-304]. Capacity-building is also central: four training tracks target managers, IT experts, data stewards and general civil servants to instil a “digital mind” across the public-service workforce [220-228].


Safety and auditability emerged as non-negotiable requirements for high-stakes applications. Trevor warned that black-box health recommendations are almost never adequate; AI systems must be transparent and auditable so clinicians can trace the reasoning behind a suggestion, mirroring the accountability expected of human practitioners [274-282]. Shankar reinforced this tension by asking where the line should be drawn between the rapid pursuit of 100 pathways and the need for rigorous safety safeguards when lives are at stake [265-267], to which Trevor responded that urgency does not excuse lax standards and that India’s DPI stack offers a promising test-bed for safe, frugal innovation [267-273].


Across the discussion, the participants emphasized different aspects rather than outright disagreement. Shankar’s vision of decentralized “shared-rail” pathways contrasted with Trevor’s hub-centric scaling model, reflecting a tension between standards-based diffusion and centralized aggregation [44-47][84-99]. Irina’s call for an “all-in” iterative rollout implied tolerance for early-stage errors, whereas Esther described civil servants’ fear of audit-driven penalties and advocated outcome-oriented procurement that still manages risk [60-62][124-133]. A balance between speed and safety was debated, with Shankar urging rapid diffusion and Trevor insisting on auditable, transparent health AI before scaling [265-267][274-281]. Finally, Irina’s promotion of a universal Model Context Protocol appeared at odds with Esther’s emphasis on digital sovereignty, resident clouds and data localisation [250-254][292-298][290-304].


Across the discussion, the participants converged on four overarching pillars: (1) structured diffusion pathways-whether as shared rails, scaling hubs or universal protocols-to compress learning curves and accelerate AI rollout [13-15][44-47][250-254]; (2) localisation (language support) and embedding AI into existing workflows as essential for user adoption [60-62]; (3) safety, auditability and transparency as indispensable, especially for health and other life-critical domains [274-282]; and (4) robust digital infrastructure and data-governance, including sovereign data strategies and capacity-building for civil servants, as foundational enablers of scalable AI diffusion [145-151][158-162][220-228].


In concluding remarks, Shankar projected that by 2030 the collective effort will have turned today’s Digital Public Infrastructure (DPI) into “digital public intelligence”, where AI is as invisible and ubiquitous as UPI is for payments [315-317][172-176]. The panel’s discussion therefore mapped a roadmap-from concrete pilot experiences and institutional reforms to technical standards and safety frameworks-aimed at achieving the 100-pathway target and ensuring that AI delivers inclusive, trustworthy benefits across agriculture, health, education and beyond.


Session transcript
Complete transcript of the session
Nandan Nilekani

bot which farmers use, and millions of farmers today, 2.5 million farmers, have downloaded this app. And this was built to make sure that farmers have access to the best information about access to prices, access to weather information and so on. And it’s very sophisticated. It took nine months to get this going in Maharashtra. But we learned a lot about how to do these things. And the next implementation was done in Ethiopia. So in Africa, and Ethiopia did the same thing in three months. So essentially what took us nine months the first time around took us three months. And recently, at the request of the Prime Minister, Amul implemented the whole thing. And Amul implemented it for cows, a bot for dairy farmers to understand about the cows and whether they’re lactating or whether they’re, you know, milk and so on.

And that was done in three weeks. So I think you went from nine months to three months to three weeks. So what is the message in that is that if you get the lived experience of implementing these kind of systems for public good, you can actually dramatically reduce the time in which you can do that. And we call these ways of reaching the goal faster, we call them as pathways, because once you have a pathway, then you can get, somebody else can get to the same point quicker. And just like we had this notion that we’ll have 50 in five, 50 countries in five years, we are also now setting an ambitious goal for doing 100 diffusion pathways by 2030.

In other words, by 2030, all of us together across the world will develop these pathways to diffuse the use of AI in a positive way to help farmers, improve the life of young kids, allow people to get jobs through something called Blue Dot. There are so many things going on, but all of them are designed to be effective, to improve and make better people’s lives, to meet aspirations in a very inclusive way so that everybody is in, nobody is left out. And so we announced a partnership. We announced a coalition of this, of 100 diffusion pathways by 2030. We announced that yesterday or day before yesterday. And we have a global coalition. Anthropic is there. Google is there.

Gates Foundation is there. UNDP is there. A whole host of people are there. And it's a very open, big tent. Anybody can join the coalition. But our goal is that all of us work together, in a focused manner, to develop these pathways of diffusion of different kinds of positive AI use cases and then actually make it happen in countries around the world. So just like 50-in-5 was a DPI goal, 100 diffusion pathways by 2030 is the AI goal we have. And we are confident that all of us collectively can get there. So I think this is important. I think it's strategic for the world that we show the good use of AI, and it's strategic that all of us work together to do that.

Thank you very much.

Speaker 1

Thank you so much, Mr. Nandan. At this point, I would love to invite our panelists up to the stage. We'll start by taking a quick group photograph together and then begin the discussion. So let me invite Minister Esther Dweck, Mr. Trevor Mundel, Ms. Irina Ghose, and Mr. Shankar Maruwada, accompanied by Nandan, to be on the stage for a quick group photograph. Thank you. Let me now hand it over to Shankar Maruwada, who will moderate the next panel.

Shankar Maruwada

Good afternoon. We have an exciting panel discussion ahead. Let me start off with where Nandan stopped: a hundred pathways. What are these pathways? These are diffusion pathways to AI impact, safely and at scale. Let me provide a bit of background. France out-invented Britain in the first industrial revolution, yet Britain won it. Britain in turn out-invented the US in steel, and Germany out-invented the US in chemistry, yet it's the US that won the second industrial revolution. What was the crucial thing? It was not better invention, or even innovation. The missing ingredient was diffusion, which the United States of America did much better: diffusing the benefits and the impact of the technology throughout the economy and the society. When we say diffusion, we don't mean awareness or access. Diffusion, as Nandan described, is the spread of know-how, trust, and institutional capability that allows organizations to adopt AI safely and sustainably. As he explained, Maharashtra was the pioneer to do this in India. It's like Sir Edmund Hillary climbing Mount Everest for the first time: he inspires, he creates a pathway for others to follow. And it would be rather stupid if, after he came back, he said, I am not sharing this with others; the pathway I created, I have removed it, so now you find your own pathway. The societies that create such pathways allow a whole lot of others to prosper, to make progress, to create impact inclusively and equitably. That is what Nandan meant by a hundred diffusion pathways: a hundred pathways across sectors, countries, continents. Some may be led by proprietary models, some may be led by sovereign efforts, some may not be; it may differ. It's the choice of the AI adopter to decide which pathway works best for them.

So the diffusion infrastructure we are talking about creating isn't a platform, app, or model. It's shared rails that compress learning curves, cost, and risk, so that AI can be used by all of society, for all of humanity. With that, I would like to begin the panel discussion. Irina, from the model builder's perspective, what needs to be true for AI to be deployable at population scale, not just impressive pilots, especially in high-stakes public systems? What needs to happen?

Irina Ghose

Thank you so much, Shankar. And absolutely a pleasure and honor to be here with all of you. The way I think about it is that AI deployment would seldom, if ever, hit roadblocks because of the complexity or the performance of the model. The only reason it fails to gain scale is the perception in our minds about its complexity. And one of the things that we really feel is that you have to be all in, first yourself, then diffuse it to people around you to make it happen. Now, if you think about it, in a pilot, you've got experts doing it, you've got guardrails, you've got the intensity of people, and you've got a select group.

Now, when that spreads out, you've got a teacher in Bihar implementing it, you've got a health worker in Coimbatore, you've got a small business leader in Indore doing it, who are not into ML. For them, AI will start having significance when it stops being a scientific tool and becomes something intuitive. So three things come into play. The first is that, for diffusion, it needs to be contextual to the local language that you speak. Second, it needs to be in the workflow of what you're doing every day, so you don't need to do net new things. And the third is that you have to be iterative and stay at it to make it happen.

And I'll give you a small example of how diffusion is happening. First of all, Shankar, really honored to have worked with EkStep to make it diffuse across so many realms of life. And at Anthropic we also said that it's not a technology for the sake of the technology, only in the hands of developers and builders. We found that India happens to be the second largest user base of Claude outside the US. So a big round of applause to all of us out here for making that happen. And what we also felt is that when we are building tools, one of the tools you might have heard of is Cowork, for the kind of work which earlier used to be done a lot by developers.

But now it is for people who are information workers, or who are just thinking about how to solve things. The idea is that you do not have to develop code or read a lot of intense things; you can make the tool work for you. So in my mind, diffusion really means, first, how do I make everything that I do AI-first? Second, how do I enthuse everybody in the ecosystem around me in India? And third, how am I giving back to everybody in the last mile to make it happen?

Shankar Maruwada

Fantastic. One of the things I liked about what Anthropic CEO Dario Amodei said is: very soon, imagine a country with a whole bunch of geniuses living in data centers. What will that country do? Think about it. But till we reach there, and Dario says that's in two or three years, Trevor, as president of global health at the Gates Foundation, you are dealing with a situation where you've seen a whole bunch of AI pilots, and not too many of them have scaled. From your experience, what separates pilots from systems that have scaled and become institutional? What separates an experiment from scaled, institutional, sustainable impact?

Trevor Mundel

Thank you, Shankar. And thank you for the invitation to be on this good panel, and also for the overview you gave me a few days ago of the very good work you're doing at EkStep. I learned about Open AgriNet and where that has made progress. But on this issue of scaling of AI: this morning I had an opportunity to sit down with the heads of entities which we call scaling hubs. There are two of them here in India, and there are three, soon to be four, in Africa. And there's also a pan-African venture called Smart Africa. And you might say, well, what are these scaling hubs? The idea is that we support a partnership with the governments, now in Rwanda, Nigeria, Senegal, and soon Kenya, wherein we place funding that the government can use to take the pilots that are out there and really push them to large scale.

And why would we need a hub like this to do that? Well, one of the big barriers that we are currently seeing is fragmentation: many, many ventures, some that we fund, some by other funders, everything with very good intent. Let's do a small pilot. Let's quickly do something over here. Thousands of them occurring out there. Take it at a government level: they have people approaching the Ministry of Agriculture, the Ministry of Education, the Ministry of Health, the Ministry of Finance, all of them with different groups, and on the DPI front, all of them trying to put in place the necessary DPI infrastructure to support their pilots. And it is this fragmentation which I think is a big inhibitor of scaling to the real population scale that we need.

So we are going to invest in these hubs that can be points of aggregation. We don't want to inhibit diffusion. People have the idea of diffusion as a more random process which goes anywhere, and there's something good about that. But if we can channel the diffusion into these centers of excellence at the country level, the feedback that we've had from the governments is that that is the way we are really going to get to scale more rapidly. Thank you.

Shankar Maruwada

Excellent point. Excellent point, Trevor. And I think you brought out the inherent stress in the phrase diffusion pathways. Diffusion, by definition, goes everywhere, right? Pathways, by definition, are fixed. So it's about how you spread a technology along certain fixed pathways towards certain impact. It is indeed a stress. I believe that stress needs to be there, because we are talking of safe AI impact at scale. But it is indeed a challenge, and together we have to solve it very quickly. I want to talk a bit about Minister Esther Dweck's ministry, MGI, the Ministry of Management and Innovation. Isn't that a cool concept? The government of Brazil has a minister and a ministry looking after the idea of innovation and management.

They are collaborating very closely with India on a range of issues, and it's my honor, Your Excellency, to have you here. Minister, I want to ask you a question. Scale efforts and diffusion often fail inside government not because of technology, but because of procurement, process change, and accountability. What has to change inside the state for AI to move from pilots to durable public services?

Esther Dweck

Thank you, Shankar. Thank you for inviting me and also for the partnership that we have with India. And Brazil is looking for this partnership with India because of scale. If anything can be scaled up in India, it can be in Brazil because compared to India, we are not such a big country. But compared to many other countries, very large. So for us, very important, this partnership. But when you talk about the problem inside the state, our ministry was created. The whole name is Ministry of Management and Innovation in Public Service. So we are focusing on innovation inside the public services. And we created a special secretary for state transformation because we saw that the state had to be transformed in order to actually be able to have innovation.

Because if we stay with the same way of doing procurement, we actually won't be able to do it. So we think that, in terms of AI, we need to transform the state in three main areas. The first one is procurement, for sure; for any kind of innovation, procurement needs to change. Then the infrastructure, especially the digital infrastructure, and of course the governance. And when I talk about the procurement process: usually people are looking for the lowest price and lowest risk, and usually civil servants are very afraid of doing procurement because the auditing bodies are trying to see if they're doing something wrong. So they usually try to go for the lowest risk possible.

And this is what prevents innovation inside the government, especially because innovation comes with errors. We know that any innovation might come with error. And if the civil servant cannot make any mistakes, then we never innovate. So one of the things that we found out, when we were asking how to do innovation procurement in the government, was that the first thing people said was: I'm afraid of making any mistakes, because then the auditing body will come after me and I won't be able to be a civil servant. So what we have done is to change the mindset of the procurement process. Instead of being process-oriented, we are looking to be more policy-oriented, looking at the outcomes and not only the lowest price.

And with many other ministries, we are discussing how to actually build that culture of innovation procurement, with the idea that it may fail. And you can also interact with the one you're buying from. Because, of course, you're buying something that doesn't exist; how do you explain to them what you need? So there are a lot of things that you have to change in terms of procurement in order to actually be able to do AI. And the second thing is the digital infrastructure. As Nandan said before, Brazil, since 2023, when we came here for the G20 in India, has embraced this idea of DPI as something very strong.

We already knew that we had something that could be called DPI, but we didn't know the concept before. And one of the things that was very important for us was our digital ID and our digital platform for services, both called gov.br. And based on this platform, we are now discussing how to optimize, but also how to have more personalized services. If you know the citizen, you will be able to provide them specialized services, and we're using AI to do this: to work out what people actually need. So I think having a good DPI infrastructure, especially in terms of identification, is key, and of course we also need better data governance.

That's the third thing I would like to mention: the governance inside the state. When we launched our plan for AI (and today we had a session on the Brazilian AI plan), the first thing the president said is that we need our database. He said we need the Brazilian database. We cannot have silos anymore. We cannot have one minister saying, no, this is my data, no one can access this data. So we have to do it, of course, preserving privacy, in a secure way. So we discussed all the data governance. We're about to launch a new decree on data governance, having every ministry appoint a chief data officer, someone who actually knows the data and knows how to use the data.

So we are actually looking at these things in order for the state to be able to innovate with AI. That's it. Thank you.

Shankar Maruwada

Wonderful. Thank you. Irina, you've been in the IT space for three decades. You've seen the Internet boom and bust, and now you're seeing AI. From your vast experience, what is the most common failure mode when AI moves from pilots to everyday work? And what kind of safety infrastructure actually prevents it?

Irina Ghose

Yeah, I think one of the things that we have to remember is that the failure never happens with a big bang. It just slowly dies, because people gradually reduce the level of interaction they have with it, and you suddenly realize that it's not relevant anymore. So what really needs to happen is that you need to keep it in a way that people use it daily, and use it in a way that is contextual for each of them. For example, one reason it might fail is because the data sets speak to a country of a different nature, one setting benchmarks in banking and financial systems, when agriculture is the biggest thing that we require. Hence collecting data for Indian languages, nuancing it by, say, legal, by agriculture, by what people are speaking in that dialect and that language, is very critical. So three things need to happen. First of all, keep it contextual to the domain, the micro domain, in which it is required. At Anthropic we have worked closely to ensure that we now have Indic language availability for 10 Indian languages, from Hindi to Malayalam to Gujarati to Urdu, available in the latest models and incrementally improving day by day. And the last part I would say is ensuring that, whatever you are doing, the ROI we look at should be: if I invest in a language, say Bengali, how many net new use cases have been opened up because of that, and how many more people have got the benefit of that? And I think the work that we are doing with EkStep, in the fields deployed, education, healthcare, everything, is the litmus test that we should be measuring ourselves on.

Shankar Maruwada

I want to ask a question to the audience, by raising hands: how many of you use UPI? Keep your hands up if you know how UPI works, what's the protocol behind it, what's the technology behind it. Hands are steadily coming down. This is my point: we don't care about technology as long as it works. For something to work at population scale, technology has to be boring; technology has to be invisible. Till it has diffused, it is just some magic mystery thing that we are all stuck with, figuring out what to do. It's a long journey from technology as magic to technology as normal, boring. In fact, a wise old man once told me: when you stop thinking of something as technology, that's when it has diffused. Five hundred years ago, this was magical ocular technology.

It allowed someone to see. Now we don't think of it as technology. A day will come when we don't think of AI as technology. That is the day we can say that AI has diffused through all of society. We have some way to go for that. Trevor, when you hear of things like Open AgriNet, some exciting work happening, what makes you think that it feels like infrastructure, versus yet another project going down the path of pilotitis, death by pilots?

Trevor Mundel

Well, I do look a little bit with envy at Open AgriNet. Having looked across the work that the foundation does in agriculture and in health, traditionally the narrative has been how fortunate those health folks are, because there's such huge funding into the health areas, such huge investment in research, in genomics, in human health, and much less in plant genomics, which admittedly is potentially more complex, and in the clinical trial infrastructures for developing new products on the agriculture side versus the human health side. But now we come to AI, and I have to say I look at Open AgriNet and I think that the agriculture community is ahead of human health in terms of the implementation of a system which is personally useful to a farmer, a smallholder farmer, for instance: being able to get the information they need, being able to determine what crop disease they have to deal with, or a disease in their cattle, and what the weather is going to be, and how they can maximize the finances of their small farm.

All of these types of things I would love to see in the health space: a personal health assistant. In low- and middle-income countries, so many people are not very close to a tertiary hospital, and they may be 10 or 20 miles even from a primary health care clinic. Can we not provide them with a system that can personally give them the information they need in a safe way? And I think Open AgriNet really puts those components of infrastructure together. The way that it's modular, the way that you can adapt it to the local circumstances, is in many ways exactly what we need on that personal health side of the picture. So I have some envy, but I hope we can duplicate that on the health side.

Thank you.

Shankar Maruwada

Thank you, Trevor. Open AgriNet is just a group of organizations coming together, collaborating, as Trevor said, each bringing in one piece of the puzzle, so that together we can create those diffusion pathways. And as Nandan said, that is what allows us to take something from Maharashtra, which took nine months, to Ethiopia in three months, and back to India in three weeks; from agriculture to livestock, from India to Ethiopia, from Asia to Africa and back. That is the exciting possibility that India has been on the journey of for the last 15 years, what we call DPI. The thing about DPI is that when you start with a strong use case in mind, as Irina and others have said, you harness technology, so technology becomes a good slave to a very powerful cause.

Then you take advantage of rapidly evolving technology. Minister Dweck, if you designed a national diffusion pathway for one public service, what would you prioritize first: institutions, incentives, data readiness, or governance?

Esther Dweck

Well, it's difficult to choose only one thing, I guess. From this management perspective, you're always looking for some kind of systemic approach, trying to look at all these things together. And actually, we recently launched an R&D program for AI in Brazil. It's called INSPIRE in English; in Portuguese it means breathe, inspire, with the same acronym, and it stands for AI for Public Service with Innovation, Responsibility, and Ethics. And it has this systemic approach inside it. Because the first thing is that we created this new institutional arrangement. It's not new, but in this R&D project we have the government, of course, we have some state-owned companies, we have some private companies, and our innovation ecosystem in Brazil, all of them brought together in order to help the government have new AI platforms.

Because, although we're already using AI in Brazil, we saw that we have a significant lack of technological expertise and a lack of financial support as well. So we're trying to create this platform where we can actually offer many bodies of the government different solutions that can be used in many different areas, as you said, and as I was saying before. So, first, we are discussing how to have more sovereignty over the data and how to actually use it better, but also how to get the data ready to be used. As I was explaining before, we are using AI to help improve our data sets, so it's going both ways.

Another thing, from the governance perspective, of course, is that we're creating, as I mentioned, these shared tools and common practices. Specifically in this project, we're creating this generative AI platform, and we're trying to apply it to different solutions. So recently, at the end of last year, we had this university enrollment exam for people finishing high school. So we created this complete service for them to know, when they're finishing school, what they're going to do. Are they going to the job market? Are they going to enroll in university? How to apply? What's the best thing for them? So we're using AI to help them actually decide this. And we're doing the same thing for health care and for the

agriculture sector as well. So we're looking at all these things, and, of course, at capacity building. We are doing a lot of training of civil servants. We have four tracks, actually: for the people who are the top managers, for IT experts, for people controlling data, and for regular civil servants. Because when we're talking about state transformation, one thing you have to train and change, of course, is the civil servants. Nowadays they have to have a digital mind, and some of them have been there for many years and didn't have the digital capabilities. So we're training all of them in digital capabilities, and specifically in AI as well, in order to think about how to use this new technology in their regular work to improve civil service.

So I think it’s a more systemic approach there.

Shankar Maruwada

Pathways are like digital rails. What should model developers focus on so that AI can plug into these pathways safely across sectors and countries?

Irina Ghose

Very interesting. And I'll try to paint the picture by giving some context. Now, think about it. We're talking a lot about agriculture; it has the last mile. Now, if you were to solve for that farmer day in and day out, there are various kinds of work that they have to do. Look at the weather conditions: one source of data. Look at how the crop yield is performing: another source of data. The market prices: another source of data. Whatever has to be done for reaping and sowing. So with these kinds of data, if anybody wants to infuse AI on top of that and has to build the integration every time, it is so cumbersome.

Now, it's the same thing that, Nandan, you've been talking about: at one point in time, we all had different chargers and connectors, until the universal adapter came and took that problem away. We all use UPI for digital payments; do we know anything about the technology behind it, how the small micropayment is routed? We have no idea. So one of the things to be done here is to have a universal language which accesses the tools as well as the data. So at Anthropic we came out with this concept in 2024 called the Model Context Protocol. And very simplistically put, I think of what MCP is to AI as, say, what UPI was to payments.

And in effect, what it really does is you develop things once and make them MCP-ready, and for anything else that you want to do further, you do not have to keep writing it again and again. So all the use cases of agriculture, healthcare, anything else put together, can happen seamlessly. Why does it matter for India? There's a lot of data which already exists in health, in education, in the various ways that citizen services run, and that is a rich level of data. So if we make this data AI-ready, using the tools which are out there, then the diffusion, and that accountability of everybody coming together, will be that much quicker.
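[Editor's note] For readers unfamiliar with MCP, the "develop once, call from anywhere" idea above can be made concrete. MCP is a JSON-RPC-based protocol in which a server advertises its tools through a `tools/list` request and executes them through `tools/call`, so any MCP-aware client can use them without a bespoke integration per data source. The sketch below is an illustrative toy in plain Python, not the official SDK; the `mandi_price` tool and its demo price are invented for the example.

```python
# Toy illustration of the MCP idea: a server describes its tools once
# ("tools/list") and any client invokes them uniformly ("tools/call").
# The tool name and the price data below are invented for the example.

TOOLS = {
    "mandi_price": {
        "description": "Latest market price for a crop (demo data)",
        "inputSchema": {"type": "object",
                        "properties": {"crop": {"type": "string"}},
                        "required": ["crop"]},
        "handler": lambda args: {"crop": args["crop"],
                                 "price_inr_per_quintal": 2150},
    },
}

def handle(request: dict) -> dict:
    """Dispatch a JSON-RPC-shaped request the way an MCP server would."""
    method = request["method"]
    if method == "tools/list":
        # Advertise every tool's name, description, and schema,
        # but not its private handler.
        return {"tools": [{"name": name,
                           **{k: v for k, v in tool.items() if k != "handler"}}
                          for name, tool in TOOLS.items()]}
    if method == "tools/call":
        params = request["params"]
        tool = TOOLS[params["name"]]
        return {"content": tool["handler"](params["arguments"])}
    return {"error": f"unknown method: {method}"}

# A client only needs the protocol, not the tool's internals:
listing = handle({"method": "tools/list"})
result = handle({"method": "tools/call",
                 "params": {"name": "mandi_price",
                            "arguments": {"crop": "wheat"}}})
```

The real protocol adds transports, sessions, and capability negotiation, but the shape is the same: describe a tool once, and every client can discover and call it without fresh integration code.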

Shankar Maruwada

Excellent. A lot of people who deploy AI have an old notion that it's like normal software: you buy great software, it is perfected, you deploy it, and you can close the contract and go away. In AI, that is just the start, because as you use it, data comes in, the data gets better, the models get better. With better models, you provide better services; usage increases; more usage means more data. This cycle runs, and while it is running, the models and the data keep improving. So for a lot of adopters: once you go beyond procurement, how do you continuously invest to upgrade and evolve? That's again a very important question. So when we talk of 100 diffusion pathways, these are 100 diffusion pathways to safe AI impact at scale, which creates a second stress, and I'll come to you on that, Trevor.

When lives are at stake, where do you draw the line between speed, 100 pathways by 2030, and safety? And coming from health, safety means literally lives, right?

Trevor Mundel

Yes, Shankar, there are a lot of lives at stake, and I feel the urgency. Every year we don't have the next generation of malaria vaccines, we see hundreds of thousands of young children dying. Every year we don't have a personalized education coach for every child, no matter where they are, we see a tremendous amount of human potential wasted. So there is this urgency to get things done, and that might make one think very carefully on the safety front. And it is that safety issue where people in the health area are saying: we need to take a step back, we need to look carefully at the frameworks before we just jump in with, say, the personal health assistant I talked about; how would that be gated, how would that be guarded?

I do think that, because of the excellence of the DPI stack here in India and because of the thousands of application efforts I see, you are going to probe those frameworks for safe introduction, probably first in a context which is, as Nandan was mentioning, the frugal innovation that will be relevant across lower-middle-income countries and actually beyond. So I do think that we are very much looking at India as the foundry of AI application, and we want to see those frameworks whereby we can safely introduce the technology. In terms of the technology itself, just having a black-box system that gives a health recommendation is almost never adequate, almost never satisfactory.

These systems need to be auditable. And I have to say that Anthropic has made quite a lot of progress in their research on how these concepts, these recommendations, are actually represented in the model. People want to be able to audit that. They don't just want something that comes out of nowhere. If you have a human clinician who makes an error, you can talk to that person. You can say: why did you think this was the case when you made a misdiagnosis here? Was it because you didn't elicit the right question from the patient, or you transcribed incorrectly? And that is the kind of transparency that we actually demand of the AI systems at the end of the day.

So I think that, between the work going on here in India and some of that transparency research, we can get there.

Shankar Maruwada

Thank you, Trevor. Minister Dweck, as you’re thinking of implementing AI solutions at scale, what is the hardest political or economic challenge, and what are some tips on how one should deal with it?

Esther Dweck

Okay. I think it's a political economy issue. In Brazil, of course, one thing we are looking at is the workforce problem, because we may be going toward this utopia where no human needs to work anymore and the machines work for us. So how do we actually create, and divide, the wealth that comes from these machines working? That's one point. But more concerning in the current period in Brazil is digital sovereignty. Of course, very few countries, maybe only two countries in the world, are totally digitally sovereign right now. But I think we have to increase our digital sovereignty in terms of being able to

have our services and be able to operate them, to know where our data is, to know how we will be able to continue providing services to our population. So we are discussing a lot of this in Brazil, how to increase our level of digital sovereignty. Of course, we know we will probably not be totally digitally sovereign within a few years, but at least we aim to increase it. And we're actually working with our suppliers in order for them to offer us more sovereignty, or at least some assurance that we will not have any discontinuity. So using the state capacity and the state's procurement purchasing power is very important for this.

And we're actually using it in our talks with our suppliers. And we discuss this sovereignty on three levels. The first is the data level: for this, we're bringing the data back to Brazil. We have, as I mentioned before, two federal state-owned companies that host resident clouds, so we know where the data is. But only knowing where the data is, is not enough, so the second level is increasing our operational access to the data. And the third level is the technology you're using, something that we've been discussing a lot here. It's not directly related to AI, but it's related to digital services. One thing that we're doing together here in India, using a technology that was developed here, verifiable credentials, was very important for us: we are using it right now in two pilot projects, but we want to scale it up.

One is related to rural credit, and the second is related to something that I think the whole world is discussing: how to protect children online. In Brazil we passed a law last year, a very important law. It passed very quickly, after one of the digital influencers showed what was happening to children on the Internet, especially on social media, and the bill says that by 17 March you have to know the age of the person accessing the Internet. So how do you do this in a way that protects privacy? We don't actually want to know what people are using. So a lot of things were discussed, and we're trying to use verifiable credentials to do this age verification in a way that is very simple and easy for people, and so that people are not afraid that the government is watching the Internet.

So I think this is the way to make things that are actually useful and important to protect our citizens but also to provide them with very good services.

Shankar Maruwada

Thank you. Today’s topic was building population-scale digital public infrastructure for AI. By 2030, when we will have made a lot of progress on that, we will stop calling DPI digital public infrastructure and start calling it digital public intelligence. With that, a big thank you to all my panelists and to the audience. Thank you.

Irina Ghose

Thank you. Shankar, if I can just request you to present a token of appreciation to the panel. Thank you. Now the next session is about to start on a very unique topic, AI for Democracy. So we request all the audience here to remain seated. A very wonderful topic, AI for Democracy, and we are very blessed that today we have with us Honorable Chief Guest, Mr. Om Birla ji, Speaker of Parliament of India, Mr. Martin Chungong, Secretary General, IPU, Mr. Laszlo Z, Deputy Speaker, Parliament of Hungary, Dr. Chinmay Pandya from All World Gayatri Parivar, Ms. Jimena.

Related Resources: Knowledge base sources related to the discussion topics (10)
Factual Notes: Claims verified against the Diplo knowledge base (7)
Confirmed (high)

“Nandan Nilekani described a farmer‑focused mobile application that now serves 2.5 million users, giving them real‑time price and weather information.”

The knowledge base states that 2.5 million farmers have downloaded the app and that it provides price and weather information [S3].

Confirmed (high)

“The first rollout in Maharashtra took nine months, followed by a three‑month replication in Ethiopia and a three‑week adaptation for Amul’s dairy‑farmer programme.”

Evidence in the knowledge base records the Maharashtra implementation lasting nine months, the Ethiopia rollout three months, and the Amul dairy implementation three weeks [S20] and [S19].

Confirmed (high)

“Nandan announced an ambitious global target of creating 100 diffusion pathways by 2030, backed by a coalition that includes Anthropic, Google, the Gates Foundation, UNDP and other partners, and is open to any additional member.”

The knowledge base confirms the announcement of 100 pathways to 2030 and a coalition that includes Google, the Gates Foundation and UNDP and is open to new members [S21] and [S29].

Additional Context (medium)

“The coalition also includes Anthropic.”

Anthropic is not mentioned in the available sources; the coalition members listed are Google, Gates Foundation and UNDP, so Anthropic’s participation is not confirmed by the knowledge base.

Additional Context (medium)

“Shankar Maruwada defined diffusion pathways as “shared rails” that compress learning curves, costs and risks, enabling safe, large‑scale AI impact across sectors and countries rather than being a single platform or app.”

The knowledge base discusses diffusion as moving beyond single, concentrated LLM deployments toward shared, domain-specific pathways, aligning with this description [S68] and [S69].

Additional Context (medium)

“Irina Ghose said successful diffusion requires (1) localisation into the user’s language, (2) integration into daily workflows, and (3) an iterative deployment model.”

The knowledge base highlights multilingual AI work for Indian languages and the importance of language support for expanding use-cases, which supports the localisation point [S73] and [S74]; the other two prerequisites are consistent with broader DPG principles but are not explicitly cited.

Additional Context (low)

“Irina described the Model Context Protocol (MCP) as a universal “adapter” that lets developers create a model once and plug it into any downstream application, likening it to UPI’s role in standardising digital payments.”

The knowledge base references UPI as an example of a standardised digital-payment interface, providing context for the analogy, but it does not contain information about the Model Context Protocol itself [S23].

External Sources (74)
S1
A Digital Future for All (morning sessions) — – Esther Dweck (Minister, Brazil) discussed DPI for efficient government services, financial inclusion, and environmenta…
S2
(Interactive Dialogue 3) Summit of the Future – General Assembly, 79th session — – Esther Dweck (Minister of Management and Innovation in Public Services of Brazil)
S3
Building Population-Scale Digital Public Infrastructure for AI — – Esther Dweck- Irina Ghose – Irina Ghose- Esther Dweck – Nandan Nilekani- Trevor Mundeli- Esther Dweck
S4
Transforming Health Systems with AI From Lab to Last Mile — I’ll ask you to take a seat. When you said, is there anyone who has not visited a doctor, instinctively I was asking, do…
S5
Transforming Health Systems with AI From Lab to Last Mile — -Trevor Mundel: Dr. Dr. Trevor Mundel (medical degree and Ph.D. in mathematics), Rhodes Scholar, extensive experience in…
S6
https://app.faicon.ai/ai-impact-summit-2026/transforming-health-systems-with-ai-from-lab-to-last-mile — And welcome. And… And her background is also in this both biomedical field, science innovation field, but also has ext…
S7
Keynote-Martin Schroeter — -Speaker 1: Role/Title: Not specified, Area of expertise: Not specified (appears to be an event moderator or host introd…
S8
Responsible AI for Children Safe Playful and Empowering Learning — -Speaker 1: Role/title not specified – appears to be a student or child participant in educational videos/demonstrations…
S9
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Vijay Shekar Sharma Paytm — -Speaker 1: Role/Title: Not mentioned, Area of expertise: Not mentioned (appears to be an event host or moderator introd…
S10
https://dig.watch/event/india-ai-impact-summit-2026/regulating-open-data_-principles-challenges-and-opportunities — Thank you so much, Vedashree. That was very concise and even compelling. Especially coming from a regulatory standpoint….
S11
Keynote-Dario Amodei — – Irina Ghos: Managing Director for Anthropic India, has three decades of experience building businesses in India (menti…
S12
Building Population-Scale Digital Public Infrastructure for AI — – Irina Ghose- Esther Dweck – Nandan Nilekani- Irina Ghose
S13
https://dig.watch/event/india-ai-impact-summit-2026/ai-meets-agriculture-building-food-security-and-climate-resilien — Dr. Chaturvedi leads our national effort in agriculture and farmer’s welfare. Mr. Johannes Jett, he is the Regional Vice…
S14
https://app.faicon.ai/ai-impact-summit-2026/ai-for-agriculture-scaling-intelegence-for-food-and-climate-resiliance — So we are happy to have support and assistance from MSSRF in that direction. My final question is to Mr. Shankar Maruwad…
S15
AI for agriculture Scaling Intelegence for food and climate resiliance — Thank you, madam. You have rightly pointed out the need to be more sensitive and while developing systems for inclusivit…
S16
Keynote-Rishad Premji — -Mr. Nandan Nilekani: Role/Title: Not specified; Area of expertise: Artificial intelligence (described as pioneer and th…
S17
High Level Session 2: Digital Public Goods and Global Digital Cooperation — – **Nandan Nilekani** – Co-founder and chairman of Infosys Technologies Limited (participated online) Nandan Nilekani, …
S18
https://dig.watch/event/india-ai-impact-summit-2026/fireside-conversation-01 — Thank you so much, Mr. Sikka, for your profound and very interesting remarks. And of course, your work at VNI also exemp…
S19
Building Population-Scale Digital Public Infrastructure for AI — bought which farmers use and millions of farmers today, 2.5 million farmers have downloaded this app. And this was buil…
S20
Fireside Conversation: 01 — I don’t know, that makes me a grandfather. So I think when you talk about diffusion, and you have to think of AI, everyb…
S21
Building Scalable AI Through Global South Partnerships — Yeah, thank you so much. And you talked about DPI, you talked about the private sector, public coming together. It’s the…
S22
Fireside Conversation: 01 — This fireside conversation featured Nandan Nilekani, co-founder of Infosys and architect of India’s Aadhaar system, and …
S23
Collaborative AI Network – Strengthening Skills Research and Innovation — um well I mean as Saurabhji the chair of the working group for democratization of AI spoke about there are some fundamen…
S24
https://app.faicon.ai/ai-impact-summit-2026/collaborative-ai-network-strengthening-skills-research-and-innovation — So we have to think about it from a user life perspective. So this is really, I think, a bit about the use case adoption…
S25
Keynote Address_Revanth Reddy_Chief Minister Telangana — Socio‑economic impacts and workforce considerations
S26
AI Meets Agriculture Building Food Security and Climate Resilien — Shankar Maruwada describes how the successful development of Mahavistar involved collaboration between multiple stakehol…
S27
AI for agriculture Scaling Intelegence for food and climate resiliance — So we are happy to have support and assistance from MSSRF in that direction. My final question is to Mr. Shankar Maruwad…
S28
Setting the Rules_ Global AI Standards for Growth and Governance — Yeah, no, that’s a great question. I think from sort of a market adoption perspective, a lot of our technology, like gen…
S29
https://dig.watch/event/india-ai-impact-summit-2026/building-population-scale-digital-public-infrastructure-for-ai — And I’ll give you a small example as to how diffusion is happening. First of all, Shankar, really honored to have worked…
S30
Open Forum #64 Local AI Policy Pathways for Sustainable Digital Economies — Abhishek Singh: Thank you for convening this and bringing this very, very important subject at FORC, like how do we bala…
S31
Al and Global Challenges: Ethical Development and Responsible Deployment — Alfredo Ronchi:Most interesting presentation from the standpoint of China. Thanks a lot for this date. And now we will t…
S32
Safe and Responsible AI at Scale Practical Pathways — Right. So I think my perspective is more as a practitioner because the last almost three decades I’ve been a solution bu…
S33
Opening and Sustaining Government Data | IGF 2023 Networking Session #86 — Another notable challenge was the need to convert data between Arabic and English. This language barrier required meticu…
S34
Open Forum #56 Shaping Africas Digital Future a Forum on Data Governance — The Minister argues that Sierra Leone’s success in digital transformation over the past 6-7 years resulted from strategi…
S35
Democratizing AI Building Trustworthy Systems for Everyone — Crampton argues that none of Microsoft’s five strategic pillars for AI diffusion (infrastructure, skilling, multilingual…
S36
Indias AI Leap Policy to Practice with AIP2 — The discussion revealed tensions between global harmonization and local adaptation needs. Adams argued against one-size-…
S37
Artificial intelligence (AI) – UN Security Council — Algorithmic transparency is a critical topic discussed in various sessions, notably in the9821st meetingof the AI Securi…
S38
Building Indias Digital and Industrial Future with AI — Speaker 1 highlights a key regulatory challenge where AI systems need to be explainable and accountable, but in security…
S39
Catalyzing Global Investment in AI for Health_ WHO Strategic Roundtable — Verified AI extends beyond accuracy to encompass complete transparency in decision-making processes. Brey advocated for …
S40
High-Level Session 3: Exploring Transparency and Explainability in AI: An Ethical Imperative — Doreen Bogdan-Martin: Thank you, and good morning again, ladies and gentlemen. I guess, Latifa, picking up as you were a…
S41
Operationalizing data free flow with trust | IGF 2023 WS #197 — However, there are calls for the development of horizontal, interoperable, and technologically neutral policy frameworks…
S42
AI for agriculture Scaling Intelegence for food and climate resiliance — It is being designed as a replicable public infrastructure model for India and the entire global south. In partnership w…
S43
Setting the Rules_ Global AI Standards for Growth and Governance — Develop modular, interoperable standards systems that can be adapted across different sectors and use cases without star…
S44
Building Population-Scale Digital Public Infrastructure for AI — Open AgriNet demonstrates successful modular, adaptable infrastructure model for other sectors
S45
WS #280 the DNS Trust Horizon Safeguarding Digital Identity — These key comments fundamentally shaped the discussion by elevating it from technical implementation details to strategi…
S46
African Union (AU) Data Policy Framework — While data localisation is often seen as an expression of state sovereignty, as a possible policy option, data localisat…
S47
NRIs MAIN SESSION: DATA GOVERNANCE — Collaboration is seen as essential for effective implementation and enforcement of data protection laws and regulations …
S48
Building Population-Scale Digital Public Infrastructure for AI — Dweck highlights digital sovereignty as a major political and economic challenge, emphasizing the need for countries to …
S49
Digital politics in 2017: Unsettled weather, stormy at times, with sunny spells — Second, in 2017 we can expect further pressure on data localisation (a practice which requires service providers and/or …
S50
WS #111 Addressing the Challenges of Digital Sovereignty in DLDCs — The main areas of agreement included the need for local data infrastructure, capacity building, harmonized policies for …
S51
Cloud computing and data localisation: Lessons on jurisdiction — A hybrid system – where data localisation is generally prohibited, except for data directly affecting national security …
S52
A digital public infrastructure strategy for sustainable development – Exploring effective possibilities for regional cooperation (University of Western Australia) — In conclusion, the discussion highlighted the need to overcome the challenges posed by the siloed approach, trade agreem…
S53
Collaborative AI Network – Strengthening Skills Research and Innovation — um well I mean as Saurabhji the chair of the working group for democratization of AI spoke about there are some fundamen…
S54
Artificial intelligence (AI) – UN Security Council — Algorithmic transparency is a critical topic discussed in various sessions, notably in the9821st meetingof the AI Securi…
S55
What is it about AI that we need to regulate? — What is it about AI that we need to regulate?The discussions across the Internet Governance Forum 2025 sessions revealed…
S56
High-Level Session 3: Exploring Transparency and Explainability in AI: An Ethical Imperative — Doreen Bogdan-Martin: Thank you, and good morning again, ladies and gentlemen. I guess, Latifa, picking up as you were a…
S57
Building Population-Scale Digital Public Infrastructure for AI — And I’ll give you a small example as to how diffusion is happening. First of all, Shankar, really honored to have worked…
S58
Building Population-Scale Digital Public Infrastructure for AI — Launch 100 diffusion pathways by 2030 initiative with global coalition including Anthropic, Google, Gates Foundation, an…
S59
https://dig.watch/event/india-ai-impact-summit-2026/building-population-scale-digital-public-infrastructure-for-ai — And this is what prevents innovation inside the government, especially because innovation comes with errors. We know tha…
S60
Developing capacities for bottom-up AI in the Global South: What role for the international community? — Capacity Building Implementation Gill warns against repeating past mistakes in global development initiatives where eff…
S61
Democratizing AI Building Trustworthy Systems for Everyone — Crampton argues that none of Microsoft’s five strategic pillars for AI diffusion (infrastructure, skilling, multilingual…
S62
Indias AI Leap Policy to Practice with AIP2 — The discussion revealed tensions between global harmonization and local adaptation needs. Adams argued against one-size-…
S63
Artificial intelligence (AI) – UN Security Council — Algorithmic transparency is a critical topic discussed in various sessions, notably in the9821st meetingof the AI Securi…
S64
Catalyzing Global Investment in AI for Health_ WHO Strategic Roundtable — Verified AI extends beyond accuracy to encompass complete transparency in decision-making processes. Brey advocated for …
S65
Toward Collective Action_ Roundtable on Safe & Trusted AI — Cool. So I think we just have to be very, very careful here of the sort of, you know, the Silicon Valley approach of mov…
S66
High-Level Session 3: Exploring Transparency and Explainability in AI: An Ethical Imperative — Doreen Bogdan-Martin: Thank you, and good morning again, ladies and gentlemen. I guess, Latifa, picking up as you were a…
S67
WS #49 Benefit everyone from digital tech equally & inclusively — – Mobile apps that provide farmers with real-time weather data and crop management advice.
S68
Keynotes — Historical Context of Technological Revolutions
S69
Collaborative AI Network – Strengthening Skills Research and Innovation — Diffusion is not about like concentrated western LLMs all together and just deploy it. It’s about actually walking the p…
S70
Panel 3 – Innovations in Submarine Cable Technology and Maintenance & Panel 4 – Legal and Regulatory Frameworks for Cable Protection — It set the stage for discussing future innovations and challenges in submarine cable technology, leading to a deeper exp…
S71
Open Forum #19 Strengthening Information Integrity on Climate Change — This intervention fundamentally challenged the panel’s framing and forced a deeper examination of cultural and ethical d…
S72
AI Meets Agriculture Building Food Security and Climate Resilien — What makes this happen? What is that secret sauce, the design principles? It is the same as DPI. What worked for DPI, we…
S73
Need and Impact of Full Stack Sovereign AI by CoRover BharatGPT — “An interesting fact is that most of the AI models in the world work in English”[41]. “But your AI model works in Indian…
S74
WS #119 AI for Multilingual Inclusion — – Encouraging learning and use of multiple languages Athanase Bahizire: Thank you so much. Very good question. Actually…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Nandan Nilekani
3 arguments, 171 words per minute, 531 words, 185 seconds
Argument 1
100 diffusion pathways goal
EXPLANATION
Nandan announced an ambitious target to create 100 diffusion pathways for positive AI use by 2030, aiming to spread AI benefits globally across sectors and countries. The goal is presented as a collective effort involving multiple partners.
EVIDENCE
He stated that the coalition aims for “100 diffusion pathways by 2030” and that this goal was announced recently, with partners such as Anthropic, Google, the Gates Foundation and UNDP joining the coalition [15-22].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The 100 diffusion pathways target is repeatedly referenced in the discussion, with Nandan announcing it and partners joining the coalition [S19] and it being highlighted as a clarion call for global AI scaling [S21], as well as in the fireside conversation on diffusion and implementation [S22].
MAJOR DISCUSSION POINT
Goal setting for AI diffusion
AGREED WITH
Shankar Maruwada, Irina Ghose, Trevor Mundel
Argument 2
Pathways compress learning curves, cost and risk, making large‑scale adoption feasible
EXPLANATION
Nandan described pathways as mechanisms that accelerate implementation by reducing time, cost, and risk, allowing others to replicate successes quickly. He highlighted how earlier projects took nine months, then three months, then three weeks, illustrating the compression effect.
EVIDENCE
He explained that “once you have a pathway, then you can get, somebody else can get to the same point quicker” and gave the example of implementation times dropping from nine months to three weeks across different projects [13-15].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Nandan’s illustration of implementation time dropping from nine months to three weeks is documented in the agriculture case study, showing the compression effect of pathways [S19]; the broader notion of diffusion as a general-purpose technology that starts from the user is discussed in the fireside conversation [S20].
MAJOR DISCUSSION POINT
Efficiency of diffusion pathways
AGREED WITH
Shankar Maruwada, Irina Ghose, Trevor Mundel
Argument 3
Inclusive, positive AI use requires safe diffusion infrastructure
EXPLANATION
Nandan emphasized that AI should be deployed in an inclusive manner so that no one is left out, and that safe diffusion infrastructure is strategic for the world. He linked inclusivity with the need for coordinated, safe pathways.
EVIDENCE
He noted that the initiatives are designed “to improve and make better people’s lives, can meet the aspirations in a very inclusive way so that everybody is in, nobody is left out” and called showing the good use of AI a strategic priority [17-18][31-32].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The emphasis on inclusivity and safe diffusion appears in the agriculture scaling discussion, which stresses feedback loops and inclusive design [S15]; the fireside conversation also links safe diffusion infrastructure to strategic priorities [S22].
MAJOR DISCUSSION POINT
Inclusivity and safety in AI diffusion
Shankar Maruwada
4 arguments, 133 words per minute, 1438 words, 645 seconds
Argument 1
Pathways as shared rails for rapid replication
EXPLANATION
Shankar described diffusion pathways as shared rails that compress learning curves, cost and risk, enabling AI to be used by all of society. He contrasted this with platform or model approaches, stressing the infrastructural nature of pathways.
EVIDENCE
He said, “The diffusion infrastructure we are talking about creating isn’t a platform app or model. It’s shared rails that compress learning curves, cost and risk” [44-47].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Shankar’s description of pathways as “shared rails” is echoed in the panel summary that contrasts his distributed approach with centralized hubs [S3]; the notion of shared infrastructure for rapid replication is also highlighted in the discussion on diffusion pathways [S24].
MAJOR DISCUSSION POINT
Infrastructure for AI diffusion
AGREED WITH
Nandan Nilekani, Irina Ghose, Trevor Mundeli
DISAGREED WITH
Trevor Mundel
Argument 2
Balancing rapid diffusion (100 pathways) with safety safeguards is critical where lives are at stake
EXPLANATION
Shankar raised the tension between the urgency of scaling AI quickly through 100 pathways and the need to ensure safety, especially in high‑stakes domains like health. He asked where the line should be drawn between speed and safety.
EVIDENCE
He asked, “When lives are at stake, where do you draw the line between speed (100 pathways to 2030) and safety?”, highlighting the trade-off [265-267].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The tension between speed and safety is reflected in the risk-control perspective on AI market adoption [S28] and the call for auditable, transparent systems in high-stakes domains [S32]; the 100 pathways agenda provides the speed dimension [S19].
MAJOR DISCUSSION POINT
Speed vs. safety in AI scaling
AGREED WITH
Trevor Mundel
DISAGREED WITH
Trevor Mundel
Argument 3
Political‑economic tension around wealth distribution and workforce impacts must be managed
EXPLANATION
Shankar identified political‑economic challenges, such as how wealth generated by AI and automation should be distributed and the impact on the workforce, as a key issue for governments to address when scaling AI.
EVIDENCE
He framed the issue as a “political economy issue” and asked about the hardest political or economic challenge for AI implementation, pointing to concerns about wealth distribution and workforce changes [285-287].
MAJOR DISCUSSION POINT
Political economy of AI
Argument 4
Universal language and standards allow AI to plug into pathways across sectors
EXPLANATION
Shankar argued that a universal language or protocol, similar to UPI for payments, would enable AI tools to integrate seamlessly with diverse pathways, reducing the need for bespoke development each time.
EVIDENCE
He referenced the ubiquity of UPI and suggested a “universal language which accesses the tools as well as the data” to make AI integration easier across sectors [246-250].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The need for a universal language or protocol is supported by the discussion of language barriers and translation challenges in government data projects [S33] and by the broader call for foundational resources to enable AI democratization [S23].
MAJOR DISCUSSION POINT
Interoperability standards
AGREED WITH
Irina Ghose
Irina Ghose
4 arguments, 163 words per minute, 1288 words, 473 seconds
Argument 1
Contextual, workflow‑embedded diffusion is essential
EXPLANATION
Irina stressed that for AI to diffuse at scale it must be presented in the local language, fit naturally into users’ daily workflows, and be iteratively refined. These factors make AI intuitive rather than a specialized scientific tool.
EVIDENCE
She listed three requirements: contextual to the local language, embedded in existing workflow, and iterative improvement [60-62].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The requirement that diffusion be contextual, workflow-integrated, and iterative is explicitly mentioned in the panel summary on diffusion pathways [S3] and reinforced by the agriculture case study’s emphasis on contextualisation [S19].
MAJOR DISCUSSION POINT
Design criteria for AI diffusion
AGREED WITH
Shankar Maruwada
Argument 2
AI must be contextual to local language, fit existing workflows, and be iteratively improved
EXPLANATION
Reiterating her earlier point, Irina highlighted that AI adoption hinges on language localisation, seamless workflow integration, and continuous iteration to stay relevant for end‑users such as teachers, health workers, and small business owners.
EVIDENCE
She gave examples of teachers in Bihar, health workers in Coimbatore, and small business leaders in Indore needing AI that is intuitive and embedded in their daily tasks [58-62].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Irina’s three design criteria are mirrored in the discussion of language localisation and workflow embedding for teachers, health workers, and small businesses [S19] and in the broader statement that diffusion must become contextual and iterative [S3].
MAJOR DISCUSSION POINT
Localization and workflow integration
AGREED WITH
Shankar Maruwada
Argument 3
Failure is gradual loss of relevance; maintain domain‑specific data and language support
EXPLANATION
Irina described that AI systems rarely fail abruptly; instead they lose relevance as users stop interacting with them. Maintaining domain‑specific datasets and language support is essential to prevent this slow decay.
EVIDENCE
She noted that “failure never happens with a big bang, it just slowly dies because people just start reducing the level of interaction” and emphasized the need for contextual, domain-specific data and language support [169-170].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The observation that AI systems fail gradually as users stop interacting, and the importance of domain-specific data and language support, are documented in the agriculture diffusion analysis [S19] and in the language-translation challenges noted for government data projects [S33].
MAJOR DISCUSSION POINT
Failure modes in AI diffusion
Argument 4
Introduce a Model Context Protocol (MCP) as a universal “adapter” for AI tools
EXPLANATION
Irina introduced the Model Context Protocol, a standard that would allow AI models to be built once and reused across applications, similar to how UPI standardized digital payments. MCP aims to simplify integration and reduce duplication of effort.
EVIDENCE
She explained that MCP is “to AI what UPI was to payments”, enabling developers to make tools MCP-ready once and then reuse them without rewriting code [250-254].
MAJOR DISCUSSION POINT
Standardisation for AI integration
AGREED WITH
Esther Dweck
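The “build once, reuse everywhere” idea behind the UPI analogy can be illustrated with a minimal sketch. The `ToolRegistry` class and its method names below are hypothetical, chosen only to show the adapter pattern; they are not the actual Model Context Protocol SDK API.

```python
# Illustrative sketch of an adapter-style protocol: a tool is described and
# registered once, then any client that speaks the shared interface can call
# it. ToolRegistry and its methods are hypothetical names for this example.

from typing import Callable, Dict


class ToolRegistry:
    """A single place where a tool is registered once and reused everywhere."""

    def __init__(self) -> None:
        self._tools: Dict[str, Callable[..., str]] = {}

    def register(self, name: str, fn: Callable[..., str]) -> None:
        self._tools[name] = fn

    def call(self, name: str, **kwargs) -> str:
        # Any downstream application invokes the tool through this uniform
        # entry point, without knowing how it is implemented.
        return self._tools[name](**kwargs)


# The tool is written once...
def crop_price(crop: str) -> str:
    prices = {"wheat": "2275 INR/quintal"}
    return prices.get(crop, "unknown")


registry = ToolRegistry()
registry.register("crop_price", crop_price)

# ...and reused by any client that speaks the shared protocol,
# much as any UPI app can reach any participating bank.
print(registry.call("crop_price", crop="wheat"))  # prints 2275 INR/quintal
```

The design point the sketch makes is the one attributed to Irina: the tool author writes `crop_price` once, and the registry, not bespoke per-application glue code, mediates every call.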
Trevor Mundel
4 arguments, 167 words per minute, 1117 words, 399 seconds
Argument 1
Scaling hubs to aggregate fragmented pilots and provide funding
EXPLANATION
Trevor described the creation of scaling hubs that act as aggregation points for numerous AI pilots, offering funding and coordination to overcome fragmentation and accelerate national rollout.
EVIDENCE
He outlined the hubs in India and Africa, their role in consolidating pilots, providing government-level funding, and reducing fragmentation that hinders scaling [84-99].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Trevor’s proposal for centralized scaling hubs that aggregate pilots and channel funding is described in the panel overview of scaling mechanisms [S3] and contrasted with distributed approaches in the discussion on hub versus shared-rail models [S24].
MAJOR DISCUSSION POINT
Institutional mechanisms for scaling
AGREED WITH
Nandan Nilekani, Shankar Maruwada, Irina Ghose
DISAGREED WITH
Shankar Maruwada
Argument 2
Centralised “scaling hubs” reduce fragmentation and accelerate national rollout
EXPLANATION
He reiterated that centralised hubs can channel diffusion into centres of excellence, allowing governments to scale AI solutions more rapidly and with less risk than a scattered pilot approach.
EVIDENCE
He emphasized that “we don’t want to inhibit diffusion” but that channeling it into hubs “is a way that we are really going to get to scale more rapidly” [96-99].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The benefit of centralized hubs for reducing fragmentation and speeding national rollout is highlighted in the same panel summary on scaling hubs [S3] and in the commentary on hub-based scaling versus scattered pilots [S24].
MAJOR DISCUSSION POINT
Reducing fragmentation
Argument 3
AI systems must be auditable and transparent, especially in high‑stakes health applications
EXPLANATION
Trevor argued that AI used in health must be auditable and provide clear reasoning, as black‑box recommendations are insufficient for clinical decision‑making. Transparency is needed for trust and accountability.
EVIDENCE
He stated that “a black box system that gives a health recommendation is almost never adequate” and that systems need to be auditable, allowing clinicians to trace why a recommendation was made [274-281].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The need for auditability and transparency in health AI is emphasized in the practical pathways discussion on responsible AI at scale [S32] and in the risk-control perspective on AI adoption [S28].
MAJOR DISCUSSION POINT
Auditability and transparency
Argument 4
Modular, interoperable infrastructure (e.g., OpenAgriNet) demonstrates how components can be combined for scale
EXPLANATION
Trevor highlighted OpenAgriNet as a modular, adaptable platform that brings together various components to provide personalized agricultural assistance, illustrating a model that could be replicated in health.
EVIDENCE
He described OpenAgriNet as “modular, the way that you can adapt it to the local circumstances” and praised its ability to deliver personalized information to smallholder farmers [185-187].
MAJOR DISCUSSION POINT
Modular infrastructure for scaling
Esther Dweck
5 arguments, 180 words per minute, 1938 words, 643 seconds
Argument 1
Procurement reform, digital infrastructure and data governance enable scaling
EXPLANATION
Esther argued that transforming procurement practices, strengthening digital infrastructure, and establishing robust data governance are essential for scaling AI within the public sector.
EVIDENCE
She detailed the need to change procurement to focus on outcomes rather than lowest price, highlighted Brazil’s digital ID platform (gov.br) and digital infrastructure, and stressed data governance reforms including a new decree and chief data officers [128-133][144-150].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Esther’s points on reforming procurement, strengthening digital infrastructure, and establishing data-governance frameworks are covered in the panel overview of governance and procurement challenges [S3].
MAJOR DISCUSSION POINT
Enabling environment for AI scaling
Argument 2
Outcome‑oriented, risk‑tolerant procurement and robust digital ID platforms are needed
EXPLANATION
She emphasized shifting procurement from a risk‑averse, price‑focused model to one that values outcomes and tolerates managed risk, while leveraging digital ID systems to personalize services and support AI deployment.
EVIDENCE
She explained that current procurement seeks lowest risk and price, which stifles innovation, and advocated for a policy-oriented approach; she also referenced Brazil’s digital ID platform (gov.br) as a foundation for AI-enabled personalized services [128-133][145-149].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The shift toward outcome-based, risk-tolerant procurement and the use of digital ID platforms for personalized services are discussed in the same governance panel featuring Esther [S3].
MAJOR DISCUSSION POINT
Procurement and digital identity for AI
Argument 3
Reform procurement to focus on outcomes, accept managed risk, and foster innovation culture
EXPLANATION
Esther described a shift in procurement mindset toward outcome‑based evaluation, acceptance of some risk, and collaboration with suppliers to build an innovation‑friendly culture within government.
EVIDENCE
She noted the move from process-oriented to policy-oriented procurement, the need to allow failure as part of innovation, and the importance of interacting with suppliers during procurement [128-140].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Esther’s call for outcome-focused procurement, managed risk acceptance, and an innovation-friendly culture appears in the panel summary on procurement reform and digital sovereignty [S3].
MAJOR DISCUSSION POINT
Innovation‑friendly procurement
Argument 4
Strengthen digital sovereignty through resident clouds, data localisation and chief data officers
EXPLANATION
Esther highlighted Brazil’s efforts to increase digital sovereignty by establishing resident cloud services, bringing data back to Brazil, and appointing chief data officers to oversee data use and security.
EVIDENCE
She discussed resident clouds owned by federal companies, the goal of bringing data back to Brazil, and the creation of chief data officer roles as part of a new data governance decree [290-304].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The discussion of resident cloud services, data localisation, and the creation of chief data officer roles as part of Brazil’s digital sovereignty strategy is included in the governance and digital sovereignty segment of the panel [S3].
MAJOR DISCUSSION POINT
Digital sovereignty
AGREED WITH
Irina Ghose
DISAGREED WITH
Irina Ghose
Argument 5
Building capacity by training civil servants in digital and AI skills is essential
EXPLANATION
Esther stressed that a skilled civil service is crucial for state transformation, describing a training programme that targets managers, IT experts, data controllers, and regular staff to develop digital and AI competencies.
EVIDENCE
She outlined four training tracks for different civil-servant roles and emphasized the need to give them a “digital mind” to use AI in everyday work [221-227].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Esther’s emphasis on civil-service capacity building through multi-track training programmes is highlighted in the panel overview of capacity development for AI scaling [S3].
MAJOR DISCUSSION POINT
Capacity development for AI
Agreements
Agreement Points
Structured diffusion mechanisms (pathways, shared rails, hubs, protocols) accelerate AI scaling and reduce implementation time
Speakers: Nandan Nilekani, Shankar Maruwada, Irina Ghose, Trevor Mundeli
100 diffusion pathways goal
Pathways compress learning curves, cost and risk, making large‑scale adoption feasible
Pathways as shared rails for rapid replication
Introduce a Model Context Protocol (MCP) as a universal “adapter” for AI tools
Scaling hubs to aggregate fragmented pilots and provide funding
All four speakers stress that having pre-defined diffusion pathways, whether framed as shared rails, scaling hubs or a universal model context protocol, dramatically shortens deployment cycles and lowers cost and risk, enabling rapid, large-scale AI adoption. Nandan illustrates the time compression from nine months to three weeks [13-15]; Shankar describes pathways as shared rails that compress learning curves [44-47]; Irina proposes the MCP as a universal adapter to reuse AI components across applications [250-254]; Trevor outlines scaling hubs that aggregate pilots and channel funding to overcome fragmentation [84-99].
POLICY CONTEXT (KNOWLEDGE BASE)
This view aligns with calls for modular, interoperable standards that can be adapted across sectors to speed AI deployment, as advocated in global AI standards initiatives and demonstrated by Open AgriNet’s modular infrastructure model [S43][S44].
AI diffusion must be contextual, language‑localised and embedded in everyday workflows
Speakers: Irina Ghose, Shankar Maruwada
Contextual, workflow‑embedded diffusion is essential
AI must be contextual to local language, fit existing workflows, and be iteratively improved
Universal language and standards allow AI to plug into pathways across sectors
Irina emphasizes that AI must be delivered in the local language, fit users’ daily workflows and evolve iteratively [60-62]; Shankar adds that a universal language or protocol, akin to UPI for payments, is needed so AI can integrate seamlessly across sectors [246-250]. Both agree that localisation and workflow integration are prerequisites for successful diffusion.
Safety, auditability and transparency are critical when AI is applied in high‑stakes domains
Speakers: Trevor Mundeli, Shankar Maruwada
AI systems must be auditable and transparent, especially in health applications
Balancing rapid diffusion (100 pathways) with safety safeguards is critical where lives are at stake
Trevor argues that health AI must be auditable and provide clear reasoning to earn trust [274-281]; Shankar raises the tension between speed of diffusion and safety, asking where the line should be drawn for life-critical uses [265-267]. Both converge on the need for robust safety and audit mechanisms.
POLICY CONTEXT (KNOWLEDGE BASE)
The emphasis on safety and transparency reflects the UN AI Security Council’s focus on algorithmic transparency and rigorous testing, as well as broader AI governance discussions stressing explainability and auditability [S54][S56][S55].
Robust data governance and digital sovereignty are foundational for AI scaling
Speakers: Esther Dweck, Irina Ghose
Strengthen digital sovereignty through resident clouds, data localisation and chief data officers
Introduce a Model Context Protocol (MCP) as a universal “adapter” for AI tools
Esther outlines Brazil’s push for digital sovereignty via resident clouds, data localisation and governance structures [290-304]; Irina’s MCP aims to make data AI-ready and interoperable across applications [250-254]. Both see strong data governance and sovereignty as essential enablers for AI diffusion.
POLICY CONTEXT (KNOWLEDGE BASE)
This consensus mirrors policy debates on digital sovereignty and data governance, highlighted in the African Union Data Policy Framework and expert commentary on the political-economic challenges of maintaining control over national data assets [S46][S48][S50].
Similar Viewpoints
Both stress that AI must be delivered in local languages and integrated into existing workflows, and that a universal protocol (like MCP/UPI) is needed to enable seamless integration across sectors [60-62][246-250].
Speakers: Irina Ghose, Shankar Maruwada
Contextual, workflow‑embedded diffusion is essential
Universal language and standards allow AI to plug into pathways across sectors
Both highlight the necessity of robust, sovereign data infrastructures and interoperable standards to make data AI‑ready and support large‑scale diffusion [290-304][250-254].
Speakers: Esther Dweck, Irina Ghose
Strengthen digital sovereignty through resident clouds, data localisation and chief data officers
Introduce a Model Context Protocol (MCP) as a universal “adapter” for AI tools
Both agree that safety and auditability cannot be compromised in rapid AI diffusion, especially for high‑risk sectors like health [274-281][265-267].
Speakers: Trevor Mundeli, Shankar Maruwada
AI systems must be auditable and transparent, especially in health applications
Balancing rapid diffusion (100 pathways) with safety safeguards is critical where lives are at stake
Both describe diffusion pathways as shared infrastructure that dramatically reduces implementation time and risk, enabling faster scaling [13-15][44-47].
Speakers: Nandan Nilekani, Shankar Maruwada
Pathways compress learning curves, cost and risk, making large‑scale adoption feasible
Pathways as shared rails for rapid replication
Unexpected Consensus
Modular, interoperable infrastructure as a key scaling strategy across sectors
Speakers: Trevor Mundeli, Esther Dweck
Modular, interoperable infrastructure (e.g., OpenAgriNet) demonstrates how components can be combined for scale
Strengthen digital sovereignty through resident clouds, data localisation and chief data officers
While Trevor focuses on a modular, adaptable platform for agriculture (OpenAgriNet) and Esther on sovereign, resident cloud infrastructure for government services, both converge on the principle that modular, interoperable technical foundations are essential for scaling AI across diverse domains-a consensus that bridges private-sector pilots and national digital sovereignty strategies [185-187][290-304].
POLICY CONTEXT (KNOWLEDGE BASE)
Recommendations for modular, interoperable infrastructure echo the development of technology-neutral, adaptable standards systems for AI growth and the successful modular public-infrastructure example of Open AgriNet [S43][S44][S45].
Overall Assessment

The panel shows strong convergence on four pillars: (1) the need for structured diffusion pathways or hubs to accelerate AI rollout; (2) the necessity of localisation, language support and workflow integration; (3) the imperative of safety, auditability and transparency in high‑risk applications; and (4) the foundational role of robust data governance and digital sovereignty. These agreements cut across public‑private, sectoral and national boundaries, indicating a shared vision for coordinated, safe and inclusive AI diffusion.

High consensus – most speakers, from government, foundations and industry, articulate compatible strategies, suggesting that future policy and technical work is likely to be coordinated around these shared principles, enhancing prospects for effective, inclusive AI deployment by 2030.

Differences
Different Viewpoints
Centralised scaling hubs vs distributed shared‑rail diffusion pathways
Speakers: Shankar Maruwada, Trevor Mundeli
Pathways as shared rails for rapid replication
Scaling hubs to aggregate fragmented pilots and provide funding
Shankar describes diffusion pathways as “shared rails that compress learning curves, cost and risk” and stresses that the infrastructure is not a platform but a common rail for all sectors [44-47]. Trevor proposes creating “scaling hubs” that act as aggregation points for many pilots, providing government-level funding and reducing fragmentation to accelerate national rollout [84-99]. The two approaches differ: Shankar favours a distributed, standards-based rail model, while Trevor advocates a more centralised hub model.
POLICY CONTEXT (KNOWLEDGE BASE)
The tension between centralised hubs and distributed pathways reflects the broader integration versus fragmentation debate in digital public infrastructure, as raised in the DNS Trust Horizon discussion [S45].
Procurement risk‑aversion versus outcome‑oriented, risk‑tolerant procurement for AI innovation
Speakers: Irina Ghose, Esther Dweck
AI diffusion requires all‑in commitment and iterative rollout embedded in existing workflows
Procurement reform, shifting from lowest‑price, lowest‑risk to outcome‑oriented, risk‑tolerant approaches
Irina stresses that diffusion needs “all in” commitment and that innovators must embed AI into daily workflows, implying a willingness to experiment and accept early-stage errors [56-62]. Esther argues that civil servants avoid innovation because “the auditing body will come after me” and that procurement must move from “lowest price” to a policy-oriented, outcome-focused mindset that tolerates managed risk [128-133][138-140]. The tension lies in how much risk civil servants and innovators should accept during diffusion.
Speed of scaling (100 pathways by 2030) versus safety and auditability in high‑stakes domains
Speakers: Shankar Maruwada, Trevor Mundeli
Balancing rapid diffusion (100 pathways) with safety safeguards is critical where lives are at stake
AI systems must be auditable and transparent, especially in health applications
Shankar raises the trade-off, asking “When lives are at stake where do you draw the line between speed 100 pathways to 2030 and safety” [265-267]. Trevor responds that while urgency is high, AI recommendations must be auditable and transparent, noting that “a black box system … is almost never adequate” and that clinicians need to trace why a recommendation was made [274-281]. The disagreement is over the acceptable balance between rapid deployment and rigorous safety controls.
Universal model‑context protocol (MCP) versus national digital sovereignty and data localisation
Speakers: Irina Ghose, Esther Dweck
Introduce a Model Context Protocol (MCP) as a universal “adapter” for AI tools
Strengthen digital sovereignty through resident clouds, data localisation and chief data officers
Irina proposes the Model Context Protocol, likening it to UPI, to allow AI tools to be built once and reused across applications without rewriting code [250-254]. Esther emphasizes Brazil’s push for digital sovereignty, describing resident clouds, bringing data back to Brazil, and appointing chief data officers as part of a new data-governance decree [290-304]. The two positions differ on openness: Irina pushes for a cross-border universal standard, while Esther stresses national control over data and infrastructure.
POLICY CONTEXT (KNOWLEDGE BASE)
This clash parallels policy discussions on cross-border data free flow versus localisation, noted in IGF 2023’s trust-focused data-free-flow framework and the AU’s nuanced stance on data localisation as a sovereignty tool [S41][S46][S47][S48].
Unexpected Differences
Irina’s emphasis on language localisation and universal protocol versus Esther’s focus on national data sovereignty
Speakers: Irina Ghose, Esther Dweck
AI must be contextual to local language, workflow‑embedded and iterative
Strengthen digital sovereignty through resident clouds, data localisation and chief data officers
While both discuss localisation, Irina sees multilingual support and a universal protocol (MCP) as a way to accelerate diffusion across borders, whereas Esther prioritises keeping data within national borders and building sovereign cloud capacity. The clash between cross‑border standardisation and national data sovereignty was not anticipated given the otherwise collaborative tone of the panel.
POLICY CONTEXT (KNOWLEDGE BASE)
The disagreement underscores the same sovereignty versus standardisation dilemma, with policy literature highlighting the need to balance national data control against interoperable protocols for AI diffusion [S46][S48][S50].
Overall Assessment

The panel shows strong consensus on the need for diffusion pathways and inclusive AI, but substantive disagreements emerge around the architecture for scaling (distributed rails vs central hubs), the degree of procurement risk tolerance, the balance between speed and safety, and the tension between universal standards and national digital sovereignty.

Moderate to high: While participants share common objectives, the divergent views on implementation mechanisms could impede coordinated action unless reconciled. The implications are that without alignment on scaling models, procurement reforms, safety standards, and data governance, the 100‑pathway target may face fragmentation, slower adoption, or regulatory friction.

Partial Agreements
All participants endorse the overarching aim of creating diffusion pathways to spread AI benefits at scale and agree that some form of enabling infrastructure (whether shared rails, hubs, or reforms) is needed. They differ on the precise mechanism, but share the goal of rapid, inclusive AI diffusion [15-16][44-47][60-62][84-99][128-133].
Speakers: Nandan Nilekani, Shankar Maruwada, Irina Ghose, Trevor Mundeli, Esther Dweck
100 diffusion pathways goal
Pathways compress learning curves, cost and risk, making large‑scale adoption feasible
Contextual, workflow‑embedded diffusion is essential
Scaling hubs to aggregate fragmented pilots and provide funding
Procurement reform, digital infrastructure and data governance enable scaling
Takeaways
Key takeaways
The concept of “diffusion pathways” is central: shared, reusable rails that compress learning curves, cost and risk, enabling rapid replication of AI solutions for public good.
A global coalition aims to create 100 diffusion pathways by 2030, involving governments, foundations, and tech firms (e.g., Anthropic, Google, Gates Foundation, UNDP).
Successful scaling requires AI to be contextual (local language, domain‑specific data), embedded in existing workflows, and continuously iterated.
Fragmented pilots hinder scale; “scaling hubs” in India and Africa are proposed to aggregate pilots, provide funding, and act as centers of excellence.
Public‑sector scaling depends on reforms to procurement (outcome‑oriented, risk‑tolerant), robust digital infrastructure (digital IDs, service platforms), and strong data‑governance frameworks.
Safety and auditability are non‑negotiable for high‑stakes applications (health, agriculture); models must be transparent and auditable.
Interoperability standards such as the Model Context Protocol (MCP) are needed so AI components can plug into pathways across sectors and countries.
Building digital sovereignty (resident clouds, data localisation, chief data officers) and capacity‑building for civil servants are essential for sustainable adoption.
Political and economic challenges include managing wealth distribution from AI‑driven productivity and ensuring inclusive, equitable outcomes.
Resolutions and action items
Launch of a global coalition to develop 100 diffusion pathways by 2030.
Establishment of scaling hubs in India and several African nations (Rwanda, Nigeria, Senegal, Kenya) to fund and coordinate large‑scale roll‑outs.
Brazil’s INSPIRE (AI for Public Service with Innovation, Responsibility, and Ethics) program to create institutional arrangements, data‑sovereignty mechanisms, and civil‑servant training.
Announcement of a forthcoming Brazilian decree on data governance, mandating chief data officers in ministries.
Development and promotion of Anthropic’s Model Context Protocol (MCP) as a universal adapter for AI tools.
Commitment to train civil servants at multiple levels (managers, IT experts, data stewards, general staff) on digital and AI competencies.
Agreement to continue sharing best‑practice pathways (e.g., Maharashtra, Ethiopia, Amul) to accelerate future implementations.
Unresolved issues
How to precisely balance rapid diffusion (the 100‑pathway target) with rigorous safety and auditability standards, especially in health applications.
Specific details of outcome‑oriented procurement policies and how to institutionalise managed‑risk approaches across diverse government agencies.
Concrete steps to achieve full digital sovereignty for countries that currently rely on foreign cloud providers.
Mechanisms for ongoing monitoring and evaluation of diffusion pathways to ensure they remain inclusive and do not create new inequities.
Long‑term governance model for the global coalition: decision‑making processes, funding responsibilities, and accountability.
Suggested compromises
Adopt an outcome‑oriented, policy‑focused procurement model that tolerates managed risk rather than insisting on lowest‑price, lowest‑risk contracts.
Use scaling hubs as focal points for diffusion while still allowing decentralized, “random” diffusion to preserve innovation diversity.
Pursue incremental digital sovereignty (resident clouds, data localisation) rather than an all‑or‑nothing approach, acknowledging current dependencies.
Implement modular, interoperable standards (e.g., MCP) to allow different AI solutions to plug into existing pathways without forcing a single vendor or architecture.
Thought Provoking Comments
We went from nine months to three months to three weeks by learning from lived experience; we call these ‘pathways’ that let others reach the same point faster, aiming for 100 diffusion pathways by 2030 to spread positive AI use.
Introduces the concrete concept of ‘diffusion pathways’ and demonstrates how iterative learning dramatically accelerates AI deployment, framing the whole panel around a measurable global ambition.
Sets the agenda for the discussion, prompting other speakers to define what pathways mean in practice, leading to Shankar’s historical analogy, Irina’s criteria for diffusion, and Trevor’s scaling‑hub proposal.
Speaker: Nandan Nilekani
The crucial ingredient in past industrial revolutions was not better inventions but diffusion – the spread of know‑how, trust and institutional capability that lets societies adopt technology at scale.
Reframes the conversation from technology creation to systematic diffusion, linking historical lessons to AI and emphasizing the need for structured pathways.
Creates a turning point that shifts the panel from describing projects to discussing mechanisms of spread; it directly elicits Irina’s focus on contextualisation and Trevor’s scaling‑hub concept.
Speaker: Shankar Maruwada
AI deployment rarely fails because of model performance; it fails because of perceived complexity. For diffusion we need (1) local language context, (2) integration into existing workflows, and (3) an iterative, user‑centric approach.
Distills the practical barriers to scaling AI into three clear, actionable dimensions, moving the debate from high‑level ambition to on‑the‑ground implementation details.
Guides the subsequent dialogue toward concrete requirements—language support, workflow embedding, and iterative design—prompting Shankar’s UPI analogy and Esther’s procurement reforms.
Speaker: Irina Ghose
We are creating ‘scaling hubs’ in partnership with governments to aggregate fragmented pilots, provide funding, and act as centers of excellence that channel diffusion rather than letting it remain random and scattered.
Identifies fragmentation as a major barrier and proposes a concrete institutional solution, bridging the gap between pilot projects and national scale.
Leads the conversation to discuss how to organise diffusion pathways, influencing Esther’s remarks on institutional change and reinforcing Shankar’s point about the stress inherent in fixed pathways.
Speaker: Trevor Mundeli
In government procurement we must shift from lowest‑price, lowest‑risk buying to outcome‑oriented, policy‑focused procurement that accepts failure as part of innovation, while also strengthening digital infrastructure and data governance.
Highlights systemic bureaucratic obstacles and offers a transformative approach to public‑sector innovation, linking procurement, infrastructure, and governance.
Triggers a deeper examination of institutional barriers, prompting Shankar to ask about the hardest political/economic challenges and leading others to discuss safety, data sovereignty, and the need for new procurement mindsets.
Speaker: Esther Dweck
For technology to work at population scale it must become ‘boring’—invisible and taken for granted, like UPI for payments; only when AI is no longer seen as magic does true diffusion occur.
Uses a vivid metaphor to capture the end goal of diffusion, emphasizing user experience over technical novelty and setting a benchmark for AI adoption.
Shifts the tone from aspirational to pragmatic, inspiring Irina’s proposal of a universal ‘model context protocol’ and reinforcing the need for seamless integration discussed earlier.
Speaker: Shankar Maruwada
We’ve created a ‘model context protocol’ (MCP) – a universal language for AI models, analogous to UPI for payments, so developers can build once and plug into any downstream application without rewriting code.
Proposes a technical standard that could operationalise the diffusion pathways, turning the abstract idea of “rails” into a concrete interoperable protocol.
Extends the earlier UPI analogy, prompting discussion on standardisation across sectors and countries, and aligning with Trevor’s call for auditable, modular systems.
Speaker: Irina Ghose
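The “build once, plug into any downstream application” idea behind a model context protocol can be illustrated with a toy sketch. This is a conceptual analogy only, not the actual MCP specification, wire format, or SDK; the registry, schema, and tool names below are hypothetical. The point is that a capability is described once in a shared format, and any host that speaks the same protocol can discover and invoke it without bespoke integration code.

```python
import json

# Hypothetical registry: a tool is described once, in a common format,
# instead of being re-integrated separately for every host application.
TOOL_REGISTRY = {}

def register_tool(name, description, handler):
    """Register a capability once with a self-describing entry (toy schema)."""
    TOOL_REGISTRY[name] = {"description": description, "handler": handler}

def handle_request(request_json):
    """Any host speaking this shared protocol can list or call tools."""
    request = json.loads(request_json)
    if request["method"] == "tools/list":
        return {name: t["description"] for name, t in TOOL_REGISTRY.items()}
    if request["method"] == "tools/call":
        tool = TOOL_REGISTRY[request["name"]]
        return tool["handler"](**request.get("arguments", {}))
    raise ValueError("unknown method")

# Registered once; usable from any compliant host without rewriting code.
register_tool("crop_advice", "Advice for a crop and region (toy example)",
              lambda crop, region: f"Guidance for {crop} in {region}")

print(handle_request(json.dumps({"method": "tools/list"})))
print(handle_request(json.dumps(
    {"method": "tools/call", "name": "crop_advice",
     "arguments": {"crop": "wheat", "region": "Maharashtra"}})))
```

In this sketch the UPI analogy maps to the shared request format: as long as host and tool agree on the protocol, neither needs to know anything else about the other.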
AI systems, especially in health, must be auditable and transparent; a black‑box recommendation is never sufficient—users need to trace why a decision was made, similar to questioning a human clinician.
Emphasises safety and accountability, introducing a critical dimension to scaling AI in high‑stakes domains and linking technical design to trust.
Deepens the conversation on safety, leading to Esther’s remarks on data governance and digital sovereignty, and reinforcing the need for robust diffusion pathways that embed auditability.
Speaker: Trevor Mundeli
Overall Assessment

The discussion was driven forward by a series of pivotal insights that moved the panel from high‑level ambition to concrete mechanisms for scaling AI responsibly. Nandan’s diffusion‑pathway vision provided the overarching goal, while Shankar’s historical analogy reframed the challenge as one of systematic spread rather than invention. Irina’s three‑pronged diffusion criteria and Trevor’s scaling‑hub proposal supplied actionable levers, prompting Esther to expose the bureaucratic bottlenecks in procurement and data governance. The recurring UPI metaphor and Irina’s model‑context protocol anchored the abstract idea of pathways in tangible, interoperable standards. Finally, Trevor’s emphasis on auditable AI introduced the essential safety dimension, ensuring that speed does not eclipse trust. Collectively, these comments redirected the conversation toward institutional design, technical standardisation, and governance, shaping a nuanced roadmap for achieving the 100 diffusion pathways by 2030.

Follow-up Questions
How can progress toward the goal of 100 diffusion pathways by 2030 be measured and tracked?
Nandan announced the 100 diffusion pathways target and a global coalition, but did not specify metrics or monitoring mechanisms, indicating a need for research on measurement frameworks.
Speaker: Nandan Nilekani
What are the most effective methods to assess the return on investment (ROI) of language localization for AI models in diverse Indian languages?
Irina emphasized contextual language and ROI when adding new Indian languages, suggesting further study on how to quantify benefits of language support.
Speaker: Irina Ghose
How can the modular infrastructure of Open AgriNet be adapted for personal health assistants in low‑ and middle‑income countries?
Trevor expressed interest in replicating the agricultural AI model for health, indicating a research gap in transferring the approach to the health sector.
Speaker: Trevor Mundeli
What privacy‑preserving, verifiable age‑verification mechanisms can be deployed at scale to protect children online while respecting digital sovereignty?
Esther described Brazil’s new age‑verification law and the challenge of balancing privacy with protection, highlighting a need for technical solutions and policy research.
Speaker: Esther Dweck
What standards and protocols are needed for a universal "model context protocol" to enable seamless AI integration across sectors and countries?
Irina introduced the Model Context Protocol (MCP) as a universal adapter, but its design, adoption, and governance require further investigation.
Speaker: Irina Ghose
What frameworks and tools are required to make AI recommendations auditable and transparent, especially in high‑stakes health applications?
Trevor stressed the necessity of auditability for AI health recommendations, pointing to a research need for robust auditing frameworks.
Speaker: Trevor Mundeli
How effective are scaling hubs in aggregating fragmented AI pilots and accelerating national‑scale deployment, and what best practices can be identified?
Trevor described scaling hubs as a solution to fragmentation but did not provide evidence of impact, suggesting a need to study their efficacy.
Speaker: Trevor Mundeli
How can governments balance rapid AI deployment (speed) with safety and ethical safeguards in life‑critical domains?
Trevor highlighted the tension between speed of diffusion pathways and safety in health, indicating a need for policy and risk‑management research.
Speaker: Trevor Mundeli
How can digital sovereignty be increased while maintaining interoperability with global AI services, and what governance models support this?
Esther discussed Brazil’s push for digital sovereignty and the challenges of data location and control, calling for research on sovereign yet interoperable architectures.
Speaker: Esther Dweck
What capacity‑building approaches are most effective for upskilling civil servants in AI and digital mindsets, especially those with long tenure?
Esther mentioned training programs for civil servants but did not detail optimal methods, indicating a need for research on effective public‑sector AI education.
Speaker: Esther Dweck
How can public‑sector procurement processes be reformed to encourage AI innovation while managing risk and accountability?
Esther highlighted the current risk‑averse procurement culture and the need for outcome‑oriented policies, suggesting further study on procurement reform.
Speaker: Esther Dweck
What are the key components of a "digital public intelligence" system that evolves from digital public infrastructure, and how can its impact be evaluated?
Shankar projected a future shift from DPI to digital public intelligence without defining its architecture or metrics, indicating a research agenda.
Speaker: Shankar Maruwada
How can AI‑driven platforms like Blue Dot be designed to create inclusive employment opportunities across diverse economies?
Nandan referenced the Blue Dot job platform as part of diffusion pathways but did not elaborate on design or impact, pointing to a need for study on AI‑enabled job creation.
Speaker: Nandan Nilekani

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Building India's Digital and Industrial Future with AI


Session at a glance: summary, keypoints, and speakers overview

Summary

The panel examined how AI, telecom networks and data sovereignty intersect within national digital public infrastructure, emphasizing the convergence of these domains as a strategic priority [1][5-9][12-14]. Julian noted that India’s long-standing digital public infrastructure, from identity to payments, demonstrates the impact of scale, innovation and public purpose, positioning the country as a pivotal player in this space [11][15-17].


Speakers described the evolution of networks from simple connectivity to intelligent platforms that embed AI for fraud detection, digital identity verification and real-time decision making [39-41][64-66][69-71]. Rahul illustrated how Airtel’s massive BTS and fiber footprint underpins billions of UPI transactions and OTP-based services, creating a trust layer for payments and lending [51-58][61-68]. He also highlighted the rollout of sovereign-cloud capabilities that keep data and control planes within India, addressing concerns about foreign jurisdiction [235-244][247-255].


Several participants warned that parallel digital infrastructures risk fragmentation and urged open APIs, harmonised standards and blueprints to ensure interoperability and efficiency [22-25][204-209][220-224][226-230]. Deepak explained that data sovereignty must extend beyond physical localisation to include control over standards, decision-making and long-term strategy, and that collaborative contribution to international standards is essential [155-164][170-176][178-180]. Martin raised regulatory frictions as networks become AI-driven, and the panel responded that accountability, explainability and adaptable frameworks are needed to guide AI deployment [276-278][280-286][288-294].


Deepak argued that India’s open, protocol-based DPI model, free of restrictive licensing, can be adopted by other countries without costly IP constraints [318-327][330-336]. He added that diplomatic and research institutions are helping export this model, emphasizing equity, ethics and ecological efficiency [339-347]. Mansi echoed that the World Bank’s DPI blueprint, built on India’s experience, provides a flexible reference for emerging economies and encourages mobile-data-driven services such as credit scoring and planning [353-360][363-366].


The discussion concluded that coordinated standards, open interoperable infrastructure and responsible AI integration are critical for scaling trusted digital public services worldwide, with India positioned as a leading exemplar [23-25][312-314][364-368].


Keypoints

Major discussion points


Telecom networks are evolving into intelligent, AI-enabled infrastructure.


The panel stressed that networks are no longer passive data pipes but programmable layers that embed AI for real-time decision-making, identity verification, fraud prevention and emergency services. Julian described this shift to “intelligent, programmable and trusted layers” and how they now shape AI model performance and edge optimisation [14-18]; Rahul illustrated concrete services such as OTP delivery and Aadhaar-enabled payments operating in sub-2 ms [55-62]; Speaker 1 added that contextual enrichment of network data now feeds directly into banking and authentication decisions [102-108].


Digital sovereignty goes beyond data localisation.


Sovereignty was framed as strategic control over standards, AI models and the governance of the infrastructure, not merely where data is stored. Julian linked AI-driven DPI to “strategic control over the infrastructure” [19-22]; Deepak expanded the concept to include physical, administrative and citizen-choice dimensions, warning against “walls” that block two-way data flows [140-150][155-166]; Rahul highlighted practical sovereignty slices – data residency, control-plane location, operational control and jurisdictional exposure (e.g., US CLOUD Act) [235-244].


Avoiding fragmentation through global standards, open APIs and collaborative blueprints.


The speakers warned that siloed public-digital-infrastructure and private solutions create duplication and trust gaps. Julian called for “interoperability, open APIs, harmonised frameworks” to prevent fragmentation [23-25]; Martin’s question raised the risk of parallel DPI layers and the need for GSMA OpenGateway APIs [76-78]; Speaker 1 described how TSPs expose open APIs for fraud, lending and digital-identity services [119-126]; Mansi distinguished “standards” (prescriptive) from “blueprints” (flexible, best-practice guides) and advocated their use to accelerate inclusive outcomes [219-226].


Indian use-cases demonstrate scale and citizen-centric impact.


Rahul cited the UPI ecosystem processing 28 lakh crore rupees in a single month, Aadhaar-linked OTPs delivered in <2 ms, and Airtel’s suite of AI-driven spam/fraud blockers that embed trust into everyday transactions [51-62][65-70]; later he explained Airtel’s sovereign-cloud offering, emphasizing local data residency, control-plane ownership and the need to be selective about which data stays within the jurisdiction [235-250].


India’s DPI model as a blueprint for the Global South.


The discussion turned to exporting India’s open, interoperable DPI architecture to other emerging economies. Deepak highlighted the open-protocol, royalty-free nature of the Indian model and the diplomatic/soft-power mechanisms that support its diffusion [318-327][330-338]; Mansi reinforced that the World Bank’s “digital public infrastructure” reports and blueprint approach are already shaping policies in other countries, and that mobile-data-driven credit, fraud-management and planning use-cases are being replicated abroad [353-368].


Overall purpose / goal of the discussion


The session aimed to move from high-level ideas about AI-enabled telecom, digital sovereignty and standards to concrete next steps: identifying practical actions, fostering collaboration among regulators, operators and multilateral bodies, and leveraging India’s DPI experience to guide other economies at various stages of digital development [28-31].


Overall tone and its evolution


– The conversation began with a formal, optimistic opening (Julian’s keynote) that celebrated India’s achievements and set a collaborative agenda.


– As the panel progressed, the tone became more technical and diagnostic, with speakers detailing specific network capabilities, regulatory nuances, and the risks of fragmented architectures.


– Toward the end, the tone shifted to constructive and forward-looking, emphasizing global cooperation, open-source blueprints, and the potential for India’s model to empower the Global South. Throughout, the atmosphere remained collegial and solution-oriented, with occasional brief interjections from the audience.


Speakers

Julian Gorman – Role/Title: Head of APAC, GSMA; Representative from GSMA.


Area of expertise: Telecom industry, AI, digital public infrastructure. [S1][S2]


Rahul Vatts – Role/Title: Chief Regulatory Officer, Airtel.


Area of expertise: Telecom regulation, digital payments, AI, data sovereignty. [S3][S4]


Deepak Maheshwari – Role/Title: Representative, Center for Social and Economic Progress (CSEP).


Area of expertise: Digital sovereignty, data localization, policy frameworks. [S5][S6]


Speaker 1 – Role/Title: Unspecified (panelist discussing TSPs and DPI infrastructure).


Area of expertise: Telecom service providers, digital public infrastructure, open APIs. [S7][S8][S9]


Debashish Chakraborty – Role/Title: Moderator, GSMA.


Area of expertise: Telecom, AI, digital public infrastructure. [S10][S11]


Audience – Role/Title: Audience members (including professionals, academics).


Area of expertise: Varied (e.g., public administration, cybersecurity). [S12][S13][S14]


Mansi Kedia – Role/Title: Representative, World Bank.


Area of expertise: Development finance, digital public infrastructure, standards and blueprints. [S15][S16]


Additional speakers:


Martin – Role/Title: Representative, Vodafone Idea.


Area of expertise: Telecom regulation, AI-driven network platforms.


Matan – Role/Title: Unspecified participant referenced in the discussion.


Area of expertise: Contextual data, digital infrastructure.


Ambika – Role/Title: Unspecified (mentioned as intended recipient of a question).


Area of expertise: Not specified.


Vijay Agarwal – Role/Title: Audience member, jewelry manufacturer, AI enthusiast.


Area of expertise: Jewelry manufacturing, AI applications in IoT.


Full session report: Comprehensive analysis and detailed insights

The session opened with Debashish Chakraborty linking the convergence of artificial intelligence, telecommunications and data sovereignty to the broader theme of Digital Public Infrastructure (DPI) [1]. Julian Gorman, head of APAC GSMA, then set the agenda by describing GSMA’s role in uniting the mobile economy and positioning the discussion at the intersection of intelligent telecom networks and national DPI [5-9]. He highlighted India’s pioneering journey, from early identity and payment systems to today’s expansive digital commerce and data-empowerment platforms, arguing that the country now sits at a pivotal point where AI, real-time data and autonomous systems reshape the function of telecom networks [11-14][15-17].


Julian further argued that modern mobile networks have moved beyond simple connectivity to become “intelligent, programmable and trusted layers” that directly influence AI model performance, edge optimisation, fraud prevention and the security of digital identities [14-18]. The moderator echoed this, noting that networks are no longer passive carriers of data but active platforms where AI is either an add-on or embedded, enabling real-time decision-making for citizen-centric services [39-41]. Martin (Vodafone Idea) reinforced the point, explaining that converged platforms such as the Fraud-Risk Indicator (FRI) and the Digital Intelligence Platform expose contextual data via open APIs, enabling multiple operators to collaborate without siloing [112-126].


Concrete illustrations of this evolution were provided by Rahul Vatts of Airtel. He quantified the scale of India’s DPI, noting that in January the UPI system processed 28 lakh crore rupees across a billion users, underpinned by more than a million base-transceiver stations, 500 lakh km of fibre and thousands of edge data centres [51-58][61-68]. He described the ubiquitous OTP and SMS messages, delivered in under two milliseconds, as a “layer of trust” that enables secure payments and Aadhaar-linked transactions [55-62]. Rahul also outlined Airtel’s suite of AI-driven spam and fraud-blocking products, which create friction for malicious calls and thereby reinforce ecosystem trust [65-70]. Building on these operational examples, Martin highlighted the need for common standards to govern AI-driven services, noting that digital-intermediary regulations and purpose-bound data-privacy laws now raise questions about the applicability of existing frameworks [280-286].


Rahul’s “four-slice” sovereignty framework, covering data residency, control-plane location, operational control and jurisdictional exposure such as the US CLOUD Act, was presented as a practical tool for assessing sovereignty [235-256]. He added that quantum-resistant techniques are already being explored for Aadhaar and that Airtel has launched a sovereign-cloud offering [247-255], and he claimed that Airtel Cloud can handle “around 140 crore transactions per second” [258-262].


The panel highlighted different emphases – Julian focused on strategic control of infrastructure, Rahul on technical slices of sovereignty, and Deepak on citizen agency and participation in standards bodies [19-20][235-256][157-176]. Deepak expanded the sovereignty discussion by distinguishing three layers: physical/administrative control for sensitive data, citizen-driven choice for personal data, and active participation in global standard-setting bodies, warning that “walls” blocking two-way data flows undermine both innovation and inclusion [157-176][140-150][155-166]. He also referenced India’s long-standing digital-infrastructure heritage, citing the 1858 submarine cable and the 1854 Telegraph Act [140-150], and mentioned the World Bank’s “World Standard Development Report on Standards” as a guiding document [318-327]. Deepak introduced the “EOSS” (Equity, Ethics, Ecology) framework, stressing minimal material, energy and water footprints for sustainable DPI [330-336].


Mansi Kedia (World Bank) reinforced the need for global standards and flexible blueprints, arguing that open, interoperable frameworks are essential to avoid fragmented parallel DPI layers and to capture efficiency, trust and innovation benefits [219-232][204-218]. She also noted ongoing collaboration with the Bank for International Settlements on a “Finternet” – a unified financial-infrastructure layer [353-360].


All participants agreed that open APIs, harmonised standards and collaborative blueprints constitute the three pillars needed to prevent duplication of effort and to scale trusted digital services. Julian highlighted that fragmentation, whether technical, regulatory or geopolitical, slows progress, while Martin emphasized that open APIs such as GSMA’s OpenGateway enable operators to share contextual data without creating silos [23-25][112-126][219-232].


Regulatory challenges for AI-enabled networks were examined. Martin pointed out that AI-driven services raise questions about the applicability of digital-intermediary law and that data-privacy regulations now require purpose-bound data collection [280-286]. Both speakers agreed that dynamic regulatory frameworks, co-created with regulators, are needed to address explainability, accountability and the evolving definition of digital intermediaries [279-294][285-289][294-301].


In response to these issues, several concrete actions were proposed. Debashish noted that many OpenGateway APIs have already been certified by GSMA and are being rolled out with operators [127-130], and he called for the development of referenceable AI-telecom playbooks to guide explainability and accountability [272-277]. Martin suggested that industry-wide standards or playbooks be co-created with regulators to cover AI-driven fraud-scam protection [279-294]. Deepak urged India to increase its participation in multistakeholder standard-setting bodies (GSMA, ITU, ISO) to shape global AI standards while preserving strategic autonomy [317-322]. Mansi recommended that the World Bank continue to disseminate DPI blueprints and facilitate South-South knowledge exchange, especially in mobile-data-driven services [353-360]. The audience’s “data-embassy” concept, a wearable ring for local storage of KYC and medical data, was noted as a potential research avenue for future secure personal-data storage solutions [371-374][375-381].


Debashish also recalled historic IRCTC data-collection practices that pre-date many modern DPI initiatives [371-374]. The session concluded with a reaffirmation of consensus: telecom networks are now intelligent, AI-enabled platforms that underpin DPI; digital sovereignty must encompass control over infrastructure, standards and AI models; and open, interoperable standards together with flexible blueprints are essential to avoid fragmented DPI layers and to foster trust, efficiency and innovation [28-31][23-25][312-314][364-368]. While the panel emphasized different aspects of sovereignty and the balance between prescriptive standards and adaptable blueprints, they agreed that coordinated multistakeholder action-through open APIs, sovereign-cloud designs and collaborative standard-setting-will be pivotal in scaling trusted digital services both within India and across the Global South [23-25][312-314][364-368].


Session transcript: Complete transcript of the session
Debashish Chakraborty

convergence of AI, telecom, and data sovereignty, all woven around the digital public infrastructure. I’m Debashish. I represent GSMA. I’ll request Julian Gorman, head of APAC GSMA, to give his keynote address, and then we will start with the panel discussion. Julian.

Julian Gorman

Good morning, everyone. Warm welcome, distinguished guests, colleagues and partners and speakers who have joined us today. It’s a great honour to actually open this session for GSMA. GSMA, for those who don’t know, is the global organisation uniting the mobile economy, that means mobile operators and the ecosystem, to unlock the power of connectivity so industry and society thrive. And this session really goes to the core of that around intelligent networks, intelligent telecom networks for digital public infrastructure, a topic that sits right at the intersection of where the telecom industry is heading and where national digital public infrastructure is heading, and that’s where this is being built, of course. India is really at a pivotal point in its digital journey and a key player in this space.

They’ve been on the digital public infrastructure journey for a lot longer than the rest of us, and over the last decade we’ve really seen the rise of digital public infrastructure recognised, from identity and payments to digital commerce and data empowerment. It has shown the world what is possible when scale, innovation and public purpose come together, delivering inclusion, trust and economic impact at a level few countries have achieved. But as we enter this next phase, which is shaped by AI, real-time data and increasingly autonomous systems, we need to ask a fundamental question: what role do the telecom networks play in this new digital infrastructure? For years, networks were viewed simply as connectivity providers, and that view is changing.

Today’s mobile networks are becoming intelligent, programmable and trusted layers of the national infrastructure, and they’re shaping how AI models perform and will perform: how services are optimised at the edge, how fraud is stopped before it happens and how digital identity remains secure in a world of growing complexity. In India, networks already support core DPI functions: identity verification, payments, emergency response and major public service platforms. As AI becomes embedded in these systems, the network no longer sits back in the background; it becomes part of the decision-making fabric, providing context and priority for tokens and the critical elements of data that digital public infrastructure carries. Through this, the network becomes a contributor to governance, resilience and trust.

And that brings us to the second major theme of the day, digital sovereignty. In an AI-driven world, sovereignty is no longer just about where the data is stored; it’s about having strategic control over the infrastructure. The key to this is the ability to manage the infrastructure, the standards, and increasingly the intelligence that underpins the national digital system. Countries want to know: how do we build AI-enabled public infrastructure that is safe, interoperable, and aligned with national priorities, while still remaining connected and interoperable with global markets and innovation? This is exactly where global standards matter. Fragmentation, whether technical, regulatory, or geopolitical, slows progress. Interoperability, open APIs and harmonized frameworks help countries scale confidently while staying part of the global digital economy.

India is uniquely positioned to show how this balance can be achieved. Open, yet sovereign. Scalable, yet secure. National in ambition, but global in design. And our goal today is not just to talk about these themes, it is to translate them into direction. To identify practical next steps. To create space for collaboration. and to learn from India’s experience in ways that matter for economies that are at every stage of digital development. So I’m looking forward to the discussion and to the concrete actions we can shape together and I look forward to very big contributions from the panel today and also to hear more from the audience later. So thank you. Debashish, I hand over to you.

Debashish Chakraborty

Thank you, Julian. Thanks for the opening remarks. Am I audible? Looks like yes. So let’s begin. We have a fantastic panel of experts here. So let’s start with this discussion. What we have seen over the past few decades is that telecom networks have evolved. They have evolved a lot, from just enabling voice to powering mobile broadband to becoming the trusted digital infrastructure that we use today, underpinning modern economies, right? So today’s networks are no longer passive carriers of data. They are becoming intelligent platforms where AI is deployed either as an add-on or embedded already into the network, where digital identity is authenticated, where fraud is mitigated, where sovereignty over data and decision-making is increasingly exercised.

As India advances in digital public infrastructure and its AI ambitions, the key is how we ensure these systems remain trusted, interoperable, and globally compatible while avoiding fragmentation and duplication. And that is the conversation which we aim to explore today. Let me start with Rahul, who is the Chief Regulatory Officer for Airtel. Rahul, we often talk about digital public infrastructure as applications and platforms, but at the foundation sits the network which you drive. So from Airtel’s perspective, what makes the telecom networks uniquely positioned in the digital world as India’s trusted infrastructure layer, beyond just connectivity?

Rahul Vatts

Thank you, Debashish and GSMA for this particular session. It’s a session of particular interest to me as a user in the digital ecosystem and of course to the entire digital fraternity, because if there’s one thing which India is doing great, it’s really the digital public infrastructure, to the extent that President Macron yesterday actually mentioned it. It’s the biggest export which India has made across the globe. So let’s talk about what’s really happening today. If you look at the data of January alone, India transacted 28 lakh crore rupees through its UPI infrastructure, spread across a billion people. And all this is happening on what? On the foundation layer, which is the connectivity layer.

And so for us at Airtel, this is not just a plumbing job; it’s the very heart of the foundation we are laying for trust. How are people transacting this much money? Because they trust the ecosystem through which they really want to do this. And beneath this layer is the connectivity which has powered the country. Look at the numbers of connectivity in a country like ours: we have got more than a million BTSs powering the entire country, we have got more than 500 lakh kilometers of fiber running in various shapes and forms across the country, and we have got, as an industry, more than a thousand edge and large hyperscale data centers. Now, can you imagine, each mobile switching center carries a load of at least 30 to 50 million people, sometimes even larger. So this is the scale at which the infrastructure is becoming the layer on which we are operating. What is all this enabling? Let’s look at that.

What it is enabling is that for every transaction you do, there is an OTP or SMS which comes out, right? So this OTP and this SMS is what? It’s a layer of trust: people trust the message which they get on their system. Let’s look at the Aadhaar-enabled payment system. More than 500 million rupees done on that alone. And how is that enabled? Through connectivity which is happening in less than 2 milliseconds. So this again is an example of that same ecosystem. Let’s go further. What’s really happening and how are we doing? I don’t know how many of you actually visited the Airtel stall. We have got solutions where banks can use the telco indicators to make a smart choice about giving you loans, right?

We rank a person’s history as low risk or high risk, which enables the bank to take a smart decision in a matter of milliseconds. Remember, in India it’s not the large loans that matter; a lot of the loans happening in the ecosystem are small, and loans of 2 lakh rupees or below make up a large share of what happens. There is a financial risk fraud indicator which the department has created; banks can dip into that risk indicator and also get a score out of it, to say, okay, what is it that we are really trying to get out of this. All this is what the layer is. Let’s look at what we as telcos are doing. We as telcos are giving you trust that the call you are receiving is spam-free or not, right? We have launched at least three products over the last one year. We first launched our solution which warned you about suspected spam. Then we went ahead and started blocking fraudulent links, on the basis of the large database we created with global players like Google, OpenPhish and Mavenir. At the third stage, around two weeks back, we launched a very powerful product. One of the reasons spam works is urgency: I’m calling you, please share your OTP urgently, right? And to remove that, we have now created a friction. If you are on a call and you receive an OTP, you get a flash message saying: please be careful, you are on a call and receiving an OTP; this may be spam. So it creates a friction for those 30 seconds, to ask: do you really want to do this or not? All this is reinforcement of the trust we want to create in the ecosystem. Let me go a little larger. We are operating in large countries across the globe, and one of the things we have been doing wonderfully well in Africa is to take the digital public infrastructure blueprints from India to Africa. So it’s all about identity, it’s about payments, it’s about how they are able to transact. And we have got a solution called DPI in a Box, which we are in conversation with a lot of African leaders about, to be able to transplant the India stack onto the African ecosystem. And how do we do that? We give a bundle of hardware and software, we give a very air-gapped cloud to do that, and we create the entire ecosystem for them so that they are able to implement a digital public infrastructure stack in their countries.

So really, Debashish, it’s about the trust which we try to create with the infrastructure layer, and how we get smart and make people’s and customers’ lives easier is what we are doing.

Debashish Chakraborty

Thanks, Rahul. Those were very key messages you gave about how the network is being used for citizen-centric services, and that’s how the network has evolved over the last few years. Coming to you, Martin. Martin represents Vodafone Idea. You heard Rahul speaking about how the network is being used for various citizen-centric services, for fraud mitigation, for taking care of spam. A lot is being done by the mobile network operators, right? But my question here is that there is also a growing discussion globally today about avoiding parallel digital infrastructures. India is building new DPI trust layers for authentication and fraud prevention. How do we ensure that these efforts complement and do not duplicate operator-led capabilities like the Open Gateway APIs that GSMA has?

Speaker 1

So, in fact, I was part of one of the entities which set up and contributes to the largest DPI infrastructure today. I was earlier associated with the NPCI and then moved to telco, where I have been for the past five years now. On the overall DPI infra, I would want to answer this by bringing in four key words that I want to associate myself with here. One, context and enrichment. And the second thing I wanted to touch upon is serviceability and purpose. When the entire DPI infrastructure evolved for the country, it evolved with two core purposes to be addressed, right? We wanted to take the entire digital infrastructure to reach the last-mile citizen.

We also had the objective of financial inclusion to be driven by the country. So the DPI framework was created to meet these two core objectives. The role of a TSP in this, by and large, was to ensure that the goals of Digital India and financial inclusion ended up reaching the masses. That’s the role that TSPs played. And with every net new tech evolution that has happened, various things come in. Fraud evolved: because banking came to the doorstep, fraud also started happening at the doorstep. You don’t have to go and loot a bank today; you can loot thousands of individuals in the easiest manner, and fraud evolved that way. So in each of these contexts, while we realize the Digital India vision and financial inclusion for the country as a whole, the DPI networks played a role and TSPs played a role to ensure that these realizations come in handy.

Now, Rahul briefly touched upon a few of them. We are limited TSPs in the country, three or four of us comprehensively, who work in conjunction. Amongst the three of us, we end up working together. I still remember those days of going from my previous entity to TRAI, asking them to sit with us on how to find out fraudulent mobile numbers early. Today, we look at it as the FRI, which is exposed by the DoT itself to multiple other financial institutes, which can look it up and then take a decision. There is something called the digital intelligence platform, which again converges collaborated TSP data from all three of us and is provided by the DoT to the rest of the financial institutions to look into.

Now, with all of these, I will bring back my word around context, right? This is information that multiple of us as TSPs are able to provide, collate and make available. Who can consume it? Any of these providers, because fraud is not happening to me as a TSP. For me, if there is a call connected between person A and person B, it’s revenue to me. But for a bank, while something else is going on during that call, that is context. And this context is something you can provide back to enrich the data, and with this enriched data they can make decisions for whatever they want. Say I see an Aadhaar verification happening live from a location called A,

while at the same time there is a call happening showing that the presence of the person is in B, it does not matter to a telco because for me both are actually revenue. But for an authentication entity versus an entity which is approving a financial transaction, they may consider them as a fraud. So the context and enrichment of the context associated with the data, TSP today has the ability to provide a large amount of context -driven information to these individual players whereby they can consume them for their own utilization and make active decisions. So that’s the way that I would want to try and comment. One good part is at least all three of us, four of us are operated in converged platform.

We have the experience with DLT that we set up during the earlier days of spam. Spam in those days was only the unwanted telemarketing messages that were coming; it has evolved. Spam has become scam. So now we are working towards how we overcome scam, and beyond scam, whatever comes. Now there are digital arrests, with humongous money being lost. So as TSPs, we work in conjunction, put things in order, collaborate with the likes of COI and the DoT to set up infrastructure as open APIs, and then make these APIs interfaceable for institutions who want to take decisions appropriately. Rahul touched upon digital lending, right? The country is serviced today by more than 1,100 member banks.

Sitting in metros, we might remember only a few banks, but to service such a large nation, we have 1,100 member banks. Imagine: these banks don’t have to always go back to CIBIL alone to provide a lending decision. You may relate it back to postpaid consumers and the quantum of money they pay regularly, etc. It’s an inclusive decision. Those are open APIs we have been able to set up. And India has been at the forefront of setting them up, and we have operated them very well already, is what I would want to say.

Debashish Chakraborty

By the way, your team is also working extensively with the GSMA team on the GSMA Open Gateway APIs. Many of them have even been certified now, I can tell you that. Thanks for that context in which you are talking about contextualization of data. That’s again a unique perspective that you’re talking about. Moving on to Deepak Maheshwari. Deepak represents CSEP, the Center for Social and Economic Progress. Deepak, you have been attending and speaking in this conference for the last couple of days. Data sovereignty, I’m sure, is a term which you would have encountered several times. I want to ask you this, Deepak. How should India define data sovereignty in an AI-driven DPI era, beyond just data localization and control?

In other words, how should India define data sovereignty without control over standards, decision-making systems, and long-term strategic autonomy?

Deepak Maheshwari

Thank you, GSMA, for having us here. When we are looking at this whole issue of digital sovereignty and data localization, we could look at data localization itself in different ways. For example, it could be about just the physical location of the data. That’s one, and a pretty obvious one. The second is about data context, as Martin was just mentioning, in terms of what the local context is. A lot of people think about localization only in terms of local languages. But suppose you are checking the weather, and it shows you the weather in Hindi here in Delhi, but the weather of New York: it might not be that useful. So you also need local context.

And beyond all these things, what is actually happening is, and this is not such a new concept, people have been talking about sovereignty for a fairly long time. Of course, the lexicon has evolved; it’s not just about the data, but this whole notion has actually become much more important. For example, even in India, if you look at the previous versions of the data protection law, and at the previous reports which never became policy, such as the non-personal data framework, in all of those we had this notion that India’s data should remain in India.

Another thing: in February 2019, seven years back, we had something called the draft e-commerce policy. The tagline of that, however, was India’s data for India’s development. It was not about commerce; it was more about data. From that perspective, and even when I was a member of the MeitY committee in 2018, when the government first set up a committee on AI, this whole question came up: what about data? Now, this is something we need to look at in three different ways. One: yes, there is some sort of data which India should have within its own physical as well as administrative control. Obviously, things related to defence, national finances, etc., you would like to keep that way.

Second, as far as citizens’ data is concerned, some of that data, yes: UIDAI, the voter database, etc., obviously that type of thing. But there is other data for which citizens themselves may like to exercise their choice and their own agency, in terms of using that data not only in India but also outside India. For example, if I apply for a visa to another country, I will have to provide my data to that country; there is no way it can happen without that. And the third thing is the business aspect. On one hand we are seeing in India, and we are very proud of it, that for the past three decades we have emerged as a global outsourcing hub.

We are the global hub for data coming from all over the world, which is being processed here. But at the same time, if we try to create walls around us, saying that India’s data cannot go outside while we expect that outside data should continue to come in, there’s a challenge in that. There’s a dilemma, a dichotomy. Because these are walls, not valves: going back to our school physics of fluid dynamics, valves are something that allow one-way traffic, not two-way traffic, but walls are two-way isolations. So that’s another thing we should keep in mind.

So when we’re talking about digital sovereignty within the context of AI, yes, obviously there are things that we do want to have here, and we should continue to do that. But there are also things where we need more collaboration. For example, one of the terms used was control. I would not so much like to control the standards as contribute to those standards. Whether it is GSMA or 3GPP, ISO, ITU, IEEE, et cetera, so many other standards organizations, whether plurilateral or multistakeholder in whichever form, they all have mechanisms for people and countries to participate in that decision making.

So rather than controlling the standard, the endeavor should be to contribute to the standard making as a participant, as a contributor, and then evolve it. Obviously, when you are contributing and collaborating, you won’t have everything your own way. There will inevitably be some give and take, because sovereignty by itself in a globalized world has a challenge: the moment we talk about any international organization, whether it is the UN, the WTO, the ITU, or an organization like GSMA, if we want to work there, we’ll have to give up something to get something. The important thing is how we create an institutional mechanism such that, whatever we are giving, we believe we are getting more than that.

So there should be some incentives around that. And the last thing I want to mention: yes, we have often been saying that India’s digital public infrastructure is a massive digitalization that is happening, but actually it is not so new. It’s more than one and a half centuries old. The original telecom network came in the telegraph era, and that was also in dots and dashes, so it was a binary world even at that time. And people may or may not believe it, but India got its first submarine cable in the same year that the US got one, in 1858, just four years after the first submarine cable came up between the UK and France.

India got its first telegraph law in 1854; the first Indian Telegraph Act came in 1854. I have written a lot about this in a report, available online on the CSEP website if people are interested, using a 3C framework: carriage, content and conduct. What is more important in this world of AI is not just the carriage, which is of course fundamental, because without it you just won’t be able to do anything, or the content, what’s going through it. But more importantly, in terms of

Debashish Chakraborty

Beautiful insights. Thanks for taking us back to school physics and the concept of walls. I’d like to come to Mansi now. Mansi, sitting here, is representing the World Bank. Mansi, from the World Bank’s experience, we are talking about standards and the DPI era. What are the risks you see when public digital infrastructure and private digital capabilities, Martin spoke about it briefly, are built in silos, and why are global standards essential in accelerating inclusive digital outcomes?

Mansi Kedia

Thank you. Rahul spoke about a lot of this. Systems coming together help build trust; having independent systems means there are more points of vulnerability. So systems come together to build trust, and systems have to come together for efficiency. I think that’s the biggest economic argument behind a lot of the things you were saying about why banks are coming together, why data is coming together. Efficiency is the second thing. And the third, which was mentioned but not articulated, is innovation: mobile data is now becoming a source of data for lending. Why are we using that for understanding credit risk and fraud risk, and not something else?

So there’s innovation happening on something that was never understood to be for that purpose. Sorry, my microphone was off; I have a loud voice, so I hope everyone was able to hear me. So I think the risk of building systems in silos, whether it is the public sector or the private sector, is essentially missing out on efficiency capabilities, innovation capabilities, and building trusted ecosystems, which are actually nothing but the foundations of digital public infrastructure. You used the term standards; I think the World Bank works more with the idea of blueprints.

We have been doing a lot of work trying to develop blueprints, which are slightly more flexible and adaptable, but bring together best practices from different countries and see how they can be adapted to different contexts, something Deepak was saying in his initial remarks: you want systems that bring you the operational ideas and principles but don’t necessarily prescribe how things must be done. A standard, by contrast, is prescriptive, and that’s how the networks are running; for that, you need a standard. But when you’re building systems, I think the World Bank is approaching it more from a blueprint point of view.

So last year, the Bank came out with a digital public infrastructure and development report where it articulated what it meant by digital public infrastructure: what are its principles, what are the objectives, what is DPI and what is not. And I think that’s the way we are going to go ahead, even with AI: AI commons, building common infrastructure, to determine pathways for the future which countries can adapt in their own ways. I’m just trying to distinguish between standards and blueprints here, because standards then get into ideas of commercialization; there has to be a process around them, and there’s a whole private-sector play.

Here there’s a private sector play and a public sector play, but the idea is to work more on the approach than on a particular way of running something.

Debashish Chakraborty

Rahul, bringing the perspective back to data sovereignty: as AI moves deeper into network operations, not just at the surface level, what does data sovereignty practically mean for an operator, in terms of data storage and control, edge processing, cloud reliance, and control of the AI models?

Rahul Vatts

Yeah, thank you. I think one of the biggest misconceptions we all have today is about what exactly sovereignty is. A lot of people assume that if a hyperscaler cloud is housed in India, for example, it becomes sovereign infrastructure for that country. Nothing could be further from the truth. Why do I say that? If I have to define what is really sovereign, I would take at least three or four slices into it. The first slice for me is: is the data residing in the country or not? The answer may be yes; it’s not a big deal, hyperscaler clouds do reside in the country. The second indicator is digital sovereignty: is the control plane of that cloud within India or not? How are you really controlling that data and that cloud? And the answer is that not a single hyperscaler has its control plane in this country. That’s the fact. The third slice for me is operational sovereignty. Say you want to upgrade the network, put a patch on the network, put software into the network: where are you doing it from? The fact is you are not doing it locally; most likely you are doing it from outside. The fourth indicator, and a very important one, is jurisdictional sovereignty. Today, under the US CLOUD Act, for example, is it not true that if the US government so wants, it can demand data? Why should any other territorial power have control over my data? So while the answer on data sovereignty may be that the data is residing locally, the fact is that the control plane is not in this country, that even the patches are not coming from within this country, and that we are subject to jurisdictional controls.

So how are telcos becoming aware of this? Only last week I read about DT, Deutsche Telekom, launching a sovereign cloud offering in Europe. Why did they launch it? And by the way, six months ago Airtel launched its own sovereign cloud offering. The answer for us was very simple: we were already managing the data of nearly 500 million people within our network, and we asked ourselves, where is this data housed?

We said: within our own networks. So we really have the capabilities to manage that complex data set. Then why can’t I offer the same thing to my customers? That’s why telcos are having a renewed interest in the sovereign cloud space. Why is it important? And let me be very selective about this. Do we need hyperscaler clouds in the country? I’m saying yes, we do, because if there are efficiencies of scale and better products to be used, why not? But tell me, why should the KYC data of my customer be sitting outside with somebody? Why should the health records of citizens of this country be sitting outside this country?

Why should any critical data set relating to defence or security agencies be sitting outside this country? I think we have to get selective. We use the efficiencies of scale of whoever is best placed to give that solution, but we get selective on what data should reside, and remain in control, within this jurisdiction. That is an important part, and that, I think, is a discussion we need to have. If I go to the market today, there are a lot of players selling sovereign cloud, but really, there is no sovereignty involved. AI rests on data, right? And we cannot take the right decisions on data if we cannot really control it in the proper sense.

Hence, we require dynamism in our regulations and policies, but we also require sovereignty to be practised in the real sense for us to be able to do that. On the Airtel Cloud, which we built, we do around 140 crore transactions per second; that’s the bandwidth we have built. It was very interesting the other day when the Prime Minister came to the Airtel stall and asked, Rahul, what is the capacity of the thing you have created? And I told him, you tell me, sir, what is the capacity you want us to create? It’s really up to you. You have to guide us and say, we want these multiple use cases lined up for the country, and we are most happy to do that.

So I think we are in a very good place. We have got very robust infrastructure. And how do we now navigate this world of AI and provide a real opportunity and sense to our players within the ecosystem is what we are really looking forward to.

Debashish Chakraborty

You reminded me of a conversation we were having just a couple of days ago, when someone talking about data sovereignty said it’s utopian to talk about data sovereignty: once you slice and dice, you realize where the sovereignty actually lies. And you touched on that. Thanks for that point. Martin, I’ll come back to you. This was actually meant for Ambika, but you will have to deal with it. From Vodafone Idea’s regulatory lens, what are the biggest policy frictions emerging as networks become AI-driven platforms? If you see any regulatory challenges, how can these be met without slowing innovation?

Speaker 1

So I’ll try and answer from two perspectives. We heard our Honourable PM mention, at multiple points, AI being responsible and reasonable; the word he used was reasonable. And there are multiple other notions that come with reasonability: one being explainability, another being accountability, and so on. Today, we as TSPs are governed under the ambit of the Unified License, which is administered by DoT. In some of the examples that Rahul touched upon, that I touched upon, and that the World Bank team related back to, we can see that our portfolio has expanded beyond the conventional TSP governed under the UL

license. Looking at the expanded offerings we bring to market, monetization or not, thank God the data privacy law is at least enacted now. As it happens, I am also the DPO for the firm, by virtue of which, when we touch upon this area called data localization, or data sovereignty, my personal view is that it is largely misinterpreted. The DPDP at least clarifies that data collected has to be defined with a purpose. Now, although our base is that of a TSP, we fall under the ambit of a significant data fiduciary, so most likely we will also be governed by the data privacy laws of the country. So there are regulations governing us reasonably well.

So if I narrate this from the broader perspective of accountability and explainability: when we leverage AI, we would want the AI to explain itself. Now, is that covered under the ambit of the UL or the data privacy law? Maybe not at all, right? So we would want, and Mansi actually narrated it very well, a referenceable standard coming our way, which all of us can relate back to easily and apply. It could be a blueprint, it could be playbooks. Does such a framework exist in an easily adaptable manner? The larger entities like us will possibly be the first to invent the way through and make it a playbook.

That can then be related back to somebody who can turn it into a blueprint, make it a standard, and apply it to the rest of the industry as a whole. So that’s the first and foremost. The role of a TSP is also changing today: from a conventional telecom provider, we are now, as in the earlier example I highlighted, an intermediary providing additional data insights. Now, there is a law for digital intermediaries. The purpose for which a citizen has shared data with me is one thing; but if I put it to use beyond that purpose, from a monetization standpoint, does the ambit of the digital intermediary law also apply to me?

I wouldn’t want to comment on whether my regulator should look at that and make it applicable to me as well; those are evolving spaces we are looking at. And the last, very famous topic floating around among telcos is spam and scam protection. Here, again, look at it from the Honourable PM’s perspective of reasonable AI. Most of us associate reasonable AI with explainability. Now, imagine we have deployed a scam solution which auto-blocks things, and we want that AI to explain: why did I block you? If it explains what it blocked and what it was looking at, I am actually advancing the scamster’s ability to know why I am blocking him, so that he refines himself to not get blocked.

So that comes in the context of security. Do I make a framework, a guideline, saying that here I would not want explainability, because security becomes a far more important element? Frameworks have to evolve. We need standards, but standards are not universally applicable in all possible ways; they are taken and applied by individual enterprises, in the context in which we have to put them to use, and then made to work. So I look forward to regulators being innovative in allowing us to make choices appropriately, while regulations continue to evolve appropriately.

Debashish Chakraborty

Thanks, Martin. I’ll attempt to take this conversation slightly global, Deepak. How do you think India can leverage its DPI and telecom-led digital architecture to provide a credible, scalable model for the global south, particularly for countries seeking digital sovereignty without technological isolation?

Deepak Maheshwari

Okay. When we are looking at somebody offering a technical solution to someone else, it typically comes with certain intellectual property rights (IPRs): if somebody is using a particular technology, there could be patents, there could be copyright, et cetera. Now, when India is offering its DPI-led model, nothing of that sort is attached. Countries are able to adopt it: it’s a framework, it’s a philosophy, and there’s an open protocol. They can adopt it and change it the way they wish, so it is really open in that sense. That’s one very important difference compared to, say, another country or company offering a particular technology that also involves a certain type of monetization: this is what you continue to pay us if you are scaling to 1 million population, this is what you pay for 10 million or 100 million, and so on.

India doesn’t ask for that type of thing, so that’s one very strong distinction. The second thing is enablement. The enablement is happening not just as technical assistance; it is also happening through multiple other organizations. For example, we have the Research and Information System think tank under the Ministry of External Affairs, and another is the Indian Council of World Affairs. They are doing a lot of work in developing intellectual frameworks and capacity, as a matter of diplomacy itself. That’s another dimension which is not often seen, but it’s again a matter of soft diplomacy. For example, three years back, in 2023, at ICWA, I had proposed a framework called EOSS, basically about taking India’s DPI global (you can of course create a different acronym), with the focus more around interoperability, security, et cetera. The other aspect is standards. Mansi distinguished between standards and blueprints, but one very important document I would refer to is again from the World Bank: beyond the DPI report she mentioned, an even more recent document, which came out just a couple of months back, is a World Bank development report on standards. Take traffic lights: the three colours, red, amber, green.

It’s not very old, okay? But it did happen. And this has become globally acceptable. But the way the design is, yes, you can put it vertical, you can put it horizontal, and there are other variations. So this is what it is doing there. So I think the way India is doing this is something that we are doing a lot of enablement across the global south. In fact, I just published a policy brief called Global South’s AI Pivot by CG of Canada just last Friday. Again, it talks about three things, equity, ethics, and ecology. So India is not only talking about things like, okay, it should be reasonable, it should be responsible, it should be accessible, it should be inclusive, accessible, all of that.

It is also looking at things from an efficiency perspective, and efficiency is not just financial efficiency; here we are talking about resource efficiency. How do we manage these things with a minimum footprint of material, of energy, of water, and so on? And this again goes back to something the Prime Minister keeps talking about, LiFE, which is Lifestyle for Environment. Now this whole philosophy of

Debashish Chakraborty

Thanks, Deepak. I’m conscious of time. Mansi, last one to you. India’s approach to DPI, built on open, interoperable and scalable digital rails, is increasingly influencing global conversations. How do you see India’s DPI model shaping digital development strategies across emerging economies?

Mansi Kedia

Thank you. I’ll keep it really short. At the Bank we started working on ID for development, G2P and fast payments even before this whole big DPI push happened in India, and particularly before it became more socialized through the G20 process. Many other actors then came in, across foundations, think tanks and technology companies, and started to socialize the idea of DPI and the DPI approach to digital transformation. India, surely for the vast amount of experience, scale and heterogeneity that it has, offers excellent evidence on what works and what doesn’t. And it’s really great that a lot of the people who were part of the foundation and building of the DPI have now gone on to take this to other countries in ways that are adaptable to them.

And there are so many organizations, without taking names lest I miss out on important ones, who are doing a fabulous job of that. And the government itself, whether actively or indirectly, is also trying to talk to the world about how the DPI approach works. More actively, with UPI and NPCI, as Martin was mentioning, there’s active collaboration on making these fast-payment systems work together with the BIS, to see whether we can actually realize the Finternet, the idea that came up with the BIS. So I don’t see this dying down. Like I said, we have a lot of evidence on the foundations as well as, now, on sectoral applications.

Particularly because this is a GSMA session on mobile, I don’t want to forget this really important part about how the Department of Telecom has begun to think about utilizing mobile data. While the telcos are thinking of it from a credit and fraud-management perspective, the Department is also thinking of it very actively for planning and mobility, which I think is really fabulous. It’s not as if other countries haven’t done it, but the DPI approach being taken towards it, to scale access to data, make models available, provide compute, and build that whole stack, is not something that has happened elsewhere. And obviously it’s going to evolve; I don’t think it’s perfect.

We shouldn’t feel the pressure of making it perfect in one go, but these learning experiences will surely inform how other countries can do it. Some of these things we are trying to do at population scale. Yes, exactly.

Debashish Chakraborty

So I think, if I can just take one question from the audience. I can see three hands already. How much time do we have? Do we have time for one question? Gentleman, please state your name and tell us to whom you want to address this question.

Audience

I am Vijay Agarwal. I am interested in AI; by profession I am a manufacturer of jewellery. What I wanted to propose is: why don’t we have a ring-like product where the privacy data, the KYC data, resides physically only on that item, which is worn on the body? If the data leaves the body, it leaves only in encrypted form, and it can only be collated with another key, for the purpose for which consent has been given, with a blockchain record of it.

Debashish Chakraborty

You mean in the form of a jewelry?

Audience

Yeah, so we could have an Aadhaar ring for every Indian, and it would store the KYC record and the medical record, which could be accessed in case of emergency; all these control layers you are talking about could be in the form of cryptography. Separately, on the concept of data embassies as part of the discussion on data sovereignty: is there a good case for India to offer data embassies? Obviously it would be on a multilateral basis, but any thoughts on that?

Deepak Maheshwari

I would say yes, if it is on a reciprocal basis.

Rahul Vatts

Let me try and address the first part of what you were saying. I think today it’s not a problem of your data being insecure with Aadhaar; I think it’s very secure. There are a lot of things which Aadhaar does; there is also the masking which they have started. So the leakage of data, or private data, is really not the issue here. Data going out has various other forms and factors, particularly in the way the government takes data from users; it is the government which has to really start looking at this. For example, telcos are required to share subscriber data every month in physical copies. Why would you do that? So it is not really the digital aspect which is the problem; it is really how you are managing the data that is the problem. And I think quantum work has already started, sir; I think Aadhaar itself is working on that. On data embassies, Vikram, I completely endorse, you know, Deepak: it cannot be just me. Look around; it’s a two-way street, right? But you cannot expect the world’s largest data creator and consumer to be the one to start offering this first. For too long, I think, as a country we have been in a sphere where we are supposed to give and not supposed to take anything. That has to change.

Debashish Chakraborty

The organizer is already standing on my head, but I just wanted to say one thing on the point about the government taking data. Of course IRCTC doesn’t do it now, but till about 15 years back, if you were creating an IRCTC ID for the first time, it used to ask even your marital status; there were apparently no benefits or disadvantages attached, and it was a compulsory field, by the way. I would like to thank each of the speakers here for making it a very engaging conversation. Thank you Mansi, Rahul, Deepak, Martin for your time and for this session. Thank you very much, audience. Thank you.

Related Resources: Knowledge base sources related to the discussion topics (27)
Factual Notes: Claims verified against the Diplo knowledge base (4)
Confirmed (high)

“GSMA is the global organisation uniting the mobile economy.”

The knowledge base states that GSMA is “the global organisation uniting the mobile economy” [S2].

Confirmed (high)

“India’s early identity (Aadhaar) and payment (UPI) systems provide a strong foundation for AI development and digital commerce.”

Jeetu Patel notes that Aadhaar and UPI form a strong foundation for AI development in India [S80].

Additional Context (medium)

“Digital Public Infrastructure (DPI) requires a governance framework that balances efficiency, equity, openness, security, and innovation.”

The knowledge base describes DPI as a driver of digital transformation that needs a governance framework balancing those very dimensions [S75].

Additional Context (medium)

“Modern mobile networks are evolving into intelligent, programmable, trusted layers that affect AI model performance, edge optimisation, fraud prevention and digital‑identity security.”

The “Trusted Connections_ Ethical AI in Telecom & 6G Networks” source discusses how telecom networks are becoming platforms for ethical AI and security, providing context for this claim [S81].

External Sources (82)
S1
AI Automation in Telecom_ Ensuring Accountability and Public Trust India AI Impact Summit 2026 — -Mr. Julian Gorman: Representative from GSMA, expert in telecom industry collaboration and anti-scam initiatives across …
S2
Building Indias Digital and Industrial Future with AI — Julian Gorman, Head of APAC, GSMA
S3
Building Indias Digital and Industrial Future with AI — Deepak Maheshwari, Rahul Vatts
S4
Building Indias Digital and Industrial Future with AI — Agreed with:Debashish Chakraborty, Rahul Vatts — Telecom networks have evolved from passive connectivity providers to in…
S5
Building Indias Digital and Industrial Future with AI — By the way, your team is also working extensively with the GSM team on the GSM OpenGate APIs. Many of them have been eve…
S6
Building Indias Digital and Industrial Future with AI — By the way, your team is also working extensively with the GSM team on the GSM OpenGate APIs. Many of them have been eve…
S7
Keynote-Martin Schroeter — -Speaker 1: Role/Title: Not specified, Area of expertise: Not specified (appears to be an event moderator or host introd…
S8
Responsible AI for Children Safe Playful and Empowering Learning — -Speaker 1: Role/title not specified – appears to be a student or child participant in educational videos/demonstrations…
S9
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Vijay Shekar Sharma Paytm — -Speaker 1: Role/Title: Not mentioned, Area of expertise: Not mentioned (appears to be an event host or moderator introd…
S10
Building Indias Digital and Industrial Future with AI — -Debashish Chakraborty- Moderator, represents GSMA
S11
Building Indias Digital and Industrial Future with AI — Agreed with:Debashish Chakraborty, Mansi Kedia — Integration and collaboration are essential to avoid duplication and ma…
S12
WS #280 the DNS Trust Horizon Safeguarding Digital Identity — Audience: Individual from Senegal named Yuv (role/title not specified)
S13
Building the Workforce_ AI for Viksit Bharat 2047 — -Audience- Role/Title: Professor Charu from Indian Institute of Public Administration (one identified audience member), …
S14
Nri Collaborative Session Navigating Global Cyber Threats Via Local Practices — Audience: Dr. Nazar (specific role/title not clearly mentioned)
S15
Building Indias Digital and Industrial Future with AI — -Mansi Kedia- Representative from World Bank
S16
Building Indias Digital and Industrial Future with AI — This GSMA panel discussion focused on the convergence of AI, telecommunications, and data sovereignty within India’s dig…
S17
AI as critical infrastructure for continuity in public services — Pramod argues that true data sovereignty goes beyond simply storing data locally. It requires having control over jurisd…
S18
Cloud computing and data localisation: Lessons on jurisdiction — Complex cross-border concerns require international co-operation to avoid undermining the Internet’s universality. Despi…
S19
Empowering People with Digital Public Infrastructure — Hoda Al Khzaimi: Great question, I think, Brendan. When we talk about DPI, it’s the intersectionality between what’s h…
S20
WS #83 the Relevance of Dpgs for Advancing Regional DPI Approaches — Desire Kachenje: So, I think one of the key things that a lot of us are hearing, and what we’re seeing in the continent,…
S21
Creating digital public infrastructure that empowers people | IGF 2023 Open Forum #168 — Countries around the world have made investments into digital public infrastructure (DPI) that supports vital society-wi…
S22
WS #257 Emerging Norms for Digital Public Infrastructure — AUDIENCE: Thanks, Milton. I agree very much with Anirudh. I think digital infrastructure, my understanding, what I thi…
S23
Shaping AI’s Story Trust Responsibility & Real-World Outcomes — Evidence:He describes a hierarchy of decision-making with super advanced agents at the top and more fine-grained agents …
S24
The digital economy in the age of AI: Implications for developing countries (UNCTAD) — The accountability mechanisms, transparency, rule of law, and explainability are crucial
S25
Telecommunications infrastructure — Network operators increasingly rely on AI for a wide range of tasks, fromnetwork planning(e.g. using algorithms to ident…
S26
High-level AI Standards panel — Need for Enhanced Collaboration Among Standards Organizations The UK government advocates for an open, inclusive, multi…
S27
What is it about AI that we need to regulate? — The question of achieving interoperability of data systems and data governance arrangements across different stakeholder…
S28
The State of Digital Fragmentation (Digital Policy Alert) — The analysis also focuses on the fragmentation that occurs between those who can engage and participate in the digital e…
S29
How to make AI governance fit for purpose? — – Jennifer Bachus- Anne Bouverot- Shan Zhongde- Gabriela Ramos – Jennifer Bachus- Shan Zhongde International Cooperati…
S30
The Future of Public Safety AI-Powered Citizen-Centric Policing in India — This example demonstrates that scale is achievable when solutions genuinely address user needs. The ‘meeting halfway’ co…
S31
Harnessing Collective AI for India’s Social and Economic Development — Moderator: sci-fi movies that we grew up watching and what it primarily also reminds me of is in speci…
S32
AI for agriculture Scaling Intelegence for food and climate resiliance — It is being designed as a replicable public infrastructure model for India and the entire global south. In partnership w…
S33
Building Indias Digital and Industrial Future with AI — Summary:Mansi advocates for flexible blueprints over prescriptive standards, while Speaker 1 emphasizes the need for sta…
S34
Building Indias Digital and Industrial Future with AI — India’s approach to digital public infrastructure has gained international recognition, with President Macron recently m…
S35
Creating digital public infrastructure that empowers people | IGF 2023 Open Forum #168 — Countries around the world have made investments into digital public infrastructure (DPI) that supports vital society-wi…
S36
20 Keywords for the Digital 2020s: A Digital Policy Prediction Dictionary — For instance, China has achieved an unusually high level of sovereignty with its legal requirement that Chinese citizens…
S37
Day 0 Event #257 Enhancing Data Governance in the Public Sector — Belli defines digital sovereignty as a nation’s ability to understand, develop, and regulate digital technologies to mai…
S38
Global Internet Governance Academic Network Annual Symposium | Part 3 | IGF 2023 Day 0 Event #112 — Adio Adet Dinika:All right. Wonderful. Thanks for that. So, quickly moving on to the Crimean postcolonial critique, basi…
S39
Trusted Connections_ Ethical AI in Telecom & 6G Networks — Evidence:In July 2023, TRI issued recommendations on leveraging artificial intelligence and big data in the telecommunic…
S40
AI-Driven Enforcement_ Better Governance through Effective Compliance & Services — After having generated this path, it also sends out a series of routine legal requests that we require for most investig…
S41
Shaping AI’s Story Trust Responsibility & Real-World Outcomes — Evidence:He describes a hierarchy of decision-making with super advanced agents at the top and more fine-grained agents …
S42
Data embassies: Protecting nations in the cloud — The cases of Estonia and Monaco have shown that keeping data localised within a single facility or a specific geographic…
S43
WS #180 Protecting Internet data flows in trade policy initiatives — Jennifer Brody: Sure, my pleasure. And thank you so much for having me here today. It’s a real honor and a pleasure. …
S44
Comprehensive Summary: UN CSTD Working Group on Data Governance Progress Discussion — Renata Avila highlighted the broader political context, noting that “all the uncertainty around the existing governance …
S45
AI as critical infrastructure for continuity in public services — The discussion revealed that data sovereignty encompasses more than simple data localization. As Pramod noted, true sove…
S46
Supply Chain Fortification: Safeguarding the Cyber Resilience of the Global Supply Chain — Moreover, it is suggested that tech companies should focus on building sovereign versions of their technology and offeri…
S47
Building Indias Digital and Industrial Future with AI — Thank you, Julian. Thanks for the opening remarks. Am I audible? Looks like yes. So let’s begin. We have a fantastic pan…
S48
Building Indias Digital and Industrial Future with AI — Good morning, everyone. Warm welcome, distinguished guests, colleagues and partners and speakers who have joined us toda…
S49
AI as critical infrastructure for continuity in public services — Pramod argues that true data sovereignty goes beyond simply storing data locally. It requires having control over jurisd…
S50
What is it about AI that we need to regulate? — The question of achieving interoperability of data systems and data governance arrangements across different stakeholder…
S51
High-level AI Standards panel — Need for Enhanced Collaboration Among Standards Organizations The UK government advocates for an open, inclusive, multi…
S52
Internet Fragmentation: Perspectives & Collaboration | IGF 2023 WS #405 — Efforts are ongoing to streamline internet governance legislation globally. The objective is to develop a cohesive frame…
S53
How to make AI governance fit for purpose? — – Jennifer Bachus- Shan Zhongde International Cooperation and Standards Legal and regulatory | Infrastructure Role of…
S54
The Future of Public Safety AI-Powered Citizen-Centric Policing in India — This example demonstrates that scale is achievable when solutions genuinely address user needs. The ‘meeting halfway’ co…
S55
Sovereign AI for India – Building Indigenous Capabilities for National and Global Impact — – Indian models beating global benchmarks on India-specific use cases, such as OCR for handwritten notes in Indian langu…
S56
Harnessing Collective AI for India’s Social and Economic Development — Moderator: sci-fi movies that we grew up watching and what it primarily also reminds me of is in speci…
S57
https://app.faicon.ai/ai-impact-summit-2026/keynote-nikesh-arora — India has already shown the world what is possible when innovation is paired with inclusion through digital public infra…
S58
Scaling Trusted AI_ How France and India Are Building Industrial & Innovation Bridges — Thank you. A very warm good morning to all of you, and thank you, Business France, for having me here. It’s a pleasure t…
S59
AI for agriculture Scaling Intelegence for food and climate resiliance — It is being designed as a replicable public infrastructure model for India and the entire global south. In partnership w…
S60
Keynote Address_Revanth Reddy_Chief Minister Telangana — Overall Tone:The tone was consistently ambitious, urgent, and nationalistic throughout. The speaker maintained an inspir…
S61
Building the Future STPI Global Partnerships & Startup Felicitation 2026 — The tone was consistently optimistic, collaborative, and forward-looking throughout the session. It maintained a formal …
S62
Opening — The overall tone was formal yet optimistic. Speakers acknowledged the serious challenges posed by rapid technological ch…
S63
Summit Opening Session — The tone throughout is consistently formal, diplomatic, and collaborative. Speakers maintain an optimistic and forward-l…
S64
Keynote-Rishi Sunak — Overall Tone:The tone was consistently optimistic and inspirational throughout. Sunak maintained an enthusiastic, forwar…
S65
Policy Network on Internet Fragmentation (PNIF) — Dhruv emphasized that technical layer fragmentation poses the highest risk from their perspective, as it would have the …
S66
Critical Infrastructure in the Digital Age: From Deep Sea Cables to Orbital Satellites — The discussion maintained a balanced tone that was simultaneously informative and concerning. It began with an education…
S67
Panel 2 – Anticipating and Mitigating Risks Along the Global Subsea Network  — The discussion maintained a professional, collaborative tone throughout, with participants demonstrating technical exper…
S68
Can a layered policy approach stop Internet fragmentation? | IGF 2023 WS #273 — Another viewpoint suggests examining fragmentation in terms of time and driving factors. The speaker emphasizes the need…
S69
Designing Indias Digital Future AI at the Core 6G at the Edge — The discussion maintained an optimistic and forward-looking tone throughout, characterized by technical expertise and st…
S70
Fireside Chat The Future of AI & STEM Education in India — The discussion maintained an optimistic yet realistic tone throughout. It began with cautious acknowledgment of AI’s dis…
S71
Connecting the Unconnected in the field of Education Excellence, Cyber Security & Rural Solutions and Women Empowerment in ICT — The discussion maintained a consistently positive and celebratory tone throughout, with speakers expressing pride in Ind…
S72
AI-Powered Chips and Skills Shaping Indias Next-Gen Workforce — The discussion maintained a consistently optimistic and collaborative tone throughout. Speakers expressed enthusiasm abo…
S73
WS #279 AI: Guardian for Critical Infrastructure in Developing World — The tone of the discussion was largely informative and collaborative. Speakers shared insights from their various backgr…
S74
Panel Discussion AI in Digital Public Infrastructure (DPI) India AI Impact Summit — 978 words | 155 words per minute | Duration: 377 seconds | So I think, firstly, India’s journey in DPIs has been a fasci…
S75
WS #257 Data for Impact Equitable Sustainable DPI Data Governance — Digital Public Infrastructure (DPI) is a key driver of national digital transformation, fostering inclusive innovation a…
S76
Panel Discussion AI in Digital Public Infrastructure (DPI) India AI Impact Summit — And accessibility has to be also broadened in terms of multi-modality and also, where necessary, include a human in the…
S77
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Hemant Taneja General Catalyst — Taneja argued that India is uniquely positioned to lead in AI deployment due to its status as the world’s strongest grow…
S78
https://dig.watch/event/india-ai-impact-summit-2026/secure-finance-risk-based-ai-policy-for-the-banking-sector — Now coming back to my address, proposed address, I’m coming back to this now. It’s indeed a privilege to participate in …
S79
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Hemant Taneja General Catalyst — The discussion featured Hemant Taneja, CEO of General Catalyst venture capital firm, speaking at an AI summit about resp…
S80
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Jeetu Patel President and Chief Product Officer Cisco Inc — Patel argues that India’s digital infrastructure, particularly the Aadhaar common identity system and UPI payment system…
S81
Trusted Connections_ Ethical AI in Telecom & 6G Networks — Distinguished leaders from the technology companies, from telecom service providers and industry associations, represent…
S82
Scaling Trusted AI_ How France and India Are Building Industrial & Innovation Bridges — And thank you. And maybe I will introduce a few of them. Agri-Co is transforming agriculture through digital tools that…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
J
Julian Gorman
2 arguments · 159 words per minute · 627 words · 235 seconds
Argument 1
Networks as intelligent, programmable, trusted layer essential for AI and public services (Julian Gorman)
EXPLANATION
Julian explains that mobile networks are no longer just connectivity providers; they have become intelligent, programmable, and trusted layers that enable AI models, edge services, fraud prevention, and secure digital identity. This shift positions networks as core components of national digital public infrastructure.
EVIDENCE
He states that “Today’s mobile networks are becoming intelligent, programmable and trusted layers of the national infrastructure and they’re shaping how AI models perform… and how fraud is stopped before it happens and how digital identity remains secure” [14]. He also notes that networks now support core DPI functions such as identity verification, payments, and emergency response [15].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The session transcript notes that today’s mobile networks are becoming intelligent, programmable and trusted layers of national infrastructure, shaping AI model performance, edge services, fraud prevention and digital identity security [S2][S4].
MAJOR DISCUSSION POINT
Evolution of telecom networks into AI‑enabled infrastructure
AGREED WITH
Debashish Chakraborty, Rahul Vatts, Speaker 1
Argument 2
Sovereignty extends beyond data localisation to strategic control of infrastructure, standards and AI models (Julian Gorman)
EXPLANATION
Julian argues that in an AI‑driven world, digital sovereignty is not just about where data is stored but also about controlling the underlying infrastructure, standards, and the intelligence that powers national systems. Countries need the ability to manage these elements to ensure safe, interoperable public infrastructure.
EVIDENCE
He says, “In an AI-driven world, sovereignty is no longer just about where the data is stored, it’s about having strategic control over the infrastructure” and that “the key to this is the ability to manage the infrastructure, the standards, and increasingly, the intelligence that underpins the national digital system” [19-20].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Discussion on data sovereignty highlights that it goes beyond localisation to include control over standards, decision-making systems and strategic autonomy, echoing the speaker’s point [S2][S4][S17].
MAJOR DISCUSSION POINT
Broadening the concept of digital sovereignty
AGREED WITH
Mansi Kedia, Speaker 1, Debashish Chakraborty
DISAGREED WITH
Rahul Vatts, Deepak Maheshwari
R
Rahul Vatts
3 arguments · 179 words per minute · 2128 words · 712 seconds
Argument 1
Airtel’s network as the trust foundation for UPI, OTP, fraud mitigation and large‑scale transactions (Rahul Vatts)
EXPLANATION
Rahul highlights that Airtel’s extensive connectivity infrastructure underpins India’s massive digital public services such as UPI, OTP‑based verification, and fraud‑prevention mechanisms, enabling billions of transactions with high trust.
EVIDENCE
He cites that “India transacted 28 lakh crores rupees of money through its UPI infrastructure in January alone, spread across a billion people” and that this rests on the “connectivity layer” with “more than a million BTSs” and “500 lakh kilometres of fiber” [51-55]. He also describes OTP/SMS as a trust layer [55-58] and Aadhaar-enabled payments processed in under 2 ms [59-62].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The panel describes the telecom trust layer that underpins India’s massive digital payments ecosystem, citing fraud-prevention products, OTP friction and billions of UPI transactions enabled by Airtel’s connectivity infrastructure [S2][S4].
MAJOR DISCUSSION POINT
Network as trust layer for digital payments
AGREED WITH
Julian Gorman, Debashish Chakraborty, Speaker 1
Argument 2
Sovereign cloud requires local data residency, control‑plane ownership, and protection from foreign jurisdiction (Rahul Vatts)
EXPLANATION
Rahul outlines four dimensions of true data sovereignty: physical residency of data, control‑plane ownership within the country, operational control over network upgrades, and protection from foreign legal reach such as the US CLOUD Act.
EVIDENCE
He lists the slices: data residing in the country, control-plane of the cloud being in India, operational sovereignty (where patches are applied), and jurisdictional sovereignty (exposure to foreign laws) [235-256]. He also mentions Airtel’s own sovereign cloud offering and its capacity to handle 140 crore transactions per second [262-267].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Vatts outlines four dimensions of true data sovereignty – physical residency, control-plane ownership, operational control and jurisdictional protection – and other experts stress the need for control over encryption keys and legal reach [S2][S17][S18].
MAJOR DISCUSSION POINT
Components of a sovereign cloud
AGREED WITH
Julian Gorman, Deepak Maheshwari, Mansi Kedia
Argument 3
Airtel’s export of DPI solutions to Africa demonstrates practical transfer of infrastructure, identity and payment services (Rahul Vatts)
EXPLANATION
Rahul describes how Airtel is replicating India’s digital public infrastructure blueprint in African markets, providing hardware, software, and cloud services to enable identity, payments, and other public services.
EVIDENCE
He explains that Airtel is “in conversation with a lot of African leaders to transplant the India stack onto the African ecosystem” by offering a bundle of hardware, software, and an “air-gapped cloud” to build digital public infrastructure in those countries [64-70].
MAJOR DISCUSSION POINT
Exporting India’s DPI model
AGREED WITH
Deepak Maheshwari, Mansi Kedia, Julian Gorman
S
Speaker 1
3 arguments · 159 words per minute · 1687 words · 633 seconds
Argument 1
TSPs provide contextual data enrichment via open APIs, turning raw connectivity into decision‑making fabric (Speaker 1)
EXPLANATION
The speaker explains that Telecom Service Providers (TSPs) add value by enriching raw network data with context, making it usable for banks and other institutions to make real‑time decisions, and they expose this enriched data through open APIs.
EVIDENCE
He mentions four key words, including “context and enrichment” and describes how TSPs “provide a large amount of context-driven information” that can be consumed by banks for fraud detection and authentication [82-84][101-108]. He also notes the creation of open APIs such as FRI and the Digital Intelligence Platform that expose this enriched data to financial institutions [98-101][119-124].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The speaker explains that telecom service providers expose rich, context-driven information through open APIs (e.g., FRI, Digital Intelligence Platform) that banks use for real-time fraud detection and credit assessment [S2][S4].
MAJOR DISCUSSION POINT
Contextual enrichment by TSPs
AGREED WITH
Julian Gorman, Debashish Chakraborty, Rahul Vatts
Argument 2
Open APIs and collaborative TSP frameworks prevent parallel, fragmented DPI layers and ensure complementarity (Speaker 1)
EXPLANATION
The speaker argues that collaboration among a limited number of TSPs, using converged platforms and open APIs, avoids duplication of effort and ensures that new DPI layers complement existing operator‑led capabilities.
EVIDENCE
He describes how “all three of us, four of us are operated in converged platform” and that they work together with DOT to set up open APIs for institutions, preventing parallel DPI layers [112-119][120-126].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Panelists describe a converged platform where a limited set of TSPs work together with the DOT to set up open APIs, avoiding duplicated digital-public-infrastructure layers [S2][S4].
MAJOR DISCUSSION POINT
Avoiding duplication through open APIs
AGREED WITH
Julian Gorman, Mansi Kedia, Debashish Chakraborty
Argument 3
Need for explainability, accountability and referenceable standards/playbooks to govern AI decisions in telecom (Speaker 1)
EXPLANATION
The speaker stresses that AI‑driven telecom services must be explainable and accountable, requiring referenceable standards or playbooks that can be adapted by operators, and that regulators need to evolve to support these requirements.
EVIDENCE
He notes that “when we leverage AI, we would want the AI to explain” and calls for a “referenceable standard” or playbook to guide implementations [285-289]. He also discusses the need for frameworks covering explainability, accountability, and security, and that standards must be adaptable to individual enterprises [310-313].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The discussion flags a tension between AI explainability and security in fraud-prevention, calls for referenceable standards/playbooks, and cites broader AI governance concerns such as guardrails and accountability [S4][S23][S24].
MAJOR DISCUSSION POINT
Governance frameworks for AI in telecom
AGREED WITH
Debashish Chakraborty, Rahul Vatts
DISAGREED WITH
Mansi Kedia
D
Deepak Maheshwari
3 arguments · 172 words per minute · 1833 words · 637 seconds
Argument 1
Sovereignty must balance physical control, citizen agency, and participation in global standard‑setting (Deepak Maheshwari)
EXPLANATION
Deepak outlines three layers of data sovereignty: physical and administrative control for sensitive data, citizen‑driven choice over personal data usage, and active participation in global standard‑setting bodies to avoid a one‑way lock‑in.
EVIDENCE
He describes the three perspectives: (1) data that must stay within India for defence and finance [157-159]; (2) citizen data where individuals may wish to share data abroad, e.g., visa applications [160-163]; (3) business data where India is a global outsourcing hub but must balance inbound and outbound data flows, noting the need for participation in standards bodies like GSMA, ISO, ITU, IEEE [164-176].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The conversation expands sovereignty to three layers – physical/administrative control, citizen-driven data sharing choices, and active participation in global standards bodies – aligning with the multi-dimensional view discussed in the session [S2][S4][S17].
MAJOR DISCUSSION POINT
Multi‑dimensional view of data sovereignty
AGREED WITH
Julian Gorman, Rahul Vatts, Mansi Kedia
DISAGREED WITH
Julian Gorman, Rahul Vatts
Argument 2
India’s DPI model is open, royalty‑free and adaptable, enabling other nations to adopt without proprietary lock‑in (Deepak Maheshwari)
EXPLANATION
Deepak emphasizes that India’s Digital Public Infrastructure framework is based on open protocols and does not charge licensing fees, allowing other countries to adopt, adapt, and scale the model freely.
EVIDENCE
He states that “India’s DPI-led model… nothing of that sort is going… it’s a framework… an open protocol… India doesn’t ask for that type of thing” [318-327].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The panel notes that India’s Digital Public Infrastructure is built on open protocols with no licensing fees, allowing other countries to replicate and adapt the model freely [S2][S4].
MAJOR DISCUSSION POINT
Open, non‑proprietary DPI model
AGREED WITH
Rahul Vatts, Mansi Kedia, Julian Gorman
Argument 3
India offers an open, non‑proprietary DPI framework supported by diplomatic channels, enabling replication in developing economies (Deepak Maheshwari)
EXPLANATION
Deepak points out that beyond the technical framework, India leverages diplomatic and research institutions to promote the DPI model globally, providing soft‑diplomacy and capacity‑building support to the Global South.
EVIDENCE
He mentions the involvement of the Ministry of External Affairs, Indian Council of World Affairs, and a policy brief “Global South’s AI Pivot” to promote the model, highlighting diplomatic enablement [330-340].
MAJOR DISCUSSION POINT
Diplomatic support for DPI diffusion
M
Mansi Kedia
2 arguments · 171 words per minute · 953 words · 334 seconds
Argument 1
Siloed public‑private systems miss efficiency, innovation and trust; global standards or flexible blueprints are essential (Mansi Kedia)
EXPLANATION
Mansi argues that when public and private digital systems are built in isolation, they create vulnerabilities, reduce efficiency, and stifle innovation; therefore, global standards or adaptable blueprints are needed to create trusted, efficient ecosystems.
EVIDENCE
She notes that “systems coming together help build trust… independent systems mean more points of vulnerability” and that silos miss “efficiency, innovation, and building trusted ecosystems” [204-218]. She also differentiates standards (prescriptive) from blueprints (flexible) and stresses the need for both [219-232].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Other participants stress that DPI must be integrated across sectors, with global standards or adaptable blueprints needed to avoid fragmented, vulnerable systems and to foster efficiency and innovation [S20][S21][S22].
MAJOR DISCUSSION POINT
Risks of siloed digital infrastructure
AGREED WITH
Julian Gorman, Speaker 1, Debashish Chakraborty
DISAGREED WITH
Speaker 1
Argument 2
World Bank views India’s DPI experience as a proven blueprint for emerging markets, informing digital development strategies (Mansi Kedia)
EXPLANATION
Mansi explains that the World Bank has documented India’s DPI as a benchmark, producing reports and blueprints that other countries can adapt, highlighting India’s scale and heterogeneity as evidence of what works.
EVIDENCE
She references the World Bank’s “digital public infrastructure and development report” that outlines principles, objectives, and definitions of DPI, and notes that the Bank uses blueprints to guide other nations [353-368].
MAJOR DISCUSSION POINT
India’s DPI as a global blueprint
AGREED WITH
Rahul Vatts, Deepak Maheshwari, Julian Gorman
D
Debashish Chakraborty
2 arguments · 127 words per minute · 1070 words · 503 seconds
Argument 1
Networks have shifted from passive carriers to active contributors to governance, resilience and trust (Debashish Chakraborty)
EXPLANATION
Debashish observes that telecom networks have evolved beyond voice and broadband to become intelligent platforms that support AI, digital identity, fraud mitigation, and sovereign data handling, thereby playing a governance role.
EVIDENCE
He states that “today’s network are no longer passive carriers of data. They are becoming intelligent platforms where AI is deployed… digital identity is authenticated, where fraud is mitigated, where sovereignty over data and decision-making is increasingly exercised” [39-41].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Speakers observe that modern networks are no longer passive data pipes but intelligent platforms supporting AI, digital identity, fraud mitigation and sovereign data handling, thereby playing a governance role [S2][S4].
MAJOR DISCUSSION POINT
Network evolution to governance role
AGREED WITH
Julian Gorman, Rahul Vatts, Speaker 1
Argument 2
Emerging policy frictions stem from expanding operator roles, data privacy mandates and the definition of digital intermediaries (Debashish Chakraborty)
EXPLANATION
Debashish raises concerns that new layers of digital public infrastructure risk duplication unless coordinated with existing operator capabilities, and that regulatory definitions around digital intermediaries and privacy are becoming friction points.
EVIDENCE
He asks how to ensure MNO-added layers complement rather than duplicate “Open Gateway APIs” and mentions “parallel digital infrastructure structures” as a concern [76-78]. Later he references regulatory challenges around digital intermediaries and data privacy [94-96].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Debashish raises concerns about parallel digital-infrastructure layers, the need for coordination with existing operator capabilities, and regulatory challenges around digital intermediaries and privacy [S4][S2].
MAJOR DISCUSSION POINT
Policy frictions with AI‑enabled networks
AGREED WITH
Speaker 1, Rahul Vatts
A
Audience
1 argument · 144 words per minute · 186 words · 77 seconds
Argument 1
Proposal for “data‑embassy” wearable (KYC ring) that stores personal data locally with cryptographic safeguards (Audience)
EXPLANATION
An audience member suggests a wearable ring that would hold a person’s KYC and medical records securely on the device, using encryption and blockchain to ensure data is only accessible with consent and remains protected if the device leaves the body.
EVIDENCE
The participant describes an “Aadhaar ring” that would store KYC and medical records, remain encrypted when removed, require a second key for access, and be recorded on a blockchain as a data embassy [371-374].
MAJOR DISCUSSION POINT
Wearable data‑embassy concept
DISAGREED WITH
Rahul Vatts, Deepak Maheshwari
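The two-key access model the participant describes can be sketched with simple key splitting: the wearable holds one key share and the owner holds the other, so neither alone can decrypt the record. This is an illustrative sketch of the idea, not anything presented in the session; the names are invented, and a real device would use authenticated encryption (e.g., AES-GCM on a secure element) rather than the toy SHA-256 stream below.

```python
import hashlib
import secrets

def split_key(master: bytes) -> tuple[bytes, bytes]:
    """Split a master key into two XOR shares: one stays on the wearable,
    one is held by the owner (the 'second key'). Both are needed to rebuild it."""
    share_device = secrets.token_bytes(len(master))
    share_owner = bytes(a ^ b for a, b in zip(master, share_device))
    return share_device, share_owner

def combine(share_device: bytes, share_owner: bytes) -> bytes:
    """Recombine the two shares into the master key."""
    return bytes(a ^ b for a, b in zip(share_device, share_owner))

def xor_stream(key: bytes, data: bytes) -> bytes:
    """Toy stream cipher: expand the key with SHA-256 in counter mode.
    Illustration only; a production design would use AES-GCM."""
    stream = bytearray()
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(b ^ k for b, k in zip(data, stream))

# Enrolment: the master key is split, then the record is encrypted on-device.
master = secrets.token_bytes(32)
dev_share, owner_share = split_key(master)
record = b'{"kyc_id": "XXXX-1234", "blood_group": "O+"}'
ciphertext = xor_stream(master, record)

# Access requires BOTH shares (device + owner consent).
assert xor_stream(combine(dev_share, owner_share), ciphertext) == record
# With only the device share, the record stays opaque.
assert xor_stream(dev_share, ciphertext) != record
```

The blockchain element of the proposal would sit outside this sketch, logging consent grants and access events rather than the data itself.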
Agreements
Agreement Points
Telecom networks have evolved into intelligent, programmable, trusted layers that enable AI-driven public services and digital trust
Speakers: Julian Gorman, Debashish Chakraborty, Rahul Vatts, Speaker 1
Networks as intelligent, programmable, trusted layer essential for AI and public services (Julian Gorman) Networks have shifted from passive carriers to active contributors to governance, resilience and trust (Debashish Chakraborty) Airtel’s network as the trust foundation for UPI, OTP, fraud mitigation and large‑scale transactions (Rahul Vatts) TSPs provide contextual data enrichment via open APIs, turning raw connectivity into decision‑making fabric (Speaker 1)
All speakers emphasize that modern telecom infrastructure is no longer a simple connectivity pipe but an intelligent platform that powers AI models, fraud prevention, digital identity and large-scale financial transactions, thereby becoming a core component of national digital public infrastructure [14-15][39-41][51-55][82-84][101-108].
POLICY CONTEXT (KNOWLEDGE BASE)
The evolution of programmable telecom networks for AI-driven public services is reflected in the TRI’s risk-based regulatory recommendations for AI in telecom, which stress trusted infrastructure and high-risk oversight [S39], and in analyses of telecom guardrails for critical services [S41].
Digital sovereignty must go beyond data localisation to include strategic control over infrastructure, standards and AI models
Speakers: Julian Gorman, Deepak Maheshwari, Rahul Vatts, Mansi Kedia
Sovereignty extends beyond data localisation to strategic control of infrastructure, standards and AI models (Julian Gorman) Sovereignty must balance physical control, citizen agency, and participation in global standard‑setting (Deepak Maheshwari) Sovereign cloud requires local data residency, control‑plane ownership, and protection from foreign jurisdiction (Rahul Vatts) Siloed public‑private systems miss efficiency, innovation and trust; global standards or flexible blueprints are essential (Mansi Kedia)
The panel agrees that true data sovereignty involves physical residency, control-plane ownership, operational autonomy and participation in global standard-setting, rather than merely where data is stored [19-20][157-176][235-256][219-232].
POLICY CONTEXT (KNOWLEDGE BASE)
Policy discussions emphasize that digital sovereignty extends beyond mere data localisation to strategic control of infrastructure, standards and AI models, as highlighted by the EU’s GAIA-X sovereign cloud initiative and broader sovereignty debates [S36], and by definitions that link sovereignty to regulatory and technical self-determination [S37], as well as calls for control over legal frameworks and encryption keys [S45].
India’s open, royalty‑free DPI model can be replicated globally without proprietary lock‑in
Speakers: Rahul Vatts, Deepak Maheshwari, Mansi Kedia, Julian Gorman
Airtel’s export of DPI solutions to Africa demonstrates practical transfer of infrastructure, identity and payment services (Rahul Vatts) India’s DPI model is open, royalty‑free and adaptable, enabling other nations to adopt without proprietary lock‑in (Deepak Maheshwari) World Bank views India’s DPI experience as a proven blueprint for emerging markets, informing digital development strategies (Mansi Kedia) Global standards matter to ensure interoperable, safe AI‑enabled public infrastructure (Julian Gorman)
All agree that India’s DPI framework, built on open protocols and without licensing fees, provides a scalable blueprint that can be exported to other regions, supported by standards and multilateral guidance [64-70][318-327][353-368][22-24].
POLICY CONTEXT (KNOWLEDGE BASE)
India’s open, royalty-free digital public infrastructure (DPI) model has been cited internationally as a benchmark, with President Macron noting DPI as India’s biggest export and the World Bank highlighting its replicability without proprietary lock-in [S34], while broader analyses of global DPI investments underscore its relevance [S35].
Open APIs, standards and collaborative frameworks are essential to avoid fragmented parallel DPI layers
Speakers: Julian Gorman, Mansi Kedia, Speaker 1, Debashish Chakraborty
Sovereignty extends beyond data localisation to strategic control of infrastructure, standards and AI models (Julian Gorman) Siloed public‑private systems miss efficiency, innovation and trust; global standards or flexible blueprints are essential (Mansi Kedia) Open APIs and collaborative TSP frameworks prevent parallel, fragmented DPI layers and ensure complementarity (Speaker 1) Emerging policy frictions stem from parallel digital infrastructure structures and need coordination with Open Gateway APIs (Debashish Chakraborty)
Consensus that interoperable standards, open APIs and coordinated TSP efforts are needed to prevent duplication and ensure that new DPI layers complement existing operator capabilities [22-24][219-232][112-126][76-78].
POLICY CONTEXT (KNOWLEDGE BASE)
The need for open APIs, standards and collaborative frameworks to avoid fragmented DPI layers aligns with the World Bank’s distinction between rigid standards and flexible blueprints for digital infrastructure development [S33] and with the push for interoperable public platforms in global DPI initiatives [S35].
AI‑driven telecom services require explainability, accountability and referenceable governance frameworks
Speakers: Speaker 1, Debashish Chakraborty, Rahul Vatts
Need for explainability, accountability and referenceable standards/playbooks to govern AI decisions in telecom (Speaker 1) Emerging policy frictions stem from expanding operator roles, data privacy mandates and the definition of digital intermediaries (Debashish Chakraborty) Sovereign cloud requires selective data residency and control, highlighting jurisdictional accountability (Rahul Vatts)
Speakers converge on the need for clear governance, explainability and accountability mechanisms for AI-enabled network functions, alongside regulatory clarity on data control and digital intermediaries [285-289][94-96][235-256].
POLICY CONTEXT (KNOWLEDGE BASE)
Calls for explainability, accountability and governance frameworks for AI-driven telecom services echo the TRI’s recommendation for a risk-based regulatory framework that mandates transparency for high-risk AI applications in telecom [S39] and the broader view of AI as critical infrastructure requiring robust governance [S45].
Similar Viewpoints
Both stress that sovereignty is multi‑dimensional: beyond mere data localisation it includes control over infrastructure, legal jurisdiction and active participation in standards bodies [235-256][157-176].
Speakers: Rahul Vatts, Deepak Maheshwari
Sovereign cloud requires local data residency, control‑plane ownership, and protection from foreign jurisdiction (Rahul Vatts) Sovereignty must balance physical control, citizen agency, and participation in global standard‑setting (Deepak Maheshwari)
Both highlight the importance of open, adaptable frameworks (blueprints or standards) that can be reused by other countries without restrictive licensing [219-232][318-327].
Speakers: Mansi Kedia, Deepak Maheshwari
Siloed public‑private systems miss efficiency, innovation and trust; global standards or flexible blueprints are essential (Mansi Kedia) India’s DPI model is open, royalty‑free and adaptable, enabling other nations to adopt without proprietary lock‑in (Deepak Maheshwari)
Both call for referenceable, adaptable standards or playbooks to ensure trustworthy, interoperable AI‑enabled telecom services [285-289][219-232].
Speakers: Speaker 1, Mansi Kedia
Need for explainability, accountability and referenceable standards/playbooks to govern AI decisions in telecom (Speaker 1) Siloed public‑private systems miss efficiency, innovation and trust; global standards or flexible blueprints are essential (Mansi Kedia)
Unexpected Consensus
Support for a wearable “data‑embassy” concept to store personal KYC/medical data locally
Speakers: Audience, Deepak Maheshwari
Proposal for “data‑embassy” wearable (KYC ring) that stores personal data locally with cryptographic safeguards (Audience) I would say yes if it is on reciprocal basis (Deepak Maheshwari)
A policy researcher (Deepak) unexpectedly endorses a consumer-focused wearable data-embassy idea, indicating openness to novel data-sovereignty mechanisms beyond institutional frameworks [371-374][375].
POLICY CONTEXT (KNOWLEDGE BASE)
The wearable ‘data-embassy’ concept builds on the emerging model of data embassies that keep data outside single geographic boundaries to mitigate crises, as demonstrated by Estonia and Monaco’s cloud-based data-embassy pilots [S42], and responds to concerns about the fragility of existing data-protection regimes [S44].
Telecom operators advocating for both use of hyperscaler clouds and a sovereign cloud offering
Speakers: Rahul Vatts, Julian Gorman
Sovereign cloud requires local data residency, control‑plane ownership, and protection from foreign jurisdiction (Rahul Vatts) Countries need to build AI‑enabled public infrastructure that is safe, interoperable, and aligned with national priorities while staying connected to global markets (Julian Gorman)
Rahul stresses the need for a sovereign cloud yet also acknowledges the efficiencies of hyperscaler clouds, aligning with Julian’s call for interoperable yet sovereign AI-enabled infrastructure, an unexpected harmony between national control and global integration [235-256][22-24].
POLICY CONTEXT (KNOWLEDGE BASE)
Telecom operators’ dual advocacy for hyperscaler clouds and sovereign cloud offerings reflects the trend toward sovereign versions of hyperscaler services, highlighted in analyses of future cloud markets [S46] and the EU’s GAIA-X sovereign cloud strategy [S36].
Overall Assessment

The panel shows strong convergence on four core themes: (1) telecom networks are now intelligent, AI‑enabled public infrastructure; (2) digital sovereignty must encompass control over infrastructure, standards and legal jurisdiction; (3) India’s open, royalty‑free DPI model offers a replicable blueprint for the Global South; (4) interoperable standards, open APIs and collaborative governance frameworks are essential to avoid fragmentation and ensure trust. These shared positions cut across ICT for development, AI, data governance and the enabling environment, indicating a high level of consensus that can drive coordinated policy and implementation actions.

High consensus – multiple speakers from industry, policy, and multilateral institutions repeatedly echo the same viewpoints, suggesting that concrete collaborative initiatives (e.g., standard‑setting, open‑API platforms, sovereign cloud frameworks) are feasible and likely to gain broad support.

Differences
Different Viewpoints
Definition and scope of digital/data sovereignty
Speakers: Julian Gorman, Rahul Vatts, Deepak Maheshwari
Sovereignty extends beyond data localisation to strategic control of infrastructure, standards and AI models (Julian Gorman) Sovereign cloud requires local data residency, control‑plane ownership, and protection from foreign jurisdiction (Rahul Vatts) Sovereignty must balance physical control, citizen agency, and participation in global standard‑setting (Deepak Maheshwari)
Julian stresses that sovereignty is about strategic control over infrastructure, standards and AI, not just where data sits [19-20]. Rahul breaks sovereignty into four technical slices – data residency, control-plane location, operational control and jurisdictional exposure [235-256]. Deepak adds a three-layer view: physical/administrative control for sensitive data, citizen-driven choice for personal data, and active participation in global standards bodies [157-176]. The speakers therefore disagree on which elements are primary and how broadly sovereignty should be framed.
POLICY CONTEXT (KNOWLEDGE BASE)
The definition and scope of digital/data sovereignty are debated, with scholars defining it as a nation’s ability to develop and regulate digital technologies for self-determination [S37] and policy briefs expanding the concept to include control over legal frameworks, encryption keys and infrastructure management [S45].
Role of standards versus flexible blueprints for integrating public and private DPI layers
Speakers: Mansi Kedia, Speaker 1
Siloed public‑private systems miss efficiency, innovation and trust; global standards or flexible blueprints are essential (Mansi Kedia) Need for explainability, accountability and referenceable standards/playbooks to govern AI decisions in telecom (Speaker 1)
Mansi argues that strict standards are too prescriptive and that adaptable blueprints are needed to avoid silos and foster trust, efficiency and innovation [204-218][219-232]. Speaker 1 calls for referenceable standards or playbooks to ensure AI explainability and accountability, implying that standards can be the main governance tool [285-289][310-313]. The disagreement lies in whether standards alone are sufficient or whether more flexible, blueprint-type guidance is required.
POLICY CONTEXT (KNOWLEDGE BASE)
The tension between prescriptive standards and flexible blueprints for integrating public and private DPI layers is captured in the World Bank’s discussion of the need for adaptable standards that can be tailored to context while still providing interoperability [S33].
Necessity of new wearable “data‑embassy” solution versus existing data security mechanisms
Speakers: Audience, Rahul Vatts, Deepak Maheshwari
Proposal for “data‑embassy” wearable (KYC ring) that stores personal data locally with cryptographic safeguards (Audience) Sovereign cloud requires local data residency, control‑plane ownership, and protection from foreign jurisdiction (Rahul Vatts) I would say yes if it is on reciprocal basis (Deepak Maheshwari)
An audience member proposes a ring that stores KYC/medical data locally with encryption and blockchain records [371-374]. Rahul counters that data security is already robust in Aadhaar and the real issue is governmental data handling, implying no need for such a device [376-381]. Deepak acknowledges the concept could work if based on reciprocal agreements [375]. The speakers thus disagree on whether a new wearable data-embassy is necessary.
POLICY CONTEXT (KNOWLEDGE BASE)
The necessity of a new wearable ‘data-embassy’ solution is examined against existing data-security mechanisms, with the data-embassy model offering resilience to geopolitical or disaster risks beyond traditional localisation approaches [S42].
Consistency of Rahul’s stance on sovereignty
Speakers: Rahul Vatts
Sovereign cloud requires local data residency, control‑plane ownership, and protection from foreign jurisdiction (Rahul Vatts)
Rahul initially outlines a detailed, multi-slice definition of sovereignty (data residency, control-plane, operational, jurisdictional) [235-256], but later states “there is no sovereignty which is involved” when referring to existing sovereign-cloud offerings [258]. This internal inconsistency reflects a contradictory position on the relevance of sovereignty in practice.
Unexpected Differences
Wearable data‑embassy proposal versus existing data security claims
Speakers: Audience, Rahul Vatts, Deepak Maheshwari
Proposal for “data‑embassy” wearable (KYC ring) that stores personal data locally with cryptographic safeguards (Audience) Sovereign cloud requires local data residency, control‑plane ownership, and protection from foreign jurisdiction (Rahul Vatts) I would say yes if it is on reciprocal basis (Deepak Maheshwari)
The audience’s innovative suggestion of a ring-based data embassy was met with Rahul’s assertion that current Aadhaar-based systems are already secure and that the problem lies elsewhere, not in the technology itself [376-381]. Deepak’s conditional acceptance adds a diplomatic nuance. The clash between a novel technical solution and the claim that existing mechanisms are sufficient was not anticipated.
POLICY CONTEXT (KNOWLEDGE BASE)
Debates over the wearable data-embassy versus existing security claims reference the same data-embassy experiences in Estonia and Monaco, which illustrate how sovereign data hosting can complement or surpass conventional security measures [S42].
Rahul’s contradictory statements on the relevance of sovereignty
Speakers: Rahul Vatts
Sovereign cloud requires local data residency, control‑plane ownership, and protection from foreign jurisdiction (Rahul Vatts) there is no sovereignty which is involved (Rahul Vatts)
Rahul first provides a detailed, four-slice definition of data sovereignty, emphasizing its importance [235-256], but later dismisses the concept by saying “there is no sovereignty which is involved” when discussing sovereign-cloud offerings [258]. This internal inconsistency was unexpected.
Overall Assessment

The panel shows moderate disagreement centred on how to define and operationalise data sovereignty, the balance between strict standards and flexible blueprints, and the necessity of new technical solutions such as wearable data‑embassies. While participants share common goals—integrated, trustworthy DPI and global diffusion of India’s model—their preferred pathways diverge, reflecting differing priorities between strategic control, technical implementation, and regulatory flexibility.

Medium level of disagreement: substantive but not polarising. The divergences highlight the need for further consensus‑building on sovereignty frameworks and standards design to ensure coherent policy and industry action.

Partial Agreements
All three agree that the overarching goal is to avoid fragmented, parallel DPI structures and to build trusted, efficient ecosystems. Debashish raises the risk of duplicate operator‑led capabilities [76-78]; Speaker 1 proposes converged platforms and open APIs to prevent duplication [112-126]; Mansi stresses that standards or blueprints are needed to integrate systems and avoid silos [204-218]. They differ on the mechanism—API‑centric collaboration versus broader standards/blueprints.
Speakers: Debashish Chakraborty, Speaker 1, Mansi Kedia
Networks have shifted from passive carriers to active contributors to governance, resilience and trust (Debashish Chakraborty) Open APIs and collaborative TSP frameworks prevent parallel, fragmented DPI layers and ensure complementarity (Speaker 1) Siloed public‑private systems miss efficiency, innovation and trust; global standards or flexible blueprints are essential (Mansi Kedia)
All agree that India’s DPI model should be leveraged internationally. Rahul highlights Airtel’s export of DPI solutions to Africa as a practical transfer [64-70]; Deepak stresses the open, non‑proprietary protocol that can be freely adopted [318-327]; Mansi points to World Bank reports that use India’s DPI as a benchmark for other countries [353-368]. The disagreement lies in the preferred pathway—private‑sector product bundles versus open protocol and multilateral blueprints.
Speakers: Rahul Vatts, Deepak Maheshwari, Mansi Kedia
Airtel’s network as the trust foundation for UPI, OTP, fraud mitigation and large‑scale transactions (Rahul Vatts) India’s DPI model is open, royalty‑free and adaptable, enabling other nations to adopt without proprietary lock‑in (Deepak Maheshwari) World Bank views India’s DPI experience as a proven blueprint for emerging markets, informing digital development strategies (Mansi Kedia)
Takeaways
Key takeaways
Telecom networks have evolved from passive connectivity to intelligent, programmable platforms that embed AI and support digital public infrastructure (DPI) services such as identity verification, payments, fraud mitigation, and emergency response.
Networks are now active contributors to governance, resilience and trust, providing contextual data enrichment through open APIs that enable real‑time decision making for banks, regulators and other service providers.
Data sovereignty in the AI era goes beyond physical data localisation; it requires control over the infrastructure, standards, AI models and the control‑plane, while still remaining interoperable with global markets.
Open, interoperable standards and flexible blueprints are essential to avoid fragmented, parallel DPI layers and to foster efficiency, innovation and trusted ecosystems.
Regulatory frameworks must evolve to address AI explainability, accountability, digital‑intermediary definitions and jurisdictional exposure (e.g., foreign CLOUD Act requests).
India’s DPI model, open, royalty‑free, and supported by diplomatic and multistakeholder channels, offers a scalable template for the Global South, demonstrated by Airtel’s export of DPI solutions to Africa.
Collaboration among MNOs, regulators, standards bodies (GSMA, ITU, ISO, etc.) and development agencies (World Bank) is critical to create referenceable playbooks and standards for AI‑enabled networks.
Resolutions and action items
Commit to further collaboration on GSMA Open Gateway APIs and their certification across operators (noted by Debashish).
Explore the development of referenceable AI‑telecom standards/playbooks that address explainability, accountability and digital‑intermediary scope (suggested by the Vodafone Idea speaker).
Continue sharing India’s DPI blueprints and open‑protocol designs with partner countries, especially in Africa, through diplomatic and development channels (highlighted by Rahul and Deepak).
Investigate the feasibility of ‘data‑embassy’ wearable solutions for personal KYC/medical data storage, as raised by the audience member (Vijay Agarwal).
Encourage participation of Indian stakeholders in global standard‑setting bodies to shape AI‑related standards rather than merely adopt them (emphasised by Deepak).
Unresolved issues
Specific regulatory mechanisms for AI explainability and accountability in telecom networks remain undefined.
How to operationalise jurisdictional sovereignty (e.g., protection from foreign legal orders like the US CLOUD Act) for operator‑hosted AI models and data.
The exact scope and obligations of telecom operators as ‘digital intermediaries’ under emerging data‑privacy laws are still unclear.
Details on how to prevent duplication of DPI layers while integrating private‑sector capabilities need further concrete frameworks.
Implementation pathways for the proposed wearable ‘data‑embassy’ concept, including standards, security and cross‑border recognition, were not resolved.
Suggested compromises
Adopt an “open yet sovereign” approach: maintain open, globally‑compatible standards while retaining national control over critical infrastructure and data.
Use flexible blueprints instead of rigid standards where appropriate, allowing countries to adapt DPI implementations to local contexts while preserving interoperability.
Selective data residency: keep sensitive citizen and security data within national borders while leveraging global hyperscale clouds for non‑critical workloads.
Balance efficiency gains from private‑sector data (e.g., mobile usage for credit scoring) with public‑sector trust by providing open APIs and transparent governance.
Encourage collaborative standard‑setting (contribute to GSMA, ITU, ISO) rather than unilateral control, ensuring mutual benefit and shared ownership of AI‑related protocols.
Thought Provoking Comments
In an AI‑driven world, sovereignty is no longer just about where the data is stored, it’s about having strategic control over the infrastructure, the standards and the intelligence that underpins the national digital system.
Shifts the discussion from a narrow focus on data localisation to a broader, more strategic view of digital sovereignty that includes standards, governance and AI‑enabled decision‑making.
Set the thematic foundation for the rest of the panel, prompting speakers to address how telecom networks can provide not just connectivity but also governance, control and trust. It led directly to deeper questions about standards, open APIs and the role of regulators.
Speaker: Julian Gorman
India transacted 28 lakh crore rupees through UPI in January alone, serving over a billion people, and that trust is built on a massive connectivity layer – more than a million BTSs, 500 lakh km of fiber, and thousands of edge data centres. The OTP/SMS layer is the trust fabric that makes these transactions possible.
Quantifies the scale of India’s digital public infrastructure and links the physical network directly to trust‑building mechanisms, illustrating why telecom is a critical public asset.
Provided concrete evidence that reinforced Julian’s claim about the network’s strategic role. It steered the conversation toward concrete use‑cases (payments, fraud detection) and prompted other panelists to discuss how that trust layer can be extended or duplicated without fragmentation.
Speaker: Rahul Vatts
Context and enrichment are the key value‑adds of the DPI ecosystem. By exposing enriched, real‑time data (e.g., Aadhaar verification concurrent with a call) via open APIs, TSPs enable banks and other services to make instant, informed decisions while remaining interoperable.
Introduces the concept of contextual data as a shared public good rather than a proprietary asset, highlighting how open APIs can prevent duplication and foster collaboration.
Shifted the dialogue from describing existing services to proposing a collaborative data‑sharing architecture. It prompted Rahul and others to reference Open Gateway APIs and set the stage for the later discussion on standards vs blueprints.
Speaker: Speaker 1 (representing the TSP community)
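The enrichment argument is easiest to see from the consuming side: a bank combines a TSP-supplied network signal with its own rule before approving a transfer. The sketch below is purely illustrative; the field names and the 48-hour rule are invented, loosely modelled on CAMARA-style network APIs (such as SIM Swap) rather than any real Open Gateway schema discussed in the session.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone
from typing import Optional

@dataclass
class EnrichmentSignal:
    """Hypothetical context a TSP might expose for a phone number via an
    open API. Fields are illustrative, not an actual Open Gateway payload."""
    number_verified: bool                 # does the number belong to this user?
    last_sim_swap: Optional[datetime]     # when the SIM was last swapped, if ever

def allow_instant_transfer(signal: EnrichmentSignal, now: datetime) -> bool:
    """Bank-side rule: block instant transfers when the number is unverified
    or the SIM was swapped recently (a common account-takeover pattern)."""
    if not signal.number_verified:
        return False
    if signal.last_sim_swap is not None and now - signal.last_sim_swap < timedelta(hours=48):
        return False
    return True

now = datetime(2025, 3, 1, tzinfo=timezone.utc)
fresh_swap = EnrichmentSignal(True, now - timedelta(hours=5))
stable = EnrichmentSignal(True, now - timedelta(days=90))
assert not allow_instant_transfer(fresh_swap, now)   # recent swap: blocked
assert allow_instant_transfer(stable, now)           # stable number: allowed
```

The point of the panel comment is that this decision stays with the bank; the TSP contributes only the enriched, real-time signal through an interoperable API.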
Digital sovereignty should be viewed in three layers: physical location of data, the local contextual relevance of that data, and the agency of citizens to decide where their data travels. Moreover, sovereignty is better achieved by contributing to global standards rather than trying to control them unilaterally.
Expands the definition of sovereignty beyond technical storage to include cultural context, citizen agency, and participation in standard‑setting bodies, challenging a purely protectionist stance.
Prompted a nuanced debate on the balance between national control and global interoperability. It influenced Mansi’s distinction between standards and blueprints and reinforced the call for collaborative governance.
Speaker: Deepak Maheshwari
Standards are prescriptive and enable commercialisation, whereas blueprints are flexible, adaptable frameworks that capture best practices without locking countries into a single implementation path.
Clarifies a common confusion in policy circles and offers a pragmatic way to reconcile the need for interoperability with the need for local adaptability.
Provided a conceptual tool that the panel used to discuss how India’s DPI can be exported to the Global South without imposing rigid standards. It also helped frame the later audience question about “data embassies” as a blueprint rather than a fixed standard.
Speaker: Mansi Kedia
Sovereignty can be sliced into four practical dimensions: (1) data residency, (2) control‑plane locality, (3) operational sovereignty (where patches and software updates originate), and (4) jurisdictional sovereignty (e.g., exposure to foreign legal orders like the US CLOUD Act).
Breaks down an abstract concept into actionable metrics that operators can assess, highlighting gaps in current cloud and AI deployments.
Deepened the technical discussion, leading other speakers to acknowledge the need for “sovereign cloud” offerings and to consider regulatory reforms. It also set up the later exchange on whether hyperscalers can truly be sovereign.
Speaker: Rahul Vatts
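Rahul’s four slices read naturally as an assessment checklist. The sketch below (field names are mine, for illustration only) shows why a hyperscaler’s “local region” can satisfy data residency while still failing the other three dimensions, which is the gap sovereign-cloud offerings aim to close.

```python
from dataclasses import dataclass

@dataclass
class SovereigntyAssessment:
    """Checklist following the four-dimension framing described in the panel.
    Names are illustrative, not an established standard."""
    data_residency: bool        # is the data physically stored in-country?
    control_plane_local: bool   # is the cloud control plane operated in-country?
    operational_control: bool   # do patches and software updates originate domestically?
    jurisdictional_shield: bool # is the data beyond reach of foreign legal orders?

    def is_fully_sovereign(self) -> bool:
        return all((self.data_residency, self.control_plane_local,
                    self.operational_control, self.jurisdictional_shield))

# A typical hyperscaler 'local region': data resides locally, but the
# control plane, patch pipeline and legal exposure remain foreign.
local_region = SovereigntyAssessment(True, False, False, False)
assert not local_region.is_fully_sovereign()
```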
India’s DPI model is open‑protocol, royalty‑free and backed by diplomatic channels; it can be adopted by Global South countries without the IP‑licensing strings that often accompany foreign technology transfers.
Positions India’s approach as a scalable, non‑extractive alternative for developing economies, linking technology policy with soft‑power diplomacy.
Served as a turning point that moved the conversation from domestic challenges to international exportability. It reinforced Mansi’s point about blueprints and sparked interest from the audience about data‑embassy concepts.
Speaker: Deepak Maheshwari
Overall Assessment

The discussion was driven forward by a handful of high‑impact remarks that reframed the debate from a narrow technical focus to a strategic, multi‑dimensional view of digital sovereignty, trust, and interoperability. Julian’s opening set the agenda, while Rahul’s scale‑driven illustration grounded it in reality. The TSP speaker’s emphasis on context and open APIs, Deepak’s three‑layer sovereignty model, and Mansi’s standards‑vs‑blueprints distinction each introduced new analytical lenses that reshaped subsequent contributions. Rahul’s four‑slice sovereignty framework provided concrete metrics, prompting regulators and operators to consider practical policy adjustments. Finally, Deepak’s articulation of India’s open, diplomatic DPI export model pivoted the conversation toward global South relevance, tying together the earlier themes of openness, sovereignty, and collaborative standards. Collectively, these comments steered the panel from describing existing infrastructure to debating how to evolve it responsibly and inclusively on a global scale.

Follow-up Questions
How can we ensure that the efforts by MNOs adding new DPI trust layers complement rather than duplicate existing operator‑led capabilities such as the GSMA Open Gateway APIs?
Ensures interoperability, avoids fragmentation and leverages existing standards for efficient digital public infrastructure.
Speaker: Debashish Chakraborty
How should India define data sovereignty in an AI‑driven DPI era beyond mere data localisation, especially regarding control over standards, decision‑making systems, and long‑term strategic autonomy?
Clarifies the broader dimensions of sovereignty needed for AI‑enabled public infrastructure and informs policy formulation.
Speaker: Debashish Chakraborty
What are the risks when public digital infrastructure and private digital capabilities are built in silos, and why are global standards essential for accelerating inclusive digital outcomes?
Identifies potential inefficiencies, security gaps, and innovation loss, highlighting the need for coordinated standards.
Speaker: Debashish Chakraborty
What does data sovereignty practically mean for operators in terms of data storage, edge processing, cloud reliance, and control of AI models?
Seeks concrete operational criteria for telecoms to implement sovereign data practices while using AI.
Speaker: Debashish Chakraborty
What are the biggest policy frictions emerging as networks become AI‑driven platforms, and how can data‑sovereignty considerations address these regulatory challenges without slowing innovation?
Aims to pinpoint regulatory gaps (e.g., explainability, accountability) and explore sovereign‑centric solutions.
Speaker: Debashish Chakraborty
How can India leverage its DPI and telecom‑led digital architecture to provide a credible, scalable model for the Global South, especially for countries seeking digital sovereignty without technological isolation?
Explores the exportability of India’s model and its potential to support inclusive development in other emerging economies.
Speaker: Debashish Chakraborty
How do you see India’s DPI model shaping digital development strategies across emerging economies?
Seeks insight into the influence of India’s approach on policy and implementation in other nations.
Speaker: Debashish Chakraborty
Could a wearable product (e.g., a ring) store personal KYC/medical data securely on the device, using encryption and blockchain, and could such an approach support the concept of data embassies for India?
Proposes an innovative personal‑data‑sovereignty solution that raises technical, privacy, and regulatory questions.
Speaker: Vijay Agarwal (audience)
What referenceable standards, playbooks or blueprints are needed to ensure AI explainability and accountability in telecom‑driven fraud‑scam protection without compromising security?
Calls for concrete guidance to balance transparency of AI decisions with operational security.
Speaker: Martin (Speaker 1)
What regulatory framework should apply to telecom operators acting as digital intermediaries, especially concerning data‑privacy, purpose limitation, and monetisation of subscriber data?
Highlights ambiguity in existing laws and the need for clear rules for AI‑enabled telco services.
Speaker: Martin (Speaker 1)
How can sovereign cloud offerings be designed so that operators retain control over the control‑plane, operational sovereignty, and jurisdictional exposure while still benefiting from hyperscale efficiencies?
Seeks a model that reconciles local data control with the advantages of large‑scale cloud services.
Speaker: Rahul Vatts
What institutional mechanisms and incentives can ensure that India contributes to global standards (e.g., GSMA, ISO, ITU) while preserving strategic autonomy?
Addresses the need for a balanced participation strategy in multistakeholder standard bodies.
Speaker: Deepak Maheshwari
What is the optimal balance between prescriptive standards and flexible blueprints for implementing DPI in diverse country contexts?
Explores how to provide adaptable guidance without stifling local innovation or creating fragmentation.
Speaker: Mansi Kedia
What are the technical and policy challenges of adapting India’s DPI model (including open protocols and open‑source frameworks) for African countries, and how can these be systematically studied?
Calls for research on transferability, customization, and impact assessment of India’s DPI in Africa.
Speaker: Rahul Vatts
How can quantum‑resistant security measures be integrated into critical identity systems like Aadhaar to future‑proof data sovereignty?
Identifies emerging security threats and the need for research into quantum‑safe cryptography for national ID platforms.
Speaker: Rahul Vatts

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

AI-Driven Enforcement_ Better Governance through Effective Compliance & Services


Session at a glanceSummary, keypoints, and speakers overview

Summary

The symposium, convened by the Income Tax Department, focused on how artificial intelligence can improve enforcement, compliance and public services in India [5][8]. Chairman Ravi Agarwal highlighted that the upcoming Income Tax Act 2025 will create a technology-driven ecosystem that reduces interpretative ambiguity and supports AI-based algorithms for enforcement [32-33]. He noted that early AI pilots have already generated significant revenue, with targeted nudges prompting 1.11 crore taxpayers to file updated returns and yielding over ₹8,800 crore, while prompts on foreign assets led to disclosures worth ₹99,000 crore [70-71].


Abhishek Kumar described Project Insight 2.0’s aim to provide taxpayers quick, accurate information, enhance NERJ campaigns, and use large-language-model tagging to assess litigation risk and predict case vulnerability [88][92-96]. Ramesh Revuru introduced LTI’s Blueverse platform and its Indianized version Bharatverse, which offers pre-built multi-agent layers and guarantees deterministic “right-action” for the CBDT, supported by eight AGI-related patents [106][116-128]. T. Srinivasan explained the development of a sovereign small-language-model (SLM) fine-tuned via LoRA, integrated with a secure ontology and vector database to deliver multilingual, context-aware chatbots and automated compliance assistance [129-138][141-148][152-158].


Professor Mausam broadened the discussion to law-enforcement AI, citing applications such as CCTV-based crime reduction, satellite imagery for maritime monitoring, multimodal analytics for anomaly detection, and warning of bias and the need for human-in-the-loop oversight [226-233][247-254][291-304]. Martin Wilcox emphasized that graph analytics at national scale require in-database processing and that Teradata’s “Bring Your Own Model” approach can accelerate inference up to 25-fold, enabling real-time risk scoring for financial data [318-324][329-347].


Suvendu Pati outlined RBI’s AI governance framework of seven sutras and six pillars, and described the MuleHunter.ai tool that analyses hundreds of features across banks to detect mule accounts with accuracy often exceeding 80-90 % [369-376][398-410]. Ram Ganesh demonstrated a police “co-pilot” that ingests FIRs, generates compliant investigation paths, and leverages large-language models, graph neural networks and agentic AI to automate case handling while keeping analysts in the loop [464-470][491-496]. Avneesh Pandey presented SEBI’s AI tools-RIDAR for ad compliance, Sudarshan for multimodal fraud detection, and Infomerge for integrated investigation reporting-showcasing how AI enhances regulatory oversight and cybersecurity [524-536][538-545].


Shashi Bhushan Shukla traced the Income Tax Department’s 25-year digital evolution, highlighting the Nudge initiative that prompted 1.57 lakh taxpayers to disclose foreign assets worth ₹99,000 crore and generated ₹6,540 crore additional tax, illustrating AI-driven proactive compliance [555-562][580-583]. He also announced a collaborative international effort to address AI-enabled financial crime, including synthetic identity and deep-fake threats, and outlined plans for real-time, AI-guided taxpayer assistance at filing [600-612]. Justice Mahadevan concluded that AI has moved from aspirational to operational across tax administration, law enforcement and regulatory bodies, reinforcing risk intelligence, service delivery and ethical safeguards [617-645].


Keypoints


Major discussion points


Strategic vision for AI in tax administration – The Chairman highlighted that the new Income-Tax Act 2025 will create a “technology-driven ecosystem” and that AI is central to reducing interpretation ambiguity, litigation and enhancing trust-based voluntary compliance [32-34]. He cited concrete early results: targeted nudges prompted 1.11 crore taxpayers to file updated returns, generating ₹8,800 crore in revenue, and AI-driven foreign-asset disclosures added ₹99,000 crore of assets and ₹6,500 crore of income [70-71].


Industry and academia solutions for AI-enabled compliance – Project Insight 2.0 was presented as an end-to-end AI platform that will provide quick information, improve NERJ campaigns, and enable litigation-risk assessment through large-language-model tagging [88-97]. LTI’s “Blueverse”/“Bharatverse” multi-agent platform was introduced to deliver deterministic “right-action” AI for the CBDT, including a sovereign small-language-model (SLM) built via LoRA adaptation and a layered architecture (foundational models, data, knowledge, orchestration, consumption) [106-118][124-128]. T. Srinivasan detailed the technical stack: a sovereign LLM fine-tuned with LoRA, vector-DB retrieval, quantisation, and ontology-driven legal-intelligence chatbots for taxpayer assistance [129-154][160-169][188-194].
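The LoRA adaptation mentioned above can be sketched numerically: a frozen pretrained weight matrix W is augmented with a trainable low-rank product B·A, so only a small fraction of parameters is updated. This is a minimal NumPy illustration with invented shapes, not the platform's actual training code:

```python
import numpy as np

# Minimal sketch of a LoRA (Low-Rank Adaptation) update. All shapes and
# names are illustrative assumptions, not the CBDT system's configuration.

def lora_forward(x, W, A, B, alpha=16, r=4):
    """Forward pass through a frozen weight W plus a low-rank adapter.

    W : (d_out, d_in) frozen pretrained weight
    A : (r, d_in)     trainable down-projection
    B : (d_out, r)    trainable up-projection (initialised to zero)
    """
    delta = (alpha / r) * (B @ A)   # low-rank weight update
    return (W + delta) @ x

rng = np.random.default_rng(0)
d_out, d_in, r = 8, 8, 4
W = rng.normal(size=(d_out, d_in))
A = rng.normal(size=(r, d_in))
B = np.zeros((d_out, r))            # zero init: adapter starts as a no-op
x = rng.normal(size=d_in)

# With B = 0 the adapted model reproduces the frozen base model exactly.
assert np.allclose(lora_forward(x, W, A, B), W @ x)
```

Only r·(d_in + d_out) adapter parameters are trained instead of d_in·d_out, which is why LoRA makes domain fine-tuning of a sovereign model tractable.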


Academic perspective on AI for law-enforcement and ethical safeguards – Professor Mausam outlined the breadth of AI use-cases (preventive, predictive, investigative) across visual, textual, speech and structured data, citing examples such as CCTV-based crime reduction, satellite-imagery for maritime surveillance, and multimodal fraud detection [210-218][224-236][242-254][260-270]. He warned of pitfalls: over-triggering, algorithmic bias, loss of trust, and stressed a “human-in-the-loop” approach, robust risk-assessment and privacy safeguards [291-304].


Regulatory agencies deploying AI in practice – The RBI presented its AI-governance framework (seven “sutras” and six pillars) now adopted by the Government of India, and demonstrated the “Mule Hunter” AI system that flags suspicious bank accounts with >90 % accuracy, reducing false positives and enabling real-time transaction scoring [369-380][394-410][418-429][436-440]. SEBI described a suite of AI tools-RIDAR for ad-compliance, Sudarshan for multimodal fraud detection, Infomerge for investigation workflow, and a cyber-resilience engine that autonomously reads audit submissions-emphasising democratized development and continuous monitoring [524-538][540-549].


CBDT’s implementation outcomes and future roadmap – Shukla recapped the department’s 25-year digital evolution, the “Nudge” campaigns that have driven 1.57 lakh taxpayers to disclose ₹99,000 crore of foreign assets (yielding ₹6,540 crore extra tax) and reduced bogus donation claims by ₹9,879 crore [560-571][580-588]. He outlined a seven-step “Saksham” strategy (data collection, analysis, communication, assistance, empowerment) and future AI aims: real-time cross-validation, proactive prompts at filing, and a 360° compliance ecosystem [600-609].


Overall purpose / goal


The symposium was convened to examine how artificial intelligence can be operationalised across the Income-Tax Department and allied regulatory bodies to make compliance easier, enforcement more precise, and governance more trustworthy-moving from aspirational AI to concrete, scalable solutions that benefit both the state and taxpayers.


Overall tone and its evolution


– The opening remarks were formal, optimistic and visionary, stressing a “paradigm shift” toward a tech-driven tax ecosystem [32-35].


– Subsequent industry and academic presentations adopted a technical, solution-focused tone, showcasing prototypes, architectures and early performance metrics.


– The academic contribution introduced a more cautionary tone, highlighting ethical risks, bias, and the need for human oversight.


– Regulatory speakers blended confidence in deployed AI systems with pragmatic notes on governance and continuous improvement.


– The closing segment returned to a celebratory, appreciative tone, emphasizing achievements, collaborative spirit, and a forward-looking agenda [617-644].


Overall, the discussion progressed from high-level policy vision, through detailed technical demonstrations, to critical reflections on ethics and finally to a collective affirmation of progress and future commitment.


Speakers

Amandeep Dhanoa


– Role/Title: Indian Revenue Service Officer (2018 batch), Moderator of the symposium


– Area of Expertise: Public administration, AI-driven tax enforcement and compliance


– Sources: [S1][S2]


Abhishek Kumar


– Role/Title: Commissioner of Income Tax, Project Insight 2.0 lead


– Area of Expertise: AI-enabled taxpayer services, digital tax ecosystem


Shri Ravi Agrawal


– Role/Title: Chairman, Central Board of Direct Taxes (CBDT); Chief Executive Officer of the Department of Income Taxes


– Area of Expertise: Tax administration, AI governance in public finance


– Sources: [S6]


Avneesh Pandey


– Role/Title: Executive Director, SEBI; National voice on technology strategy and cybersecurity governance


– Area of Expertise: AI applications in securities regulation, cybersecurity compliance


– Sources: [S8]


Justice R. Mahadevan


– Role/Title: Joint Commissioner of Income Tax (also addressed as Justice)


– Area of Expertise: Tax law, AI-enabled governance and compliance


Ramesh Revuru


– Role/Title: Global Head of Engineering, LTI Mindtree


– Area of Expertise: Enterprise AI platforms (Blueverse/Bharatverse), agentic AI solutions


Suvendu Pati


– Role/Title: Chief General Manager & Head of FinTech, Reserve Bank of India


– Area of Expertise: AI-driven financial crime detection (Mule Hunter), fintech AI frameworks


– Sources: [S15]


Ram Ganesh


– Role/Title: Founder, CyberEye; Cyber-security expert


– Area of Expertise: AI-assisted cyber-crime investigation, forensic analytics


Shashi Bhushan Shukla


– Role/Title: Principal Commissioner, CBDT; Architect of Data Analytics Cell and Saksham Nudge Initiative


– Area of Expertise: AI for tax compliance, data analytics, behavioral nudges


Martin Wilcox


– Role/Title: Senior Vice President, Teradata; Global leader in AI-driven data analytics


– Area of Expertise: Graph analytics, multimodal AI, large-scale risk analytics


T. Srinivasan


– Role/Title: Technology Lead, LTI Mindtree


– Area of Expertise: Sovereign large language models, AI model adaptation (LoRA), tax-domain AI solutions


Professor Mausam


– Role/Title: Founding Head, Yardi School of Artificial Intelligence, IIT Delhi


– Area of Expertise: AI research, law-enforcement AI applications, responsible AI


Additional speakers:


Harsha Poddar


– Role/Title: Indian Police Service (IPS) officer, award-winning innovator in AI-driven policing


– Area of Expertise: AI for police investigations, automated workflow generation


Shri Shankar Jaiswal (mentioned as source of feedback) – no speaking role recorded.


Sunny Manchanda (mentioned as source of feedback) – no speaking role recorded.


Full session reportComprehensive analysis and detailed insights

Opening and Context


Amandeep Dhanoa opened the symposium by welcoming the audience and emphasizing that artificial intelligence is reshaping every domain of governance. He highlighted the Income-Tax Department’s initiative to bring together distinguished speakers to explore how AI can simplify compliance, reduce disputes, and foster trust-based governance [1-4][5-8].


Chairman Ravi Agrawal’s Address


Chairman Ravi Agrawal (also spelled Agarwal) outlined the forthcoming Income-Tax Act 2025, which will become effective on 1 April 2026 and will create a rule-based, technology-driven tax administration. He positioned AI as a catalyst for “trust-based voluntary compliance”, “enhanced service delivery”, and “ethical, accountable governance” [9-15][16-22]. He also cited concrete outcomes already achieved through AI: 1.11 crore taxpayers prompted to file updated returns, a revenue impact of ₹8,800 crore, disclosure of ₹99,000 crore in foreign assets and ₹6,500 crore in foreign income [23-27].


Session Structure and Group Photo


After the opening remarks, Dhanoa announced the two thematic categories-Industry & Academia and Regulatory Bodies-called the first set of speakers to the stage, and arranged a group photograph of all participants [31-34][23-26].


Industry & Academia Session

Abhishek Kumar (Commissioner, Income Tax – Insights) – Presented an end-to-end AI-enabled taxpayer service framework, described NERJ (Non-Electronic Return-Junction) campaigns, explained litigation-risk assessment using AI, and demonstrated how large language models tag and predict case vulnerability [35-38].
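The issue-tagging idea can be illustrated with a toy stand-in: map free-text excerpts of orders to legal-issue tags, which can then be aggregated to surface recurring litigation themes. The lexicon and sample text below are invented for illustration; the actual system uses large language models rather than keyword matching:

```python
# Toy stand-in for LLM-based issue tagging of assessment/appellate orders.
# Tags and cue phrases are hypothetical examples, not the real taxonomy.

TAG_LEXICON = {
    "transfer_pricing": ["arm's length", "transfer pricing", "ALP"],
    "unexplained_cash": ["unexplained cash", "section 68", "cash credit"],
    "bogus_deduction":  ["bogus donation", "inflated deduction"],
}

def tag_order(text):
    """Return the sorted list of issue tags whose cues appear in the text."""
    text_lower = text.lower()
    return sorted(tag for tag, cues in TAG_LEXICON.items()
                  if any(cue.lower() in text_lower for cue in cues))

order = ("The AO made an addition under section 68 for unexplained cash "
         "credits and disallowed a bogus donation claim.")
assert tag_order(order) == ["bogus_deduction", "unexplained_cash"]
```

In the described pipeline an LLM would replace the lexicon lookup, but the downstream use is the same: tagged issues are linked to judicial precedent and used to estimate the vulnerability of a case.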


Ramesh Revuru (Global Head of Engineering, LTI Mindtree) – Introduced “Bharatverse”, the Indianised version of the “Blueverse” agentic platform, and detailed its five-layer architecture (foundational models, LLMs, data, knowledge, orchestration, consumption). He stressed the need for deterministic “right-action” in tax enforcement [39-44].


T. Srinivasan (Technology Lead, LTI Mindtree) – Described the design of a sovereign small language model (SLM) for the tax domain, including LoRA fine-tuning, retrieval over a vector database, quantisation, and an ontology-driven, multilingual, multimodal AI stack for risk scoring, anomaly detection, and conversational assistants [45-55].
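The retrieval step behind a vector-database-grounded chatbot can be sketched in a few lines: embed each passage, embed the query, and return the nearest passage by cosine similarity. A deterministic hashing "embedding" stands in for a real encoder here, and the corpus is invented for illustration:

```python
import zlib
import numpy as np

# Sketch of vector retrieval for grounding a tax chatbot. The hashing
# embedding and two-passage corpus are toy assumptions; a production
# system would use a trained encoder and a real vector database.

def embed(text, dim=64):
    """Toy bag-of-words hashing embedding, L2-normalised."""
    v = np.zeros(dim)
    for token in text.lower().split():
        v[zlib.crc32(token.encode()) % dim] += 1.0
    n = np.linalg.norm(v)
    return v / n if n else v

def retrieve(query, passages):
    """Return the passage with the highest cosine similarity to the query."""
    q = embed(query)
    sims = [float(embed(p) @ q) for p in passages]
    return passages[int(np.argmax(sims))]

corpus = [
    "Section 139 prescribes the due dates for filing returns of income.",
    "Section 68 deals with unexplained cash credits in the books.",
]
best = retrieve("What is the due date for filing a return?", corpus)
assert "139" in best
```

Grounding generation on retrieved statutory text, rather than on the model's parameters alone, is what keeps answers within the vetted, in-system data the speaker emphasised.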


Professor Mausam (Yardi School of Artificial Intelligence, IIT Delhi) – Offered a broader law-enforcement perspective, covering AI use-cases in crime prevention, prediction, investigation, and post-crime analysis. He highlighted multimodal data sources (visual, textual, speech), examples such as CCTV-based crime reduction and satellite-imagery-driven maritime surveillance, and stressed human-in-the-loop oversight, bias mitigation, and privacy safeguards [56-71].


AI-Driven Risk Analytics

Martin Wilcox (Teradata) discussed the challenges of graph analytics at India’s scale and the necessity of in-warehouse AI. He introduced “multimodal” AI that processes images, audio, and text, and the “Bring-Your-Own-Model” capability. Two case studies were presented: a Brazil credit-union income-estimation model achieving 25× faster inference, and an Asian bank’s Net-Promoter-Score analytics using 50 k weekly chat logs [72-80].
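The "score inside the database" idea can be sketched with a simple example: instead of exporting every transaction row to an external model, push the scoring logic into SQL so only per-account results leave the engine. The schema and the structuring-detection rule below are invented for illustration, and SQLite stands in for the large-scale warehouse discussed in the talk:

```python
import sqlite3

# In-database scoring sketch (toy schema; the talk concerned Teradata's
# Bring-Your-Own-Model capability, not SQLite).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE txns (account TEXT, amount REAL, is_cash INTEGER)")
conn.executemany("INSERT INTO txns VALUES (?, ?, ?)", [
    ("A", 900.0, 1), ("A", 950.0, 1), ("A", 980.0, 1),  # cash just under 1,000
    ("B", 120.0, 0), ("B", 40.0, 0),
])

# The "model" runs where the data lives: flag accounts whose cash
# transactions cluster just below a hypothetical 1,000 reporting threshold.
rows = conn.execute("""
    SELECT account,
           AVG(CASE WHEN is_cash = 1 AND amount BETWEEN 900 AND 999.99
                    THEN 1.0 ELSE 0.0 END) AS risk
    FROM txns GROUP BY account ORDER BY risk DESC
""").fetchall()

assert rows == [("A", 1.0), ("B", 0.0)]
```

Avoiding the round trip of raw rows out of the warehouse is the source of the large inference speedups claimed for in-database execution.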


Regulatory & Enforcement Session

Suvendu Pati (RBI, Head of FinTech) – Outlined the RBI’s AI-governance “seven sutras” and six pillars, described the AI sandbox, and showcased “MuleHunter.ai” (857 features, >90 % accuracy in some banks, real-time transaction scoring, cross-bank aggregation) [81-95].
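A mule-account scorer of the kind described can be caricatured as a logistic model over behavioural features. The features, weights, and threshold below are invented for illustration; the real system reportedly learns from hundreds of features across banks:

```python
import math

# Illustrative feature-based mule-account risk score (hypothetical
# features and hand-set weights, not MuleHunter.ai's actual model).

WEIGHTS = {
    "txn_velocity":      1.8,   # transactions per day, normalised to [0, 1]
    "rapid_passthrough": 2.5,   # funds leaving within minutes of arrival
    "new_account":       0.9,   # account age below a threshold
    "fan_in":            1.6,   # many unrelated senders
}
BIAS = -4.0

def mule_score(features):
    """Logistic risk score in (0, 1) from normalised feature values."""
    z = BIAS + sum(WEIGHTS[k] * features.get(k, 0.0) for k in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-z))

dormant = {"txn_velocity": 0.1, "rapid_passthrough": 0.0,
           "new_account": 0.0, "fan_in": 0.1}
suspect = {"txn_velocity": 0.9, "rapid_passthrough": 1.0,
           "new_account": 1.0, "fan_in": 1.0}

assert mule_score(dormant) < 0.1
assert mule_score(suspect) > 0.9
```

In production such scores would be learned from labelled fraud data and tuned to trade off detection rate against the false positives the speaker mentioned.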


Ram Ganesh (CyberEye) – Explained the “co-pilot” system that ingests FIRs, generates compliant investigation paths, automates legal requests, and leverages telecom and open-source intelligence. The solution employs graph-neural-networks, LLMs, agentic AI, and big-data analytics for cyber-crime investigations [96-104].
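The graph step of such a co-pilot can be sketched simply: treat entities extracted from FIRs (phone numbers, accounts) as nodes, co-occurrence in a complaint as edges, and cluster connected entities so linked cases surface together. The data below is invented, and a plain breadth-first traversal stands in for the graph neural networks mentioned:

```python
from collections import defaultdict, deque

# Toy entity-link analysis over FIRs (hypothetical identifiers).
firs = {
    "FIR-001": ["phone:111", "acct:AA"],
    "FIR-002": ["phone:111", "acct:BB"],   # shares a phone with FIR-001
    "FIR-003": ["phone:222", "acct:CC"],
}

# Build an undirected co-occurrence graph: entities in the same FIR link.
graph = defaultdict(set)
for entities in firs.values():
    for a in entities:
        for b in entities:
            if a != b:
                graph[a].add(b)

def component(start):
    """Breadth-first search for all entities reachable from `start`."""
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        for nxt in graph[node] - seen:
            seen.add(nxt)
            queue.append(nxt)
    return seen

# Accounts AA and BB are linked through the shared phone number.
assert component("acct:AA") == {"acct:AA", "phone:111", "acct:BB"}
```

Clustering cases this way is what lets an analyst see at a glance that two seemingly unrelated complaints share infrastructure, with richer models scoring the strength of each link.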


Avneesh Pandey (SEBI) – Presented four AI tools: RIDAR (advertisement compliance for mutual funds), Sudarshan (multimodal fraud detection), Infomerge (investigation data integration and report generation), and a cyber-resilience framework that reads compliance submissions and flags gaps. He emphasized the democratization of AI development within SEBI [105-112].


Shashi Bhushan Shukla (Principal Commissioner, CBDT) – Recapped the department’s 25-year digital evolution, the “Nudge” initiative (seven-step “Saksham” strategy), and quantitative outcomes: 1.57 lakh taxpayers disclosed ₹99,000 crore foreign assets and ₹1,758 crore tax was recovered from bogus deductions. He outlined future AI plans for real-time, pre-filing prompts and 360° taxpayer assistance [113-130].


Closing Remarks


Justice R. Mahadevan delivered the vote of thanks, confirming that AI is now operational across tax administration and law-enforcement. He summarized each session’s contributions-Insight 2.0, Professor Mausam’s roadmap, Teradata’s risk analytics, AI-enabled policing, Nudge outcomes, MuleHunter, and SEBI’s tools-thanked the Chairman, moderators, and organizing teams, and formally closed the symposium [131-138].


Session transcriptComplete transcript of the session
Amandeep Dhanoa

Thank you. Thank you. Dear guests, colleagues, and esteemed speakers, namaskar. I, Amandeep Dhanoa, Indian Revenue Service Officer of the 2018 batch, welcome you all to this symposium by the Income Tax Department on AI-driven enforcement for better governance through effective compliance and services. Artificial intelligence today is reshaping every domain of governance. And when it comes to public services, the stakes are uniquely high. Understanding these stakes, the Income Tax Department has called upon distinguished speakers from the industry, the academia, and regulatory bodies to delve into the most pertinent question of the hour, that is, how can artificial intelligence enable easier compliance, lower disputes, and strengthen trust-based governance?

Today’s sessions are structured deliberately into two categories, Category 1 of Industry and Academia and Category 2 of Regulatory Bodies. With that, I would like to introduce Honourable Chairman, Central Board of Direct Taxes, Shri Ravi Agarwal. Sir is a distinguished Indian Revenue Service Officer of the 1988 batch who brings over three decades of experience in the Income Tax Department. He is the Chief Executive Officer of the Department of Income Taxes.

across multiple verticals of the Income Tax Department. He has played a pivotal role in key phases of the department’s digital transformation, including the establishment of the Central Processing Centre. Known for his strong digital mindset and technocratic approach, he has consistently encouraged the use of data and technology to strengthen administration, enhance compliance and translate data into revenue through a prudent approach. Now, I request Principal Chief Commissioner of Income Tax, Delhi, Shri Anand Jha sir, to kindly welcome Honourable Chairman sir with a plant. Thank you, sir. I request all the speakers to kindly come on to this side of the stage so that we may have a group photo. I request Chairman sir as well as the member madams to join for the group photo. All the speakers from Category 1 and Category 2, please join us for a group photo. Thank you. Thank you, madams and sirs.

I request the speakers from Category 1 to kindly take their places on the stage, please. I request Abhishek Kumar sir, Ramesh Revuru sir, T. Srinivasan, Professor Mausam and Shri Martin Wilcox to take their seats, please. Now I request Honourable Chairman sir to kindly set the tone for this symposium with his opening remarks.

Shri Ravi Agrawal

Good evening, ladies and gentlemen. Well, I’m delighted to welcome you all to today’s symposium, which is under the aegis of the AI Impact Summit, Sarvajan Hitaya, Sarvajan Sukhaya, Welfare for All, Happiness for All, which is the theme. In fact, it’s a very powerful theme. How do you use AI for the welfare of all and the happiness of all? That’s the basic intent. And within it, the session today is on AI-driven enforcement. And it is a privilege to join a conversation that brings together policymakers, technologists, enforcement agencies, and academia on a subject that will shape the future of governance. Income tax administration is at a critical inflection point, especially with the enactment of the new Income Tax Act 2025, along with the corresponding rules, forms and procedures, which would be effective from 1st of April 2026. It represents a paradigm shift in the philosophy, procedures and practices of the direct tax administration in India. And what makes it different is that going forward it is going to be a technology-driven ecosystem that would be put in place, and that is why the role of AI becomes so important and this gathering today becomes all the more relevant. Now, the new Income Tax Act, while simplifying the language and procedures, reduces interpretation ambiguity and brings tax certainty. And as I mentioned, from the beginning of the year it is going to be a more rule-driven, technology-driven ecosystem.

The changes in the act, the language in the act, would also help in putting in place the algorithms which, through AI, going forward would reduce and minimize the scope for different interpretations. The positive environment created by the Income Tax Act 2025, which is reflected in the feedback that we have received from the stakeholders, and the prudent approach of tax administration that we have been providing for the last few years, provide a robust foundation for sustaining and advancing future reform measures to reduce litigation, enhance tax certainty and trust-based voluntary compliance. AI has the potential to transform every sector by amplifying human capability, by turning vast data into insights, automating mundane and routine work, and enabling faster, smarter decisions at scale.

For law enforcement, this means we can strengthen how we prevent, detect, and respond, but only if we build the right preparedness and capacity building through high-quality shareable data, secure systems, clear accountability, strong safeguards, and continuous training. And here, the basic theme anchored by the Honourable Prime Minister in the Manav vision becomes so important. Because ultimately, what does Manav reflect? Moral and ethical systems, accountable governance, national sovereignty, the right to justice, accessible and inclusive AI, and valid and legitimate systems. So what do these words reflect ultimately? These reflect that while we have in AI a very powerful tool, at the same time we need to be conscious about how we put in place and apply AI in our overall governance, overall welfare of people, happiness of people, while being ethical.

Also conscious of the fact that if not applied with responsibility, the results can be, you see, different. So we intend to adopt AI to support enforcement with clear accountability, build on secure and sovereign data foundations, ensure phased adoption and continuous training, and validate systems for fairness and lawful use. AI is, you see, developing fast. Within the income tax department, and even across, what we need to see is how we build our capacity, our resources. Because here is a solution: you have some AI tools, solutions. But for that, you need to drive. The human has to drive the AI rather than AI driving the human. And for that to happen, you have to build that capacity in your resources, in the human resources.

We need to be conscious of the fact that, okay, what are the pluses and the pitfalls, and we have to be conscious about them when we are adopting AI. I would just like to share one experience that I had just yesterday. So I was told that through AI, you can actually develop some code. I didn’t know about that. So yesterday I asked my son, well, how is it possible? And I was proposing to develop an app for our training purposes. So he told me that, okay, this is how we can go about it, this is the open source, and so on and so forth. And I put in place some sort of framework for the technology for this training module.

I spent about five, six hours in the night. And what was interesting was that within five, six hours, one actually was able to get reasonably robust and mature code and a full application, which broadly takes care of the requirements of capturing training in the department. Okay. Now, why did I mention this example? Well, but for this facility, development of this code would have taken months. But by spending five to six hours, one was able to come up with some code. Even if I say it is elementary, it is basic, you have a platform on which you can build. So that is the power of AI. But can I blindly rely on it? The answer is no. I have to apply myself and see to it that, okay, you already have this platform, how do I build it up. And that is the potential of AI: it would actually help us not do routine and mundane work; it would translate our effort from routine work to enhanced work, and that is where our capacity and our maturity would lie.

So this is an opportunity for us in the tax department because we are all here in that context but also as individuals that we leverage on the power of AI but we leverage it being conscious of the fact that we have to drive it rather than AI driving us. So our approach needs to be practical with use of proven applications for data integration risk and priority scoring, anomaly detection, language support and workflow automation with constant testing and learning so we stay aligned with AI advancements and do not fall behind. This is also very important because things are developing and when we talk about Developed India 2047 how do we actually keep pace with it? So you have to actually align yourself with the developments that are taking place.

And each of the organizations, be it in the government or outside, have to align together so that together as a nation we grow and you put these opportunities to practice and provide to our taxpayers and stakeholders, you see, the best of class ecosystem and facilities. Over the past two financial years, we have applied AI in the department, though to a limited extent, but then it has yielded results. As you would all be aware, targeted nudges have led to 1 .11 crore taxpayers filing updated returns with a revenue impact of more than 8 ,800 crores. And if you talk about foreign assets, then the foreign assets, it’s worth about 99 ,000 crores. and foreign income of about 6 ,500 crores has also been declared by the taxpayers on the basis of the prompts that have been given by the tax department.

So we are moving from intent to action. We are scaling AI-based risk assessment, strengthening digital forensics and analytics, and building AI support for taxpayer services to make compliance easier and enforcement more precise. The discussions at the summit will help us refine our approaches, set clear governance standards, and scale what works to improve speed, consistency, and fairness in enforcement. I wish you all the best, and I am sure that the deliberations here would be really useful and enriching. Thank you.

Amandeep Dhanoa

Thank you. Thank you, sir, for setting the tone and direction so clearly. Now, as we begin with Category 1, we turn to industry and academia, the two ecosystems that are shaping the intellectual and technological foundations of artificial intelligence. While government defines purpose and safeguards, it is the industry that builds scalable systems and academia that pushes the frontiers of responsible and explainable AI. This segment will help us to understand not only what is technologically possible, but also what is practical, scalable and sustainable for public administration and law enforcement. Now we move to session one, Project Insight 2.0, where AI-enabled compliance and taxpayer services are being operationalized at scale. I call upon Shri Abhishek Kumar sir, Commissioner of Income Tax (Insights), who has been instrumental in shaping the Income Tax Department’s digital ecosystem through Project Insight and other initiatives. Joining him are Shri Ramesh Revuru, Global Head of Engineering at LTI Mindtree, and Shri Srinivasan T, Technology Lead at LTI Mindtree, bringing three decades of enterprise technology leadership. May I invite all the three speakers to take us to the next phase of AI-enabled compliance. I request the speakers to be mindful of

Abhishek Kumar

Now, coming to the last step: how does it help taxpayers in the end-to-end life cycle? So, the first key step is quick availability of accurate information to the taxpayers. We already discussed it as part of AI; yes, it will be enabled. Next, our NERJ campaigns will become more effective through infusion of AI. In the very small fraction of cases where it leads to litigation, we will be able to do litigation risk assessment through AI infusion. With the advent of LLMs, it is possible to tag issues in the assessment orders, appellate orders, judicial orders. So, we will be able to tag issues and link judicial orders, and as a next step we will be able to even predict the vulnerability of the case, and ultimately it will result in a reduction in litigation.

So all these business objectives we seek to achieve through Insight 2.0, especially through infusion of AI. These are the business objectives; how they will be achieved, what technology is proposed and how technical implementation will take place will be explained by Mr. Ramesh from LTI Mindtree. Thank you.

Ramesh Revuru

Ma’am, I got the message. I’ll make Maggi and finish in two minutes. Thank you very much, sir. Thanks for the opportunity to be here in the august presence of all the income tax officials. I want to leave you with three key messages. The first and foremost is the launch of Bharatverse. Thank you, ma’am. Bharatverse. Second, I’ll talk about the importance of right action. And the last part of it is general intelligence in the context of CBDT. So, the first one: we at LTI Mindtree now have this product offering called Blueverse. Blueverse is the agentic platform on which you can build your agents. Sir, Chairman sir spoke about how he was able to build these agents without writing code.

Think of it as the platform on which you can build all five layers required for any multi-agent system: the foundational models (the LLMs), the data layer, the knowledge layer, the orchestration layer, and the consumption layer on top. All these layers are pre-built, and hence what we bring is CBDT’s ability to build its multi-agentic system faster; this is what we have implemented for our global customers. What we are launching is the Indianized version of Blueverse, which we are calling Bharatverse, purpose-built for CBDT. Why is right action important? As you might know, generative AI is probabilistic in nature.

It is going to guess the next word, or generate the next word, the next pixel, the next frame in the video. But in the context of CBDT, you cannot have something that is probabilistic; you need to move the needle toward deterministic. Hence our ability to guarantee that right action in every condition, scenario and criterion is what right action is all about. This morning I was listening to Demis Hassabis, who leads Google DeepMind, and he said AGI is probably five years away. While AGI, general, human-like intelligence, is five years away, what you need is the general intelligence of the CBDT: right data and right context leading to that right action.

We have filed eight patents on creating this general intelligence, and it will be bundled with our Bharatverse that will get implemented for CBDT. In the interest of time, I’ll ask Srinivasan to take us through the technical architecture. A big thank you for the opportunity.

T. Srinivasan

Thank you, Abhishek sir, and thank you, Ramesh. I’ll move quickly. The most important thing is this: he spoke about right action, but how do I actually do it? Everybody talks about LLMs, but it’s not about deploying a plain LLM like what we are doing here. What we are building are SLMs, small language models, alongside the regular LLM used for the system. The purpose of this SLM is to be very much income-tax based: for the ITD officials, for CBDT. We are going to ingest it with data closely related to this environment, your income tax laws and related information, which means there is data control and quality-vetted data. Everything stays within the system; it is secure, and nothing goes outside at all. It is what I call a sovereign language model for this system. So, how am I going to do that?

I cannot retrain the entire LLM fully; it’s not cost-effective. So we use the concept called LoRA, low-rank adaptation, where you can do it at roughly 1 to 2% of the overall training cost. What it does is freeze the base model’s weight matrices and add small trainable matrices related to this particular data, and it trains only on those. So you get the proper details. Now, I still need to clean it up.
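
The LoRA idea described here can be sketched in a few lines. This is a toy illustration, not the speaker’s actual setup: the base weight matrix `W` stays frozen, and only two small low-rank matrices `B` and `A` would be trained, which is why the parameter count (and cost) drops so sharply.

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen base weight matrix of a (toy) pretrained layer: d_out x d_in.
d_in, d_out, rank = 8, 8, 2
W = rng.normal(size=(d_out, d_in))           # frozen during fine-tuning

# LoRA adds two small trainable matrices B (d_out x r) and A (r x d_in);
# only B and A are updated, so trainable parameters drop from
# d_out * d_in down to r * (d_in + d_out).
A = rng.normal(scale=0.01, size=(rank, d_in))
B = np.zeros((d_out, rank))                  # zero-init: adapter starts as a no-op

def forward(x):
    # Effective weight is W + B @ A; the base model itself is untouched.
    return (W + B @ A) @ x

x = rng.normal(size=d_in)
# Before any adapter training, output equals the frozen base model's output.
assert np.allclose(forward(x), W @ x)

full = d_out * d_in
lora = rank * (d_in + d_out)
print(f"trainable params: {lora} vs full fine-tune: {full}")  # 32 vs 64
```

At realistic model sizes the same ratio is what yields the 1 to 2% training-cost figure the speaker mentions.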

So I use RAG plus a vector DB to make sure that the retrieval and source citation for every detail is given to you. Then I am going to distill it: imagine the large model as a teacher and a smaller, specialized version as a student, so for a certain set of tasks you use the student. And quantization improves efficiency, which matters because this is going to run at nation scale; we want to be effective and efficient, so we will be using INT8. And the last and most important thing is the ontology.
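
The RAG retrieval-with-citation step can be sketched as follows. The passages, document names and four-dimensional “embeddings” are made up for illustration; a real system would use an embedding model and a vector database rather than a hand-built dictionary.

```python
import math

# Toy corpus of income-tax passages keyed by a citable document id.
corpus = {
    "sec_80c.txt": "Section 80C allows deductions up to Rs 1.5 lakh.",
    "sec_54.txt":  "Section 54 exempts capital gains reinvested in a house.",
}
# Hypothetical embedding vectors for each passage.
embeddings = {
    "sec_80c.txt": [0.9, 0.1, 0.0, 0.1],
    "sec_54.txt":  [0.1, 0.8, 0.3, 0.0],
}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def retrieve(query_vec, k=1):
    # Rank every stored vector by cosine similarity and return the top-k
    # document ids, so the generated answer can cite its sources.
    ranked = sorted(embeddings, key=lambda d: cosine(query_vec, embeddings[d]),
                    reverse=True)
    return ranked[:k]

# A query about deductions, embedded (hypothetically) close to sec_80c.
query = [0.85, 0.15, 0.05, 0.1]
sources = retrieve(query)
print(sources, "->", corpus[sources[0]])
```

The retrieved passage is then handed to the language model as grounding context, with the document id surfaced as the citation.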

We are going to look at your data structure, then at the sections, the precedents, the entities and the compliance rules. Everything goes inside this model; we are building it completely for you. Because of this you will be able to summarize, and you will have multi-language capability. It is not your typical generic LLM; it is focused, able to take on any task that requires legal-interpretation intelligence, and it will work on that.

So the advantages are source citation, context and analytics, plus multi-language capability that can be used directly. Let me take the next two minutes: how am I going to do this? I will take you through two journeys. We are building 25 or 30 of them, but I will just show two as samples. The first is about your AI. The most important thing is that we do not want to frustrate people because the data is coming from external sources.

We are getting data from external sources, and if there are issues in it, that’s a problem. So we validate it at the data-source level, with a proper agentic AI that does that. Then, grievances. Currently it is FAQs, and a bunch of what I’d call deterministic chat. I am going to make it truly context-aware and intent-driven: chatbots, or conversational AI, that take users through the journey and tell them how they should do it.

Last but not least: pre-filled data is very powerful, but if that data is not proper, you get into trouble. So there also we are putting in the intelligence, continuously. This is going to reduce the overall rework in submissions by the taxpayer. Next, we continue the intelligence with a proper verification: we will be able to auto-detect and match discrepancies, and show taxpayers where the data has gone wrong. Rather than just telling them what is right or wrong, we will help them fix it.
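
The auto-detect-and-match idea can be sketched as a simple comparison between third-party-reported figures and the taxpayer’s return. Field names, amounts and the tolerance are all hypothetical; a real system would work over many more sources and rules.

```python
# Compare taxpayer-submitted figures against data reported by third
# parties (employer, banks) and flag only the fields that disagree,
# carrying both values so the taxpayer can be guided, not just scolded.
def find_discrepancies(reported, submitted, tolerance=1.0):
    issues = []
    for field, third_party_value in reported.items():
        taxpayer_value = submitted.get(field)
        if taxpayer_value is None:
            issues.append((field, "missing in return", third_party_value))
        elif abs(taxpayer_value - third_party_value) > tolerance:
            issues.append((field, taxpayer_value, third_party_value))
    return issues

reported  = {"salary": 1_200_000, "interest_income": 45_000, "tds": 90_000}
submitted = {"salary": 1_200_000, "interest_income": 5_000, "tds": 90_000}

for field, got, expected in find_discrepancies(reported, submitted):
    print(f"{field}: return says {got}, third-party data says {expected}")
# interest_income: return says 5000, third-party data says 45000
```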

Once that is done, it goes through this flow. And not everybody is doing it intentionally. (I am sorry, I am not using the mic; I think I am loud enough for everybody to hear me.) In a nutshell, what this agent primarily does is make compliance easy, and the templates it uses are very human-centred. So here I am going to show you how to do it.

At the end, last but not least, there are people who will still do it, and we have problems. So we have agents that look at the cases and every other detail and make sure the vulnerability is predicted. For this entire flow, the SLM that is being created is the primary input. Let me go to the next one. This one is general, though I put it under AI: the conversational assistant. It is for anybody and everybody; most importantly, for anyone who finds it a problem to look at the portals, understand where things are and chase them down. So I am going to have a context-aware, domain-aware NLP chatbot that understands and explains, what I call foolproof for the common man: he should not get worried about legal jargon; rather, it tells him step by step what he should do. That is one of the primary focuses, and we are going to use a certain set of LLaMA models, plus the SLMs, including the in-built SLM being developed for this.

So overall, if you really ask me, Insight 2.0 is moving toward enabling intelligence for both the officers and the citizens, and making both happier.

Amandeep Dhanoa

Thank you, sirs. We are already running behind by 20 minutes, so I request the further speakers to kindly speed up. Now Professor Mausam, founding head of the Yardi School of AI at IIT Delhi and one of India’s foremost AI thought leaders, will share perspectives on the possible usage of AI by law enforcement agencies and the road ahead. Sir, the floor is yours.

Professor Mausam

Thank you for the kind introduction. I was asked to speak on the usage of AI by law enforcement agencies in general, not just on income tax, so I will take a slightly more general perspective. I should say that I have been fortunate to be involved with some of the earlier activities of the income tax department, and I personally feel that the kind of support the Indian income tax department gives its taxpayers is much better than what the US gives theirs. I have seen this because I have filed taxes in both countries, and there is no US equivalent of the 26AS form that we get here.

There is just so much support here; we can check how the work is progressing. Also, I am not a law enforcement expert, so I got some feedback from Shankar Jaiswal, who is DGP Lakshadweep, and Sunny Manchanda, who is director of the DRDO Young Scientist Lab, so thank you to them. To me, this is the context. Per 100,000 people, the number of police officers in India is much lower than in developed countries, and I will count China as a developed country now: China, the US and Germany. We are at 155; they are at 200, 300 and so on. For judges per million people, it is recommended that we should have 50; we have about 15 to 22, depending on which news article you read.

About 29% of police cases are still pending investigation, and about 4.85 crore court cases have been pending for over one year. With that kind of situation, and I don’t know how it is for income tax, we are always short of expertise. Hence the need to use AI in India: if we have to deal with the setup we are in, we need to somehow augment ourselves with technology. Now, of course, you can use AI in various ways, and there are aspects to think about. Are you using it in law enforcement before a crime is committed: to predict crime, to figure out what we should do to stop crime?

Or, when the crime has been committed, are we thinking about how to investigate it and how to make judgments on it? Wherever we are in that pipeline, AI can be used. Similarly, AI can be used not only by income tax and GST, but also by the military, maritime agencies, traffic police, other police, and so on. Also, what kind of data are we getting? For most of this conversation we will be talking about financial data, which is structured data, but there is also visual data, language data, speech data, and bringing it all together adds up to intelligence. You can take one item from the first column, one from the second and one from the third and create new AI use cases. For example, we could do a much better job of monitoring in traffic police if we used the visual data; you can start thinking about really interesting possibilities here. In the next few slides I am just going to show some basic examples. In the case of image and video: in 2014 we were very proud that Surat reported a 27 percent reduction in crime just because there were CCTVs and face recognition, and if three or four people from a known database came together, police would go there and it would reduce the crime. Somehow we haven’t seen that happening elsewhere in India, and I don’t know why; this is really old. We are really poised to do this: we should have CCTVs everywhere so that we can do a much better job of crime surveillance.

A DRDO lab is doing a very interesting job on obfuscation: if you are wearing a mask, or your hairline has changed, can we still figure out who you are? Visual intelligence for traffic should be very easy; we still see people driving on the wrong side of the road or not wearing helmets, and that can be completely automated with very simple imagery. Satellite imagery analysis is also very interesting. For example, the only way we know that China has a new port in Djibouti is because of satellite images. It’s not easy to analyze, but the data is there. When did we ever have data for all the world so easily accessible to us?

Well, today it is. And for income tax, by the way, you can also start thinking about where a person lives, what kind of locality it is and how it lights up at night; that tells you the affluence level, and you can start using that kind of information. There was a very interesting case where a U.S. aircraft carrier was being chased by 20 Iranian vessels in the ocean, and we knew because the satellites can see. The same goes for maritime surveillance. We have so many use cases of AI today, such as DigiYatra. We can use face recognition for searching for missing persons. But we can also start thinking about anomalous behavior, like anomalous vehicular behavior.

In one of the very infamous car rapes that happened in Delhi, one car just kept going down the road, taking a U-turn, going down the road, taking a U-turn. That kind of highly anomalous behavior could easily have been detected if we were doing this. Even today in taxi safety, I know women are still worried about taking an Uber late at night in Delhi, even though we have a panic button, because we don’t know whether the panic button really works. But if there is any anomalous behavior by the taxi driver, there should be a very clear mechanism to say the driver is not following what they are supposed to follow.

And it should be very easy to prevent such crime; the same goes for taxi, bus and train safety monitoring. Moving on to textual intelligence: a lot of people are interested in anti-terrorism and so on. Can we easily and quickly answer questions? For example, if we had just killed Osama bin Laden and you were given his laptop, how much time would it take to actually go through all his documents? I hope not much, because AI should be there to help you. I was working on this a long time back, and at the time we could figure out who the entities active in Iraq were. This is what my system produced 15 years ago: all the players active in Iraq at the time, and if you wanted to know about one particular player, you could just ask what we know about this person and it would give you a quick answer, a quick summary.

This kind of intelligence is now at our fingertips, and it should be used for figuring out who the bad actors are, whether in income tax or in other kinds of law enforcement. I can move on. Speech-to-text for quick FIR filing, a chatbot to support… We just heard about Project Insight 2.0, where there will be a chatbot, but we also need a chatbot for our own income tax department people, because they have to work with the data, and if they had to write code every time it would be very hard. As Mr. Agarwal said earlier, AI can write code now, so we should have text-to-code systems for our own IT department so they can easily get to the data they are looking for and conveniently find the right information.

On the financial intelligence side, the input is more structured data. There are so many interesting news stories coming out, which makes me very proud that we have a very active department. For example, just a day or two ago, 60 terabytes of billing data and 1.77 lakh restaurant IDs were uncovered, exposing a 70,000 crore tax evasion scam. This was done using the information and some crunching of the data by AI: these people were deleting a lot of invoices, and once the system recognized the pattern, it was flagged as anomalous, and an income tax analyst could then look at it and figure out what was going on.
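
The invoice-deletion pattern can be illustrated with a tiny, made-up dataset: compute each vendor’s ratio of deleted to issued invoices and flag anyone far above the population’s median. The vendor ids, counts and the 10× threshold are all hypothetical, and a real pipeline would use far richer features.

```python
from statistics import median

# Made-up per-vendor billing counts.
billing = {
    "R001": {"issued": 1000, "deleted": 12},
    "R002": {"issued":  900, "deleted": 10},
    "R003": {"issued": 1100, "deleted": 15},
    "R004": {"issued":  950, "deleted": 480},   # deleting half its invoices
}

# Ratio of deleted to issued invoices per vendor.
ratios = {v: d["deleted"] / d["issued"] for v, d in billing.items()}

# Flag vendors whose deletion ratio is far above the typical (median) rate;
# the analyst then inspects the flagged cases, as in the talk.
med = median(ratios.values())
flagged = [v for v, r in ratios.items() if r > 10 * med]
print(flagged)  # ['R004']
```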

Similar things have happened where large or frequent bank deposits flag mismatches with ITR filings, and AI tracks suspicious tax claims. It is clear our citizens know that AI is looking at them, yet scams still go on; if we can detect anomalous behavior, behavior that was not expected, we might even find new scams we couldn’t guess ahead of time. And if we start putting this information together, it becomes even more interesting. For example, mule accounts: I have been told that college students run mule accounts. If we know from their Facebook, sorry, I’m too old, their Insta pages or whatever the more recent social media is, that they are just college students, but their bank accounts are going through a lot of churn, it would be very easy to predict that it is probably a mule account.

Similarly, other cases of tax evasion and money laundering could be surfaced by bringing together multiple sources of data: the social media feed, financial documents, employment records, investments, the various purchases they make. Sometimes these will be invoices; if they are paper invoices an OCR step may be needed, so there will be a vision requirement. There are also interesting collusion rings. Generally, we have found that bad actors support each other in their bad acting, so if you build a graph around them and start looking for collusion rings, we might find these people better. I think the sky is the limit. There is another very interesting phenomenon: where crime happens more, that is where we deploy more people.
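
The collusion-ring idea can be sketched as a graph problem: treat flagged transfers between entities as edges and pull out connected groups for an analyst to review. The entity ids and edges are invented, and real systems would use far more sophisticated graph analytics than connected components.

```python
from collections import defaultdict

# Depth-first search over an undirected graph of suspicious links.
def connected_components(edges):
    graph = defaultdict(set)
    for a, b in edges:
        graph[a].add(b)
        graph[b].add(a)
    seen, components = set(), []
    for node in graph:
        if node in seen:
            continue
        stack, comp = [node], set()
        while stack:
            n = stack.pop()
            if n in comp:
                continue
            comp.add(n)
            stack.extend(graph[n] - comp)
        seen |= comp
        components.append(comp)
    return components

# Hypothetical flagged transfers between entity ids.
edges = [("E1", "E2"), ("E2", "E3"), ("E3", "E1"),   # a dense triangle
         ("E7", "E8")]                                # an isolated pair

# Groups of three or more mutually linked entities are ring candidates.
rings = [c for c in connected_components(edges) if len(c) >= 3]
print([sorted(c) for c in rings])  # [['E1', 'E2', 'E3']]
```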

But once we do that, the people committing the crime figure out that we have more people there, and they go elsewhere. This is a game between attacker and defender, and if the attacker knows what the defender is doing, it is very easy for the attacker to change location or style and continue the game. So it is important not just to go where the crime is, but where the crime might be in the future, once attackers learn the defenders are coming. I’m going to finish in two minutes. People have studied these security games, for example in elephant-poaching and coastal-patrol scenarios, and have found much better performance because of that. Let me take the last minute to say what the challenges are in the use of AI in these scenarios. There are a lot of opportunities and a lot of excitement, but we have to be careful. First of all, we cannot make this autonomous.
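
The attacker-defender game can be illustrated with a toy two-site example (not from the talk): a single patrol unit randomizes its coverage so the attacker is indifferent between sites, which means shifting the attack gains nothing. All payoffs are hypothetical: `u_i` is the attacker’s payoff if site `i` is unguarded, `p_i` (a penalty, `p_i < u_i`) if the patrol is there.

```python
def indifference_coverage(u1, u2, p1, p2):
    # Solve  c1*p1 + (1-c1)*u1 == c2*p2 + (1-c2)*u2  with  c1 + c2 == 1,
    # i.e. make the attacker's expected payoff equal at both sites.
    c1 = (u1 - p2) / ((u1 - p1) + (u2 - p2))
    return c1, 1 - c1

# Site 1 is more attractive when unguarded, so it is patrolled more often,
# but not always -- a deterministic schedule would simply be exploited.
c1, c2 = indifference_coverage(u1=10, u2=6, p1=-5, p2=-5)

attacker_payoff_site1 = c1 * -5 + (1 - c1) * 10
attacker_payoff_site2 = c2 * -5 + (1 - c2) * 6
assert abs(attacker_payoff_site1 - attacker_payoff_site2) < 1e-9

print(round(c1, 3), round(c2, 3))  # 0.577 0.423
```

Deployed security-game systems (the poaching and coastal-patrol work mentioned above) solve much larger versions of exactly this equalization problem.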

If AI starts to reach out to citizens directly and makes mistakes, it will create a lot of problems. People will be unhappy and worried, and we will lose trust in the system. It is important that our intelligence analysts and AI work together: AI surfaces the issues, AI maybe generates a lead, but the lead is processed by a human to maintain trust in the system. That is where risk assessment becomes very important. Over-triggering is also a problem: if you raise lots of alerts, people become immune to them, and we have to figure out how to do this in a trustworthy fashion. If you don’t do this right, you get algorithmic bias; it has been shown that when AI was tried in judicial settings earlier, it made mistakes in favor of white people and against African-American people.

We are a very diverse society, with so many castes and social strata. If those biases got into our models, it would be very, very devastating. So we have to be very mindful that AI bias doesn’t creep in, and that we keep a human in the loop. It is also important that we gather data in a centralized repository. We are used to a system where one hand of the government doesn’t talk to the other; Project Insight is trying to fix that, and I’m very happy about it. Other initiatives are trying to fix it too. We should make sure the data comes together so that intelligence comes out of it, and that inter-jurisdictional boundaries don’t get in the way.

The other thing is that our defenders, our IT personnel, the analysts, the law enforcement agencies, need to be smarter than the attackers, because the attacker will always be creative in figuring out the next attack, and if we have not been thinking ahead of time, we will miss out. Finally, with all this new data coming in, there is obviously increased scrutiny and increased surveillance, and we have to make sure it doesn’t infringe on civil liberties and the privacy of individuals. These are the things we have to be mindful of, but I think the sky is the limit, and I’m really happy that this is being done and used more.

Amandeep Dhanoa

Thank you, sir, for such an insightful perspective. Now, Mr. Martin Wilcox, Senior Vice President at Teradata and a global leader in AI-driven data analytics, will speak on AI-driven risk analytics of financial data for law enforcement agencies. So please.

Martin Wilcox

…and understanding the networks of bad actors. But building these sorts of graphs at India scale is incredibly complicated, because graph analytics is an O(N²) problem. So we need, again, scalable, performant systems, and to bring the complex graph algorithms to the data in the data warehouse instead of trying to copy samples of data out of it. If we have to cut the graph by taking small samples out of the data warehouse, the risk is we miss the bad actors we are trying to catch. I want to talk a little now about next-generation AI use cases. At Teradata, when we speak of next-generation AI use cases, we typically look for four characteristics.

We won’t go through all four of those characteristics today in the interest of time, but as a couple of the previous speakers have mentioned, one of the defining characteristics of a lot of next-generation AI use cases is multimodal data: the idea that images, audio and text can be leveraged in the kinds of ways that previously applied only to structured transaction and event data. I’ll come back and talk about that specifically in a moment or two. But here is another example I thought might be interesting to some of you. It comes from Brazil’s largest credit union, a company called Sicredi.

The challenge for this particular organization is that Brazil has a large unbanked population outside the formal economy, and obviously it is very difficult to make credit risk and lending decisions for people who can’t prove their income. Sicredi’s solution is a sophisticated set of income estimation models: they predict an individual’s likely income and then make credit lending decisions on the basis of that predicted income. Now, this is a model that was trained outside of the database. We have a technology called Bring Your Own Model, which enables us to consume models regardless of where they’ve been trained. If you can export a model in PMML, in MOJO, or in ONNX, we can import it, and then we can use Teradata as a parallel harness to speed up the scoring of this model.
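
The Bring-Your-Own-Model pattern can be sketched as follows. This is a hedged illustration, not Teradata’s actual API: a model trained elsewhere is exported in a portable format (PMML/ONNX in the talk; plain JSON here), and the scoring engine only needs to evaluate it row by row, which is what lets a parallel database run inference in place next to the data. The features, coefficients and incomes are entirely made up.

```python
import json

# A linear income-estimation model "trained elsewhere", serialized portably.
trained_model = json.dumps({
    "type": "linear",
    "weights": {"age": 250.0, "avg_monthly_spend": 2.0},
    "intercept": 5_000.0,
})

def score(model_blob, row):
    # Pure row-at-a-time evaluation: no training logic needed at scoring
    # time, so it can run wherever the data lives, in parallel per row.
    model = json.loads(model_blob)
    y = model["intercept"]
    for feature, weight in model["weights"].items():
        y += weight * row[feature]
    return y

# Hypothetical applicants without documented income.
rows = [{"age": 30, "avg_monthly_spend": 12_000},
        {"age": 45, "avg_monthly_spend": 20_000}]
print([score(trained_model, r) for r in rows])  # [36500.0, 56250.0]
```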

And I think this is incredibly important, because we are at a moment in the industry where everybody wants to talk about model training, because model training is exciting and model training is cool. But actually, we don’t make any money when we train a model. We only make money when we deploy that model to production and run inference, in this case inference at India scale, to actually change the way we do business. This Bring Your Own Model technology lets us import models regardless of where they’ve been trained, so your data scientists can use the tools that make them the most productive, while you still have a mission-critical platform that enables you to score models in production.

We get very significant speed-ups when we use this technology. From the numbers on this slide, you’ll see that in this particular case, for the income estimation models in Brazil, we were able to run inference 25 times faster on the parallel data warehouse by bringing the complex processing to the data instead of the other way around. And 25 times faster is the difference between running this model once per day and once per hour; if you run the model once per hour, you can change your entire business model, you can change the cost of credit during the working day. Now, this next example is an example of the multimodal phenomenon we were talking about.

This is again another large Asian bank. This bank cares a lot about NPS, Net Promoter Score; they consider it the single most important leading indicator of customer intent: whether the customer will leave, or will stay and consume more products. The problem this bank has is very little understanding of the drivers of Net Promoter Score. But when we were working with them, we were able to establish that they were capturing 50,000 customer chats per week from the online banking application.

Amandeep Dhanoa

Thank you to all our speakers of the first category. As we move to category 2, we now shift from perspective to practice. Across India, regulatory and enforcement agencies have increasingly embedded artificial intelligence into their core systems. This segment brings together agencies that are not just exploring AI but actively deploying it to strengthen compliance, improve oversight and enhance citizen-centric services. For this, we have among us Shri Suvendu Pati from RBI, Shri Harsh Poddar, an IPS officer, Shri Ram Ganesh from CyberEye, Shri Amnesh Pandey from SEBI, and Shri Shashi Bhushan Shukla sir. All the sessions have been so interesting that I see most of the audience sticking to their seats.

So for the first session in this category, I introduce Shri Suvendu Pati. Sir is the Chief General Manager and Head of FinTech at the Reserve Bank of India, and will present MuleHunter, an AI-driven initiative targeting mule accounts. Sir, please.

Suvendu Pati

Good evening, everyone, and thank you for the opportunity. I will spend some time on the initiatives we have taken and then come to MuleHunter. First of all, recognizing the need for governance, the financial sector has been one of the early adopters of artificial intelligence, given that most of its decisions are based on data. RBI had constituted a committee; it submitted its report in August last year, and it has been placed on our website. It recommended seven sutras, or high-level design principles, and 26 recommendations: 13 on innovation and enablement, and 13 on risk mitigation.

Together with these sutras, there are six pillars under which these recommendations are classified. I am happy to report that these seven sutras, which we initially started as recommendations or guiding principles for the financial sector, have now been adopted by the Government of India as India’s design principles, or sutras, for AI governance across all sectors. On the left side you can see the recommendations of our RBI committee, and on the right side the principles published by the Government of India on November 5th, outlining those very sutras.

One of the foundational principles we are talking about is trust in the system. Any technology, no matter how powerful, will never be adopted unless it engenders trust; people should feel comfortable with the technology. Another principle, which cuts across every application, is putting people first: customers, citizens need to be protected at all times, and in high-risk areas and high-risk decisions one should have a human in the loop and the like. Another thing we have recommended is innovation over restraint: unless we experiment with this new technology, we will never realize its potential.

There is a lot of apprehension in people’s minds that it is a probabilistic, non-deterministic model and there may be mistakes. But unless we experiment, do sandbox testing and those kinds of experiments, we will never realize the true potential of this technology, so a little nudge was provided to the institutions: do experiment, do adopt. There are other principles which, in the interest of time, I will not talk about. These are some of the recommendations available in the report on our website; if time permits, you can go through them at your leisure. One of those recommendations is about setting up an AI sandbox. A critical reason we need this in India is that entities face constraints on the availability of compute infrastructure, and also on the availability of data. So, as a public good, we would enable an AI sandbox by making cross-sectoral and cross-institutional data available in an anonymized way, which can be used by entities and model developers.

And some of those things: capacity building, an AI liability framework, and another important element, how the customer needs to be protected. Moving on to the other principles, there are risk mitigations: how board policy should be formed, the product approval process, cybersecurity measures, red-teaming exercises. So there are, again, a balancing 13 recommendations on risk mitigation. Now, let me turn to the application we are talking about today. Mule accounts in our banking system are a real challenge, and given the huge volume of data we have, it is humanly not possible to deal with them without the use of technology or machines.

So we have developed the MuleHunter.ai application, which is now implemented across 26 banks; another three banks are in the process of implementing it. It has 857 features which have been identified so far, and it is getting better and better as the model gets trained across institutions. Out of these 857 features, for a bank like State Bank of India only 50 features may be very critical, whereas for other banks, say RBL Bank or IndusInd, another set of 50 features would be important. So this itself is providing insights. Based on our analysis and understanding, it is at a relatively early, progressing stage of implementation; these are the ways it is getting implemented over a period of time, and currently it is deployed on-prem within each bank.

So the data really doesn't leave the banks themselves, but there is a central aggregation service that we are running which takes the intelligence from the features to the central aggregation model. From the insights that are predicted, what we have identified is this: the rule-based engines that banks were implementing so far were giving a 20 to 30% level of accuracy, but with this MuleHunter AI-based model the accuracy level has gone up significantly, above 90% in some institutions, above 80% in others, and so on and so forth. As somebody said, the rule-based systems are handicapped: a large human element would be required to analyze a large volume of data.

But here this number is getting reduced. For example, we have found patterns like a lot of mule transactions taking place around midnight, when customer support is not there; this is a new feature which could be found out. Similarly, there are accounts which remain dormant for a long time, suddenly get active, receive a barrage of payments, receipts and debits happen, and then they go dormant again. These kinds of pattern detections were not possible earlier. And for accounts detected to be, say, salary accounts, the likelihood of being classified as a mule account is very low, so a kind of BRE (business rule engine) filter is screening out those kinds of accounts.
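The patterns the speaker describes (midnight activity when support desks are closed, long-dormant accounts bursting to life, salary accounts screened out by a business rule) can be sketched as simple feature functions. The thresholds, field layout and sample data below are illustrative assumptions, not MuleHunter's actual logic:

```python
from datetime import datetime, timedelta

def midnight_activity_share(txns):
    """Fraction of transactions between 00:00 and 05:00, when
    customer support is typically unavailable."""
    if not txns:
        return 0.0
    late = [t for t, _amount in txns if t.hour < 5]
    return len(late) / len(txns)

def dormant_burst_flag(txns, dormant_days=180, burst_count=10):
    """Flag a long-dormant account that suddenly receives a barrage
    of credits and debits (the dormancy-burst pattern described above)."""
    if len(txns) < burst_count + 1:
        return False
    txns = sorted(txns, key=lambda x: x[0])
    gap = (txns[1][0] - txns[0][0]).days  # silence before the burst begins
    return gap >= dormant_days

def is_likely_mule(txns, is_salary_account):
    # Business-rule (BRE) filter: salary accounts are rarely mules,
    # so they are screened out before any scoring is applied.
    if is_salary_account:
        return False
    return midnight_activity_share(txns) > 0.5 or dormant_burst_flag(txns)

# Illustrative data: one old transaction, then a burst of ten 200 days later.
base = datetime(2024, 1, 1, 12)
burst = [(base, 100.0)] + [(base + timedelta(days=200, minutes=i), 5000.0)
                           for i in range(10)]
```

A real deployment would learn hundreds of such features (the talk mentions 857) rather than hand-code three.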

It flags only the remaining ones. Banks need to do enhanced due diligence after this flag is raised. We are working closely with I4C, and one limited study of ours has revealed that for accounts predicted by MuleHunter, some banks initially classified them as not mule after doing the enhanced due diligence; but within a month or two we start seeing I4C complaints on those very accounts which were flagged, and such ratios range up to 60%.

That gives us the confidence that the model is identifying the mule accounts correctly, whereas the banks, constrained by their own branch banking and identification systems, are not classifying them correctly. Had we done this exercise on a sample, and in one bank we did take a sample, we see that around 75% of the accounts flagged by MuleHunter are indeed mule accounts.

Up to 100 crores of money could have been prevented if the bank had classified these as mule accounts on day zero and imposed debit freezes. These are some of the early insights that we are getting, and we are building on them as we progress. The future that we talk about is a digital payments intelligence platform: we are aiming at a real-time transaction scoring mechanism, meaning that at the time a transaction is going through, a score would be provided to the banks on whether to allow the transaction or not. Mule account detection today happens once a crime has been committed; we are trying to move it to a preventive action.
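The real-time scoring idea described here might look like the sketch below: each transaction gets a risk score from weighted features, and the bank allows it only if the score stays under a threshold. The feature names, weights and threshold are invented for illustration and are not the actual DPIP design:

```python
def score_transaction(features, weights, threshold=0.7):
    """Combine per-transaction risk features (each in [0, 1]) into a
    single weighted score; allow the payment only below the threshold."""
    total_w = sum(weights.values())
    score = sum(w * features.get(name, 0.0) for name, w in weights.items()) / total_w
    return {"score": round(score, 3), "allow": score < threshold}

# Hypothetical feature weights, e.g. learned offline from labeled fraud data.
weights = {"payee_is_flagged_mule": 0.5, "amount_anomaly": 0.3, "new_payee": 0.2}

decision = score_transaction(
    {"payee_is_flagged_mule": 1.0, "amount_anomaly": 0.8, "new_payee": 1.0},
    weights,
)
```

The point of scoring at transaction time, rather than after the fact, is exactly the shift from detection to prevention that the speaker describes.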

So this is where again AI is going to help us a lot, along with technology and partnering with telecom on mobile numbers which are suspect; that kind of filtering and a smart registry is being built. I4C is also providing us insights. So this is ecosystem building as a public good: we are not only giving directions, we have dirtied our hands in building this tool, which is now getting implemented at scale. But yes, there are a lot of improvements that can be made in partnership across all the banks. Thank you. If you are talking about the Supreme Court case, which dealt with the digital arrest cases and has formed an expert committee on this.

And Reserve Bank is also a part of that committee. But this initiative was already on much before the Supreme Court gave this direction; it is not something that we started building post the Supreme Court direction. This work was undertaken almost one and a half years back. Over a period of time, 26 banks have implemented it and more are implementing. It is a work in progress and gets refined as we speak. There are newer initiatives in the pipeline as well, as I said, and we are also working alongside the banks on how to move from a manual due-diligence-based procedure to a hybrid of automated and human-intelligence-backed enhanced due diligence.

That would be the ultimate safeguard for preventing these frauds and protecting the hard-earned money of gullible citizens.

Amandeep Dhanoa

Thank you, sir. We will keep all the questions and answers for the end of the session. Now, I quickly call upon Sri Harsha Poddar, Indian Police Service officer and an award-winning innovator in AI-driven policing, and Shri Ram Ganesh, cybersecurity expert and founder of CyberEye, to present cybercrime enforcement in action. Gentlemen, the floor is yours.

Ram Ganesh

Earlier, when a crime was registered and handed to an investigating officer, you had a series of supervisory meetings that would take place at the rank of the Deputy SP and the Additional SP in order to determine the path of the investigation. Today, what is happening is that this co-pilot is able to ingest the FIR and all the documentation of the investigation, and generate an investigative path that is compliant with the standard operating procedures laid out by that particular state government, in this case Maharashtra, as well as the High Court and Supreme Court judgments that outline the best practices for that kind of investigation. So, put broadly, there are four essential tasks that the co-pilot does.

After having generated this path, it also sends out a series of routine legal requests that we require for most investigations. These could be requests for telecom data or for forensic data. It also makes sense of digital forensics, by which I mean telecom data in organized crime; as I'm sure those of you who have worked in tax investigation are aware, there are vast volumes of telecom data that we gather, which we are able to analyze using the co-pilot. And then we also use open-source intelligence, which, again, in the police we use a fair amount of. From different open platforms (Facebook, PhonePe, Google Pay, etc.) it is able to gather data that is openly available and then make that a part of the investigation.

Essentially, this is what is happening: you have an adaptive investigation path that is unique to that particular case. So remember, it is not just an instance where it has spelt out or replicated the SOP for you; it has taken the SOP and the judicial pronouncements on that particular head of cases and adapted them to that case. That is what it actually does. In terms of case ingestion and how this exactly works: it ingests the FIR to start with. It also provides victim assistance, for example unfreezing of accounts and of volumes of money that have been frozen in cybercrime cases. It generates case diaries, which are the day-to-day progress of the investigation itself, and provides guided investigation paths which are compliant, as I said, with standard operating procedures.

And it also profiles people on the basis of open-source intelligence. Now, in my own district, as SP of Nagpur Rural, we have trained over 233 investigating officers on this, using which over 467 cases have been investigated over the past six months, before the launch by Mr. Satya Nadella and our Chief Minister. The co-pilot has actually enabled us to win a series of governance awards within the state of Maharashtra as well, but that is not so important. What is important here, and I also want to doff my hat a little to it, is the training process. When we are onboarding systems such as this, it is important for the institution doing so to create space for training. I know, having been a beneficiary of it myself, that the Income Tax Department lays a lot of stress on training across all ranks, something that we in the police department can learn from.

But this is something that we stressed upon very substantially, and that has been useful; it has also reduced resistance within the organization to onboarding and using it. I will end by outlining four basic technologies from the artificial intelligence silo; at MARVEL, these are the kinds of technologies that we work on. First is large language models, which, as was spelled out in the first session, are essentially artificial intelligence models that have been trained on large amounts of text and are able to interact in a manner very akin to a human being. The second is graph neural networks, which are artificial intelligence systems that make sense of siloed sets of data and the relational analysis between them.

So, in organized crime, that is very useful for being able to do a hub-and-spoke analysis of who is at the center of that crime. In Maharashtra we have an act called MCOCA, as you might be aware, where for organized crime you need to be able to find out who the center of the gang is. Third is agentic artificial intelligence, which is co-pilots such as this, triggering workflows and actually walking individuals through them. And the last is big data analytics, where there is structured analysis of large sets of data. That is the kind of work that we have been doing at MARVEL, and this is an instance of that. I will end with that. It has been a pleasure and a privilege.
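The hub-and-spoke idea can be illustrated with plain degree counting over a relationship graph; a production system would use graph neural networks over far richer relational data (calls, co-accused pairs, shared accounts). The node names and edges below are invented for illustration:

```python
from collections import Counter

def gang_hub(edges):
    """Given undirected relationship edges between suspects, return the
    node with the highest degree - a crude stand-in for the
    'who is at the center of the gang' analysis described above."""
    degree = Counter()
    for a, b in edges:
        degree[a] += 1
        degree[b] += 1
    return degree.most_common(1)[0][0]

# Hypothetical relationship data: "boss" is linked to three associates.
edges = [("boss", "a"), ("boss", "b"), ("boss", "c"), ("a", "b")]
```

Degree centrality is the simplest such measure; real investigations would also weigh edge types and use learned node embeddings.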

Thank you very much. Jai Hind.

Amandeep Dhanoa

Thank you, sir. Now I introduce Shri Avneesh Pandey, Executive Director at SEBI and a national voice on technology strategy and cybersecurity governance. Sir, please.

Avneesh Pandey

Thank you. First of all, a very good evening to all of you. It is indeed a great privilege to be here, and I thank CBDT for giving me this opportunity. For the past two days we have been listening to a lot of AI-based initiatives all over the place, but something that really struck me at SEBI some time back is that the most important thing is to build capacity for undertaking these AI initiatives. To that effect, we have truly democratized AI development within the organization. And I take quite some pride in introducing some of the names I have here in the crowd: Mr. Sandeep Kriplani, Mr. Rohit Saraf, Vikas Komera, Rajuddin Khan, and Pramit.

Pramit is the youngest of them. I will tell you why this is important: some of the initiatives that I am going to present to you today have been handcrafted by these intellectual minds, and they are not from the IT department of SEBI. So that is very important; it is truly democratized to that extent. From SEBI's perspective, we have quite a broad mandate: to protect the interest of investors, to promote the development of, and to regulate, the securities market. It is a fairly large mandate that we have. To that effect, we craft regulations and seek compliances; compliances are a major part of our regulatory processes. We also conduct investigations and initiate enforcement proceedings from the data that we collect from various sources.

Going forward, we also adjudicate, issue directions and levy penalties. Why am I saying this? This is to say that we have varied use cases within the organization where we have started to use the power of AI. There are four use cases that I would like to mention here that have been doing well in terms of generating valuable output for us. First is RIDAR, a tool which ensures very proactive compliance for the advertisements being issued by regulated entities, specifically mutual funds. Second is Sudarshan, a very important tool which is able to track the unregistered and misleading content that finfluencers are putting onto social media. Infomerge is the workflow intelligence that we have built to make our investigation processes more efficient, so that we are able to undertake that activity faster. And the cybersecurity compliance and audit tools are what we have built to ensure that the cybersecurity compliances being sent to SEBI are well read and that we are making good meaning out of them.

So I will take them one by one, though I am slightly cognizant of the time I have in hand. First is RIDAR, which, as I said, takes care of all the advertisements that the mutual fund industry is putting out. The tool basically looks into whether the advertisement is compliant with the regulatory requirements as mandated by the code of conduct. Some of the non-compliances this tool is able to capture are illustrated here, of which most of what we have caught is in terms of non-disclosures and disclaimers not being adequately put in. Moving next to Sudarshan, which is trying to combat a lot of financial frauds, including investment frauds.

So this is a tool which is able to capture the non-compliant frauds that are part of the securities market domain that we are involved with. Our media monitoring cell in SEBI has flagged nearly one lakh instances of misleading content on these platforms. To strengthen our approach, we built this product called Sudarshan, which does continuous monitoring. It is a multi-modal tool and works in multiple languages as well, knowing that some of these unscrupulous actors are using the capability of languages to defraud people. It has enhanced detection capabilities, which we validate against the data present within SEBI. By this we are able to figure out financial misinformation and ensure financial information integrity.

Infomerge, as we call it, is a tool for our investigation process. As you all know, the investigation process runs from case initiation through data collation and data analysis to report generation. By using this tool we are able to systematize all the data collected from various sources into one format. We are able to look into the company profile, designations and financials of a particular company, and also figure out what corporate announcements were made during the investigation period. Visualization is the means by which people are able to see the patterns, and the tool has some very innovative features to provide that; and finally, the report writing.

From one investigating officer to another, there has always been variance. To ensure that we have a standardized mechanism to produce a report, and to get it in an orderly manner, this particular tool helps with the last part of writing the report. Of course, those reports again go for review, with a human in the loop. Coming to the last system, which we launched very recently: SEBI initiated a cyber resilience and cybersecurity framework, based on which we have started to receive a lot of compliances, meaning controls, levels, and artifacts being submitted for those compliances. This particular tool autonomously reads those compliances and flags where an audit report is missing.

So we have a very novel three-model architectural framework, so that if one particular model is hallucinating, the other models take care of it and give a reasonable, meaningful analysis. Apart from giving dashboards and real-time visibility, it ensures that at SEBI we are able, at any point in time, to do a relative analysis of all our intermediaries and know where they stand on cybersecurity measures. So that was a very quick run-through; sorry if I have been too fast, I was trying to keep pace with the seconds that were ticking. Thank you.
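One simple way to realize the "if one model hallucinates, the others take care of it" idea is majority voting across independent model runs. The sketch below assumes three models answering the same compliance question; the setup mirrors the description, not SEBI's actual architecture, and the outputs shown are invented:

```python
from collections import Counter

def cross_checked_answer(model_outputs):
    """Keep an answer only when a strict majority of independent models
    agrees; otherwise escalate to human review (return None)."""
    counts = Counter(model_outputs)
    answer, votes = counts.most_common(1)[0]
    if votes > len(model_outputs) // 2:
        return answer
    return None  # no majority: a human reviewer takes over

# Hypothetical outputs from three independent models on one audit query.
outputs = ["audit report missing", "audit report missing", "control satisfied"]
```

Voting only helps when the models fail independently; production systems often add answer-grounding checks on top of agreement.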

Amandeep Dhanoa

Thank you, sir, it ended in very clockwork fashion. Now we come to the final technical session. Shri Shashi Bhushan Shukla, Principal Commissioner at CBDT and a key architect behind the Data Analytics Cell and the Saksham Nudge initiative, will speak on the use of AI for ease of compliance in tax administration. We look forward to your insights.

Shashi Bhushan Shukla

Thank you, Aman. Good evening, everyone. I think we are almost at the closing of this session and are left with maybe five or seven minutes, but since this is the last session, we can take a few more minutes. This is the journey of the Income Tax Department. The Income Tax Department has been a pioneer in the adoption of state-of-the-art technology, and we started using technology quite early; I have given a few examples. Let us take a look at the graph of the last 25 years. The filing of TDS returns started in 2004, followed by e-filing of returns; then Taxnet was launched, CPC started in 2009, and then CPC TDS. As Professor Mausam mentioned, the Income Tax Department is highly taxpayer-service oriented, and we show taxpayers the financial information which is available with the department: we started showing Form 26AS from 2017 onwards. We have automated several processes of the department, and the department has launched online issuance of Form 16, faceless assessment, and the e-filing portal in 2021, while the national cyber forensic policy was launched in 2024. Two years ago we started an initiative called Nudge, which is the non-intrusive use of data to guide and enable taxpayers. In the first session my colleague talked about Insight 2.0, and there are several projects which are now getting updated with state-of-the-art technology, including the use of artificial intelligence, to enhance the taxpayer's experience of filing tax and the ease of tax compliance. So the department is using technology, including artificial intelligence, keeping the taxpayer at the heart of it.

If we talk about the data the Income Tax Department has: there is vast data from several sources. For example, PAN: 80 crore people have already been issued PAN, and it might have reached a little more by now. More than 9 crore ITRs are filed, 12 crore people are paying taxes, and then for SFT we get more than 650 crore data fields for the specified financial transactions, which are populated in the AIS; that is a huge amount of data available with the department. We also collect data under Rule 114B, Form 60 is submitted, and then the specified financial transaction statement, Form 61A, is submitted.

We also receive information from foreign jurisdictions. More than 100 countries share foreign asset and foreign income information with India. We receive around 50 lakh pieces of information every year under the CRS and FATCA frameworks, which is automatic exchange of information, and we also share information in respect of non-residents who have assets in India with the respective foreign jurisdictions; around 1 crore pieces of information are transacted. So this is a lot of information that we have, a lot of data, including assessment orders and appeal orders. This data can be utilized within our projects, Insight 2.0 and ITBA, and the CPC, for generation of intelligence, for better compliance, for the awareness of taxpayers, and for informing taxpayers for the payment of correct taxes.

This Nudge initiative was started two years back. We are using the data coming from various sources, including from foreign jurisdictions, for educating the taxpayer, guiding the taxpayer to comply with the tax laws, and enabling them to correct their filing and declare their correct assets and income. Nudge has a seven-step strategy captured in the word "Saksham", which in Hindi means "empowered". This strategy is basically empowering the department as well as the taxpayers for the filing of correct taxes. The seven steps describe how we use the data: first, Sankalan, basically compilation and collection of data, since, as we have discussed, we have a lot of data coming from diverse sources.

Then Anusandhan: how we analyze and do research over the data to generate insight and intelligence for risk identification, and how we act on the data with actionable interventions for targeted outcomes. Then we do the communication, basically informing the taxpayers that there is something they may need to review in their filing, and that maybe they will have to change their income or their computation. This is where we use behavioral insights and guide taxpayers to pay the correct taxes. At the same time, we are also hand-holding and facilitating them through the fifth step, which is called ASTAK, and then ADHIKAR, enablement of the taxpayer for the payment of taxes.

Legal changes have been brought into the Income Tax Act, whereby taxpayers are now allowed to update their ITR by payment of additional taxes and to correct their income. Using this, we ask taxpayers, for a window of up to four years, to come out with the right taxes in the end. It is basically a preemptive exercise: no punitive action is taken against the taxpayers, there are no penal consequences, and the taxpayers are allowed to change the ITR which was filed originally. This whole cycle is then completed through evaluation, where we take the feedback of taxpayers; the responses we receive are analyzed, and all these steps can be further improvised so that in the next nudge we can communicate with the taxpayers with better information and better communication. This strategy has actually yielded very good results: the taxpayers have responded well, it has been received well, and the trust shown by the department has given very good results. I have given a few case studies here, some outcomes, which were also discussed by the Chairman in his opening remarks. If we look at the recent foreign asset nudge, carried out in the month of December, we sent messages to taxpayers stating that they may have some foreign asset which has not been reported in their ITR; the taxpayers then revised their ITRs, and 1.57 lakh taxpayers disclosed their foreign assets, worth 99,000 crore.

This exercise shows that once taxpayers are informed of what the department knows and of what they missed while filing their ITR, they may come forward and declare it. This has resulted in 6,540 crore of additional income and 99,000 crore of assets. Similarly, we have taken up a few more exercises. The other one I have mentioned concerns bogus donations and bogus deductions claimed by taxpayers by taking fake receipts from unrecognized political parties, and in some cases from entities, NGOs, which are not eligible for donation. Here also the result has been quite encouraging: 6.96 lakh taxpayers revised their ITRs and withdrew claims worth 9,879 crore, which has given the department additional taxes of 1,758 crore.

And if we see how these campaigns have actually resulted in behavioral change in taxpayers, these two graphs explain it. If you look at the foreign asset behavior pattern, the filing of foreign assets by taxpayers has increased from 1.59 lakh, which was before the Nudge campaign started, to 4.7 lakh now, almost a three-times increase in a span of two years through this Nudge campaign. Similarly, the claim of deductions has gone down in the last two years, reduced almost by half, from 7,400 crore to almost 4,000 crore. So this is the power of data and the data analytics that we do using technology.
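As a quick back-of-the-envelope check on the figures quoted above (taken directly from the talk): foreign-asset filings rose from 1.59 lakh to 4.7 lakh, and suspect deduction claims fell from about 7,400 crore to about 4,000 crore.

```python
# Figures as quoted in the talk.
filings_before, filings_after = 1.59, 4.7    # in lakh filers
claims_before, claims_after = 7400, 4000     # in crore rupees

growth = filings_after / filings_before                    # ~2.96x, "almost three times"
reduction_pct = (claims_before - claims_after) / claims_before * 100  # ~46%, "almost half"
```

Both quoted characterizations check out: roughly a threefold rise in filings and close to a halving of suspect claims.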

And as discussed in the Insight 2.0 project, with the use of artificial intelligence and better technology we will be able to identify anomalies much faster, and we can nudge taxpayers at the time of filing the return, or even before the return is processed. I also wanted to discuss one particular topic here, because there are many representatives from law enforcement agencies present. India is leading a project on the misuse and threats of AI in tax crime and financial crimes. It is a 17-country group being led by India, and we request all the LEAs: if they have come across any misuse of AI, or any challenge or risk of AI in their regular working and the administration of their institutions, they can communicate with us, so that we can take it forward at the international level in a collaborative manner and try to find solutions to various problems.

The misuses reported so far are basically the generation of synthetic identities and deepfake documents, and sometimes the fabrication of court orders. These are the AI-assisted misuses which are happening and which are a challenge for all law enforcement agencies. The RBI has come out with the MuleHunter software; maybe for synthetic identity identification also we can use AI, where it can further support the law enforcement agencies in identifying such misuses, so that we can take some preemptive measures before such an attack takes place. So this is what I would request; we will also send a communication, but please keep in mind that this project is going on. And if we talk about the future use of AI in the department: basically, how will we be able to enable our taxpayers to pay correct taxes at the right time, without any penalty or additional tax?

So this is what we are trying in the department: we should use AI in an informative manner. It should also be able to cross-validate various data sources, so that any anomaly can be predicted on a real-time basis and in a proactive manner. When I say real-time basis: maybe at the time the taxpayer is preparing to file the return, we can show the financial data we have received from third parties. At the time of filing, we can use prompts where taxpayers can be informed if they are making any wrongful claims, or not reporting assets which may be in the knowledge of the department.

Then, once the return is filed, before verification we can further analyze the returns and prompt the taxpayers to correct them before processing; after processing, we can carry out the further nudge exercise. So all of this will make Nudge a complete 360-degree program, where we enable taxpayers right from the beginning to pay the correct taxes. This is how we plan to use AI where taxpayer services are concerned. For administration, obviously, we are making ourselves capable, training our manpower and adopting the technology, to serve the country better and also to collect revenue correctly and on time. With this I will end; this is the closing thought, and it is for everyone to read. Thank you so much.
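The cross-validation of a return against third-party data (AIS/SFT style) that the speaker describes might be sketched as follows. The field names, tolerance and message wording are hypothetical, purely to illustrate the pre-filing prompt idea:

```python
def prefiling_prompts(declared, third_party, tolerance=0.05):
    """Compare taxpayer-declared figures against third-party records and
    return gentle review prompts for any field differing by more than
    the tolerance - a sketch of the real-time nudge described above."""
    prompts = []
    for field, reported in third_party.items():
        claimed = declared.get(field, 0.0)
        if reported and abs(claimed - reported) / reported > tolerance:
            prompts.append(
                f"Please review '{field}': you declared {claimed}, "
                f"while department records show {reported}."
            )
    return prompts

# Hypothetical taxpayer: interest income roughly matches records,
# but a foreign asset known to the department was left undeclared.
prompts = prefiling_prompts(
    declared={"interest_income": 10000.0, "foreign_assets": 0.0},
    third_party={"interest_income": 10500.0, "foreign_assets": 250000.0},
)
```

Keeping a tolerance band avoids nagging taxpayers over rounding differences, in keeping with the non-intrusive spirit of the Nudge program.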

Amandeep Dhanoa

Thank you, sir. Thank you, sir. And now I invite Shri Mahadevan K, Joint Commissioner of Income Tax for the vote of thanks.

Mahadevan K

Respected Honorable Chairman, distinguished speakers, eminent guests, colleagues and participants. It is my privilege to propose the vote of thanks at the conclusion of this highly interesting session. Today's deliberations have clearly demonstrated that artificial intelligence is no longer aspirational; it is operational. I begin by expressing sincere gratitude to Honorable Chairman, CBDT, Shri Ravi Agrawal, sir, for his visionary opening remarks. In particular, sir highlighted how the new Income Tax Act would be tech-driven to reduce litigation over interpretations, emphasized the use of AI in an ethical manner while ensuring accountability and transparency, and set the strategic direction for AI-enabled trust-based governance. The session on Project Insight 2.0 by Mr.

Srinivasan T, Sri Abhishek Kumar and Sri Ramesh Reveru demonstrated how Insight 2.0 is reshaping the taxpayer's life cycle through AI-enabled prefiling, conversational chatbots, behavioral nudges, AI-based litigation risk assessment and vulnerability prediction. The vision of a sovereign SLM for the tax domain stands out as a transformative initiative. The session on a roadmap for AI in law enforcement by Professor Mausam outlined AI applications across preventive, predictive and investigative domains using visual, textual, financial and multimodal analytics. He highlighted use cases such as facial recognition, anomaly detection and crime forecasting to enable intelligence-led enforcement, and emphasized human-AI teams, explainability, bias mitigation and civil liberties as essential safeguards. These aspects brought conceptual clarity and policy depth to the discussion.

The session on AI-driven risk analytics by Mr. Martin Wilcox highlighted how data analytics enhances enforcement through graph analytics, in-database model deployment, and leveraging vector stores and multimodal AI for intelligent querying. The transition from a system of record to a system of intelligence was particularly compelling. The session on Maha Crime OS AI by Sri Harsha Poddar, Sri Ram Ganesh and Sri Vikram Kale powerfully addressed the investigation crisis and showed how AI enables automated crime-handle extraction and guided investigation workflows, combined with 360-degree profiling integrating with CDR and other tools. In particular, the emphasis on a human-in-the-loop architecture ensures accountability alongside efficiency.

The session on AI for ease of compliance by Sri Shashi Bhushan Shukla sir illustrated the Income Tax Department's evolution towards AI-driven platforms such as Insight 2.0, ITBA 2.0 and Saksham Nudge. Sir explained how large-scale data integration and cross-validation enable risk-based, proactive and real-time compliance support. The focus was on shifting from enforcement-led systems to AI-enabled trust-based voluntary compliance and taxpayer-centric services. The session on MuleHunter by Sri Sumanthapati highlighted the FREE-AI framework with its seven sutras, six pillars and structured recommendations balancing innovation and risk mitigation. The presentation on MuleHunter demonstrated how advanced ML models, graph analytics and real-time risk scoring are strengthening mule account detection. The proposed DPIP collaborative platform further reflects a forward-looking, ecosystem-wide approach to AI-enabled financial integrity and supervisory resilience.

The session on AI-driven regulatory enforcement by Sri Avneesh Pandey highlighted how SEBI is operationalizing AI across enforcement, including proactive compliance review, real-time detection of misleading financial content and its influencers, and AI-driven cybersecurity audit compliance. These initiatives reflect how AI can strengthen investor protection while ensuring regulatory prudence. A special word of appreciation to Srimati Amandeep Dhanoa for her engaging and energizing moderation. Today's session reaffirmed that AI enhances risk intelligence, improves service delivery, strengthens regulatory oversight and enables data-driven governance. On behalf of CBDT, I extend heartfelt gratitude to all speakers, institutions, organizations and participants. A special word of appreciation also to the principal CCIT Delhi headquarters team and the DGIT investigation team, Delhi, for their dedicated support and meticulous coordination in organizing this event.

Thank you all for making this session impactful and forward-looking. With this, I formally conclude the session. Thank you.

Related Resources: Knowledge base sources related to the discussion topics (28)
Factual Notes: Claims verified against the Diplo knowledge base (2)
Correction (high)

“Amandeep Dhanoa opened the symposium by welcoming the audience and emphasizing that artificial intelligence is reshaping every domain of governance.”

The knowledge base records the opening speaker as Amandeep Dhanua, an Indian Revenue Service officer, not Dhanoa, indicating the name in the report is misspelled [S1] and [S4].

Correction (high)

“Chairman Ravi Agrawal (also spelled Agarwal) outlined the forthcoming Income‑Tax Act 2025, which will become effective on 1 April 2026 and will create a rule‑based, technology‑driven tax administration.”

The knowledge base identifies Ravi Agrawal as the Editor-in-Chief of Foreign Policy Magazine and host of FP Live, with no reference to him being the Chairman of the Income-Tax Department or authoring an Income-Tax Act 2025, suggesting the report’s attribution is inaccurate [S7].

External Sources (93)
S1
AI-Driven Enforcement_ Better Governance through Effective Compliance & Services — -Amandeep Dhanoa- Indian Revenue Service Officer of 2018 batch, Moderator of the symposium
S2
AI-Driven Enforcement_ Better Governance through Effective Compliance & Services — 1133 words | 59 words per minute | Duration: 1151 seconds Thank you sirs. We are already running behind by 20 minutes …
S3
AI-Driven Enforcement_ Better Governance through Effective Compliance & Services — Thank you. Thank you, sir, for setting the tone and direction so clearly. Now, as we begin with category one, we turn to…
S4
https://dig.watch/event/india-ai-impact-summit-2026/ai-driven-enforcement_-better-governance-through-effective-compliance-services — Thank you. Thank you, sir, for setting the tone and direction so clearly. Now, as we begin with category one, we turn to…
S5
Announcement of New Delhi Frontier AI Commitments — -Abhishek: Role/Title: Not specified (invited as distinguished leader of organization), Area of expertise: Not specified
S6
AI-Driven Enforcement_ Better Governance through Effective Compliance & Services — -Shri Ravi Agrawal- Chairman, Central Board of Direct Taxes; Indian Revenue Service Officer of 1988 batch with over thre…
S7
Defending the Cyber Frontlines / Davos 2025 — – Ravi Agrawal: Editor-in-Chief of Foreign Policy Magazine, host of FP Live Ravi Agrawal: Hi, everyone. My name is Ra…
S8
AI-Driven Enforcement_ Better Governance through Effective Compliance & Services — -Avneesh Pandey- Executive Director at SEBI; national voice on technology strategy and cybersecurity governance
S9
AI-Driven Enforcement_ Better Governance through Effective Compliance & Services — Speakers:Avneesh Pandey, Other speakers Speakers:Avneesh Pandey, Shri Ravi Agrawal Speakers:Shri Ravi Agrawal, Profess…
S10
AI-Driven Enforcement_ Better Governance through Effective Compliance & Services — -Justice R. Mahadevan- Joint Commissioner of Income Tax
S11
Indias Roadmap to an AGI-Enabled Future — VLSI and as it turns out there are a host of issues that need if you ask me serious discussion and brainstorming. Primar…
S12
AI-Driven Enforcement_ Better Governance through Effective Compliance & Services — 668 words | 144 words per minute | Duration: 276 seconds Respected Honorable Chairman, Distinguished Speakers, Eminent…
S13
AI-Driven Enforcement_ Better Governance through Effective Compliance & Services — -Ramesh Revuru- Global Head of Engineering at LTI Mindtree
S14
https://dig.watch/event/india-ai-impact-summit-2026/ai-driven-enforcement_-better-governance-through-effective-compliance-services — insights who has been instrumental in shaping the income tax department’s digital ecosystem through project insight and …
S15
AI-Driven Enforcement_ Better Governance through Effective Compliance & Services — -Suvendu Pati- Chief General Manager and Head of FinTech at the Reserve Bank of India
S16
https://dig.watch/event/india-ai-impact-summit-2026/ai-driven-enforcement_-better-governance-through-effective-compliance-services — All the sessions being so interesting that I see most of the audience sticking to their seats. So for the first session …
S17
AI-Driven Enforcement_ Better Governance through Effective Compliance & Services — -Ram Ganesh- Cyber security expert and founder of CyberEye
S18
Announcement of New Delhi Frontier AI Commitments — -Ganesh: Role/Title: Not specified (invited as distinguished leader of organization), Area of expertise: Not specified
S19
AI-Driven Enforcement_ Better Governance through Effective Compliance & Services — -Shashi Bhushan Shukla- Principal Commissioner at CBDT; key architect behind data Analytics Cell and Saksham Nudge Initi…
S20
AI-Driven Enforcement_ Better Governance through Effective Compliance & Services — Speakers:Abhishek Kumar, Shashi Bhushan Shukla Speakers:Shashi Bhushan Shukla, Suvendu Pati Speakers:Suvendu Pati, Sha…
S22
AI-Driven Enforcement_ Better Governance through Effective Compliance & Services — Martin Wilcox from Teradata began addressing scalability challenges in graph analytics for identifying networks of bad a…
S23
AI-Driven Enforcement_ Better Governance through Effective Compliance & Services — -T. Srinivasan- Technology Lead at LTI Mindtree; brings three decades of enterprise technology leadership
S24
Journal of International Commerce and Economics — – – Online Casino City. 2008. Costa Rica, Antigua file for WTO arbitration. Press Release, February 1. http://online….
S26
AI-Driven Enforcement_ Better Governance through Effective Compliance & Services — Speakers:Professor Mausam, Martin Wilcox Speakers:Professor Mausam, Ram Ganesh, Shri Ravi Agrawal Speakers:Professor M…
S27
How Small AI Solutions Are Creating Big Social Change — But what very few people actually know is that the actual performance of what we do at the moment is not 99.999% So mo…
S28
Ethics and AI | Part 6 — Even if the Act itself does not make direct reference to “ethics”, it is closely tied to the broader context of ethical …
S29
Ethical AI_ Keeping Humanity in the Loop While Innovating — Artificial intelligence | Building confidence and security in the use of ICTs Ghosh stresses that accountability requir…
S30
National Disaster Management Authority — Complete automation in AI systems that deal with human lives poses significant risks. Messages sent through early warnin…
S31
Safe and Responsible AI at Scale Practical Pathways — Srivastava argues that data governance policies must be built into the technical infrastructure and automatically enforc…
S32
How the Global South Is Accelerating AI Adoption_ Finance Sector Insights — -Regulatory Approach and Framework: India’s Reserve Bank of India (RBI) has adopted a progressive, principles-based appr…
S33
How the Global South Is Accelerating AI Adoption_ Finance Sector Insights — Regulatory Approach and Framework: India’s Reserve Bank of India (RBI) has adopted a progressive, principles-based appro…
S34
Secure Talk Using AI to Protect Global Communications & Privacy — Ratan Kumar Kesh explains that banks have developed effective systems to detect unusual transaction patterns using AI an…
S35
Towards a Safer South Launching the Global South AI Safety Research Network — Mr. Singh explains that the network launch aligns with the New Delhi Frontier AI commitments where all models committed …
S36
Ethical AI_ Keeping Humanity in the Loop While Innovating — Well, first of all, excuse me for the voice, but that’s it. Exactly, but thanks to technology, you can hear me anyway. S…
S37
Ethics and AI | Part 6 — Even if the Act itself does not make direct reference to “ethics”, it is closely tied to the broader context of ethical …
S38
The fading of human agency in automated systems — To address concerns about automation, policy and governance discussions often invoke the concept of ‘human-in-the-loop’ …
S39
WS #283 AI Agents: Ensuring Responsible Deployment — User control and human oversight are essential safeguards, particularly for high-impact decisions that are difficult to …
S40
Pre 12: Resilience of IoT Ecosystems: Preparing for the Future — As AI becomes integrated into IoT systems, proper governance frameworks are essential to ensure ethical and trustworthy …
S41
To share or not to share: the dilemma of open source vs. proprietary Large Language Models — Bilel Jamoussi:Since you mentioned Meta, I’ll go to Melinda and ask you about Meta has made significant contributions to…
S42
Al and Global Challenges: Ethical Development and Responsible Deployment — Waley Wang:Ladies and gentlemen. Dear friends. Good afternoon. My name is Willy. As a member of CCIT. It’s my honor to d…
S43
AI-Driven Enforcement_ Better Governance through Effective Compliance & Services — Summary:Srinivasan advocates for sovereign, domain-specific SLMs with complete data control within individual systems, w…
S44
Policies and platforms in support of learning: towards more coherence, coordination and convergence — – (a) Internal learning is part of the staff’s longer-term engagement with their organization on a learning and developm…
S45
How Trust and Safety Drive Innovation and Sustainable Growth — Explanation:Despite representing different perspectives (UK regulator, Singapore regulator, and industry), there was une…
S46
Agents of Change AI for Government Services & Climate Resilience — This comment shifted the guardrails discussion from seeking perfection to accepting probabilistic nature while maintaini…
S47
Building the Next Wave of AI_ Responsible Frameworks & Standards — This comment addresses a fundamental tension in AI deployment – the mismatch between probabilistic AI behavior and deter…
S48
AI-Driven Enforcement_ Better Governance through Effective Compliance & Services — “As you would all be aware, targeted nudges have led to 1.11 crore taxpayers filing updated returns with a revenue impa…
S49
Overview of AI policy in 15 jurisdictions — Summary China remains a global leader in AI, driven by significant state investment, a vast tech ecosystem and abundant …
S50
Operationalizing data free flow with trust | IGF 2023 WS #197 — However, while emphasizing the importance of the principle of free flow of data, it is cautioned that this principle sho…
S51
WS #84 The Venn Intersection of Cyber and National Security — Data localization and privacy policies Monojit Das discusses the challenges arising from differences in data localizati…
S52
Cross-Border Data Flows: Harmonizing trust through interoperability mechanisms (DCO) — For instance, the United States gives control of data to the private sector, while Europe places individual rights over …
S53
Open Forum #7 Advancing Data Governance Together Across Regions — Tattugal Mambetalieva from Kyrgyzstan explained that her country deliberately avoids data centralization and localizatio…
S54
Secure Talk Using AI to Protect Global Communications & Privacy — High level of consensus with significant implications for industry transformation. All speakers agree that traditional a…
S55
Secure Talk Using AI to Protect Global Communications & Privacy — Consensus level:High level of consensus with significant implications for industry transformation. All speakers agree th…
S56
WS #279 AI: Guardian for Critical Infrastructure in Developing World — Implement robust data governance and secure model development practices
S57
AI as critical infrastructure for continuity in public services — The discussion revealed that data sovereignty encompasses more than simple data localization. As Pramod noted, true sove…
S58
Discussion Report: Sovereign AI in Defence and National Security — Examples include the lack of transparency in ChatGPT’s training data and alignment process, with multibillion dollar law…
S59
AI governance struggles to match rapid adoption — Accelerating AI adoptionis exposingclear weaknesses in corporate AI governance. Research shows that while most organisat…
S60
MahaAI Building Safe Secure & Smart Governance — Thank you, Devroop. I think… we need to first take a holistic view of what are we trying to achieve with AI. The tagli…
S61
HETEROGENEOUS COMPUTE FOR DEMOCRATIZING ACCESS TO AI — “basic agenda for this AI impact term is welfare for all, happiness for all.”[14]. “policy … power, electricity, water…
S62
Leaders’ Plenary | Global Vision for AI Impact and Governance Morning Session Part 1 — man’s promise. It can enhance public service delivery, it can improve decision -making, it can optimize resource managem…
S63
AI-Driven Enforcement_ Better Governance through Effective Compliance & Services — The new Income Tax Act 2025, effective from April 1, 2026, will create a technology-driven ecosystem that simplifies lan…
S64
AI-Driven Enforcement_ Better Governance through Effective Compliance & Services — NUDGE initiative using seven-step Saksham strategy has yielded significant results with 1.11 crore taxpayers filing upda…
S65
Ethical AI_ Keeping Humanity in the Loop While Innovating — Well, first of all, excuse me for the voice, but that’s it. Exactly, but thanks to technology, you can hear me anyway. S…
S66
How the Global South Is Accelerating AI Adoption_ Finance Sector Insights — Regulatory Approach and Framework: India’s Reserve Bank of India (RBI) has adopted a progressive, principles-based appro…
S67
How the Global South Is Accelerating AI Adoption_ Finance Sector Insights — -Regulatory Approach and Framework: India’s Reserve Bank of India (RBI) has adopted a progressive, principles-based appr…
S68
Secure Talk Using AI to Protect Global Communications & Privacy — An audience question about government initiatives revealed evolving regulatory responses. The Reserve Bank of India has …
S69
https://dig.watch/event/india-ai-impact-summit-2026/ai-driven-enforcement_-better-governance-through-effective-compliance-services — And if we see how these campaigns have actually resulted into the behavioral change in the taxpayer. So these two graphs…
S70
Opening — The overall tone was formal yet optimistic. Speakers acknowledged the serious challenges posed by rapid technological ch…
S71
Powering the Technology Revolution / Davos 2025 — The tone was generally optimistic and forward-looking, with panelists highlighting opportunities for innovation and prog…
S72
Governments, Rewired / Davos 2025 — The overall tone was optimistic and forward-looking, with speakers highlighting the transformative potential of technolo…
S73
Next-Gen Industrial Infrastructure / Davos 2025 — The tone was largely optimistic and forward-looking, with speakers enthusiastically sharing their visions and initiative…
S74
AI in Mobility_ Accelerating the Next Era of Intelligent Transport — The discussion maintained a serious, urgent tone throughout, driven by the gravity of India’s road safety crisis. While …
S75
Host Country Open Stage — The tone throughout the discussion was consistently optimistic and solution-oriented. All presenters maintained a profes…
S76
AI for Good Technology That Empowers People — The tone was consistently optimistic and collaborative throughout, with speakers demonstrating genuine enthusiasm for so…
S77
Internet Governance Forum 2024 — During theWS #82 A Global South perspective on AI governance, Jenny Domino raised concerns about the reliance on AI-powe…
S78
WS #219 Generative AI Llms in Content Moderation Rights Risks — The discussion maintained a consistently serious and concerned tone throughout, with speakers demonstrating deep experti…
S79
Transforming Health Systems with AI From Lab to Last Mile — The discussion maintained a cautiously optimistic and collaborative tone throughout. It began with enthusiasm about AI’s…
S80
AI and Digital Developments Forecast for 2026 — The tone begins as analytical and educational but becomes increasingly cautionary and urgent throughout the conversation…
S81
How AI Is Transforming Indias Workforce for Global Competitivene — There are risks of over-automation without adequate human oversight and potential bias issues
S82
Laying the foundations for AI governance — – The need for collaboration between industry and regulators Lan Xue: Okay. I think my job is easier. I can say I agree…
S83
Comprehensive Report: European Approaches to AI Regulation and Governance — The discussion maintained a professional, collaborative tone throughout. Both speakers demonstrated mutual respect and a…
S84
How to make AI governance fit for purpose? — The discussion maintained a collaborative and optimistic tone throughout, despite representing different national perspe…
S85
From principles to practice: Governing advanced AI in action — – Balancing rapid technological advancement with necessary governance frameworks across different regional approaches A…
S86
WS #288 An AI Policy Research Roadmap for Evidence-Based AI Policy — Alex Moltzau: Yes, so one thing that I didn’t mention that we are working on currently is also these AI regulatory sandb…
S87
Open Mic & Closing Ceremony — The overall tone was formal yet appreciative. There was a sense of accomplishment and gratitude expressed throughout, wi…
S88
Closing Ceremony — The discussion maintains a consistently positive and collaborative tone throughout, characterized by gratitude, celebrat…
S89
Launch / Award Event #159 Book Launch Netmundial+10 Statement in the 6 UN Languages — The tone was consistently celebratory, appreciative, and forward-looking throughout the session. Participants expressed …
S90
Closing remarks — The tone is consistently celebratory, optimistic, and forward-looking throughout the discussion. It maintains an enthusi…
S91
Impact & the Role of AI How Artificial Intelligence Is Changing Everything — We make systems and making decisions about who receives public services, who qualifies for a loan, or who is flagged for…
S92
Day 0 Event #172 Major challenges and gaps in intelligent society governance — Ru Peng: Ladies and gentlemen, friends from Riyadh and Beijing, both online and offline, good afternoon. At present, …
S93
Main Session on Artificial Intelligence | IGF 2023 — In terms of incentives for adopting voluntary standards, they were seen to vary. Some incentives mentioned include trust…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Abhishek Kumar
1 argument | 112 words per minute | 194 words | 103 seconds
Argument 1
AI can streamline the taxpayer lifecycle, tag legal issues, predict litigation vulnerability and boost compliance (Abhishek Kumar)
EXPLANATION
Abhishek Kumar explains that AI can provide taxpayers with quick, accurate information, enhance NERJ campaigns, assess litigation risk, and use large language models to tag and link legal documents, ultimately predicting case vulnerability and reducing litigation.
EVIDENCE
He outlines steps such as quick availability of accurate information, more effective NERJ campaigns, AI-driven litigation risk assessment, tagging of assessment, appellate and judicial orders using LLMs, and predicting case vulnerability to enable reduction of litigation [88-97].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
S2 outlines quick provision of accurate taxpayer information, enhanced NERJ campaigns and AI‑driven litigation risk assessment, supporting Kumar’s claim; S1 lists AI‑driven enforcement steps.
MAJOR DISCUSSION POINT
Tax compliance automation
AGREED WITH
Shri Ravi Agrawal, Shashi Bhushan Shukla, Justice R. Mahadevan
T. Srinivasan
2 arguments | 204 words per minute | 1420 words | 416 seconds
Argument 1
Sovereign, domain‑specific language models (SLMs) with LoRA adaptation provide secure, accurate tax‑service AI (T. Srinivasan)
EXPLANATION
Srinivasan describes the creation of sovereign, tax‑domain language models (SLMs) that are adapted using LoRA, allowing low‑cost training on tax‑specific data while keeping the data secure and within the department.
EVIDENCE
He explains that SLMs are built on top of regular LLMs, trained with LoRA using only 1-2% of full training cost, ingesting vetted tax data, and employing vector databases and ontology to ensure secure, sovereign AI with multilingual capability [129-154].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
S2 details the creation of sovereign tax‑domain LLMs using LoRA, low training cost and data staying within the department; S1 similarly describes the approach.
MAJOR DISCUSSION POINT
Domain‑specific LLMs
AGREED WITH
Shri Ravi Agrawal, Suvendu Pati, Martin Wilcox
Argument 2
LoRA‑adapted sovereign LLMs enable low‑cost, secure, domain‑specific AI with multilingual support (T. Srinivasan)
EXPLANATION
He further emphasizes that LoRA adaptation allows efficient fine‑tuning of large language models for tax purposes, delivering secure, cost‑effective, and multilingual AI services for the department.
EVIDENCE
Srinivasan details the LoRA technique, its low training cost, the use of RAG plus a vector DB for retrieval and citation, and the multilingual, sovereign nature of the resulting model, highlighting its suitability for tax administration [129-154].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
S2 explains LoRA‑based fine‑tuning for tax‑specific data, multilingual capability and cost efficiency; S1 reinforces these points.
MAJOR DISCUSSION POINT
Efficient model adaptation
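The LoRA mechanics summarized above can be sketched in a few lines. This is an illustrative, pure-Python sketch of the general technique, not code from the session; all shapes, values and the `lora_forward` helper are invented for the example.

```python
# Pure-Python sketch of the LoRA idea Srinivasan describes for cheaply
# specializing a base LLM to the tax domain. Shapes are illustrative.

def lora_forward(W, A, B, x, alpha):
    """y = W @ x + (alpha/r) * B @ (A @ x): W is frozen, only A and B train."""
    r = len(A)
    Ax = [sum(a * xi for a, xi in zip(row, x)) for row in A]    # r values
    BAx = [sum(b * ax for b, ax in zip(row, Ax)) for row in B]  # d values
    Wx = [sum(w * xi for w, xi in zip(row, x)) for row in W]    # d values
    return [wx + (alpha / r) * bax for wx, bax in zip(Wx, BAx)]

# Tiny worked example (d=2, r=1): B is zero-initialized, so at the start of
# training the adapted model reproduces the frozen base model exactly.
W = [[1.0, 0.0], [0.0, 1.0]]
A = [[0.5, 0.5]]
B = [[0.0], [0.0]]
print(lora_forward(W, A, B, [2.0, 4.0], alpha=1.0))  # [2.0, 4.0]

# Why it is cheap: for an illustrative hidden size d=1024 and rank r=8, only
# the low-rank factors A and B are trained, so the trainable share is tiny.
d, r = 1024, 8
trainable, total = 2 * d * r, d * d + 2 * d * r
print(f"trainable fraction: {trainable / total:.2%}")  # ~1.5%, in line with the 1-2% figure
```

The zero-initialization of B is the standard trick that makes fine-tuning start from exactly the base model's behavior.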
Shashi Bhushan Shukla
1 argument | 137 words per minute | 1993 words | 867 seconds
Argument 1
Data‑driven “nudge” campaigns have yielded large revenue gains and improved voluntary compliance (Shashi Bhushan Shukla)
EXPLANATION
Shukla reports that targeted nudges, powered by AI, have prompted millions of taxpayers to file updated returns, leading to substantial additional revenue and greater voluntary compliance.
EVIDENCE
He cites that 1.11 crore taxpayers filed updated returns after nudges, generating over ₹8,800 crore, and that foreign asset disclosures worth ₹99,000 crore and foreign income of ₹6,500 crore were also reported, demonstrating the impact of AI-driven nudges [555-583].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
S2 describes the NUDGE initiative’s seven‑step Saksham strategy and reports over 1.11 crore taxpayers filing updated returns generating >₹8,800 crore, confirming the revenue gains; S1 mentions targeted nudges.
MAJOR DISCUSSION POINT
AI‑driven behavioral nudges
AGREED WITH
Shri Ravi Agrawal, Abhishek Kumar, Justice R. Mahadevan
Shri Ravi Agrawal
3 arguments | 121 words per minute | 1337 words | 658 seconds
Argument 1
The new, technology‑driven Income Tax Act 2025, coupled with AI accountability, underpins trust‑based compliance (Shri Ravi Agrawal)
EXPLANATION
Agrawal explains that the Income Tax Act 2025 simplifies language and procedures, creating a technology‑driven ecosystem where AI can enforce tax rules consistently, thereby enhancing trust‑based compliance.
EVIDENCE
He notes that the Act reduces interpretative ambiguity, enables algorithmic implementation, and supports a rule-driven, technology-centric tax administration that fosters tax certainty and reduced litigation [32-35].
MAJOR DISCUSSION POINT
Tech‑enabled tax legislation
AGREED WITH
Abhishek Kumar, Shashi Bhushan Shukla, Justice R. Mahadevan
Argument 2
Ethical AI requires clear accountability, human oversight, safeguards and continuous training (Shri Ravi Agrawal)
EXPLANATION
Agrawal stresses that AI deployment in enforcement must be accompanied by strong accountability frameworks, human oversight, secure data, and ongoing capacity building to ensure fairness and legality.
EVIDENCE
He outlines the need for high-quality shareable data, secure systems, clear accountability, strong safeguards, and continuous training as prerequisites for responsible AI use in law enforcement [36-44].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
S28 references EU Trustworthy AI guidelines emphasizing human‑centric, lawful AI; S29 discusses accountability and oversight mechanisms; S30 warns about risks of fully automated systems; S31 stresses built‑in data‑governance policies.
MAJOR DISCUSSION POINT
AI ethics and governance
AGREED WITH
Professor Mausam, Ram Ganesh, Justice R. Mahadevan, Avneesh Pandey
Argument 3
Relying blindly on AI without human validation can produce errors; humans must drive AI, not the reverse (Shri Ravi Agrawal)
EXPLANATION
Agrawal shares a personal anecdote about rapidly generating code with AI, emphasizing that while AI can accelerate development, human expertise must guide and validate outcomes to avoid blind reliance.
EVIDENCE
He recounts developing a functional application in five to six hours using AI-generated code, noting that despite speed, the solution required his own validation and that AI should augment, not replace, human effort [48-63].
MAJOR DISCUSSION POINT
Human‑centric AI deployment
Amandeep Dhanoa
1 argument | 59 words per minute | 1133 words | 1151 seconds
Argument 1
The symposium’s purpose is to explore AI for easier compliance, reduced disputes and trust‑based governance (Amandeep Dhanoa)
EXPLANATION
Dhanoa frames the symposium as a platform to discuss how AI can simplify taxpayer compliance, lower dispute rates, and foster trust‑based governance across the tax ecosystem.
EVIDENCE
She states that the session will explore AI for easier compliance, lower disputes, and trust-based governance, positioning industry and academia as key contributors to this agenda [78-82].
MAJOR DISCUSSION POINT
Symposium objectives
Suvendu Pati
2 arguments | 146 words per minute | 1812 words | 740 seconds
Argument 1
RBI’s seven AI sutras, sandbox framework and risk‑mitigation pillars guide responsible innovation (Suvendu Pati)
EXPLANATION
Pati outlines RBI’s AI governance framework consisting of seven guiding sutras, six pillars, and a sandbox to promote responsible AI innovation while managing risks.
EVIDENCE
He describes the seven sutras, 26 recommendations, six pillars, the AI sandbox for cross-sector data sharing, and risk-mitigation measures such as AI liability frameworks and red-team exercises [369-393].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
S32 and S33 outline RBI’s seven AI sutras, 26 recommendations and sandbox framework for responsible AI innovation.
MAJOR DISCUSSION POINT
AI governance framework
Argument 2
“MuleHunter” AI detects mule accounts with >80‑90% accuracy across banks, enabling real‑time transaction scoring (Suvendu Pati)
EXPLANATION
Pati presents MuleHunter, an AI system that identifies mule accounts using hundreds of features, achieving high accuracy and enabling real‑time scoring of transactions across multiple banks.
EVIDENCE
He notes that MuleHunter uses 857 features, delivers 80-90% accuracy, is deployed in 26 banks, and provides real-time transaction scoring to prevent fraud [398-410].
MAJOR DISCUSSION POINT
AI‑driven fraud detection in banking
AGREED WITH
Martin Wilcox, Ram Ganesh, Professor Mausam
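The real-time scoring idea can be illustrated with a minimal sketch. The feature names, weights and the `mule_risk_score` function below are entirely hypothetical; MuleHunter's actual models are not public, and a production system would learn its weights from labeled data across hundreds of features rather than hand-pick four.

```python
# Hedged sketch of real-time transaction scoring in the spirit of MuleHunter.
# Feature names and weights are invented for illustration only.
import math

WEIGHTS = {  # illustrative, hand-picked weights (a real model would learn these)
    "txn_velocity_1h": 0.8,      # transactions in the last hour
    "new_payee": 1.2,            # 1 if payee seen for the first time
    "amount_zscore": 0.6,        # amount vs. account's historical mean
    "dormant_then_active": 1.5,  # 1 if account was dormant before this burst
}
BIAS = -4.0

def mule_risk_score(features):
    """Logistic risk score in [0, 1]; higher means more mule-like."""
    z = BIAS + sum(WEIGHTS[k] * features.get(k, 0.0) for k in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-z))

suspicious = {"txn_velocity_1h": 5, "new_payee": 1,
              "amount_zscore": 2.5, "dormant_then_active": 1}
normal = {"txn_velocity_1h": 1, "new_payee": 0, "amount_zscore": 0.2}

print(round(mule_risk_score(suspicious), 3))  # high score
print(round(mule_risk_score(normal), 3))      # low score
```

A bank would compare the score against a tuned threshold to decide whether to hold a transaction for review.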
Avneesh Pandey
3 arguments | 133 words per minute | 1081 words | 485 seconds
Argument 1
SEBI democratizes AI development, building internal capacity and empowering non‑IT teams (Avneesh Pandey)
EXPLANATION
Pandey explains that SEBI has broadened AI development beyond the IT department, involving a diverse team of analysts and engineers to create AI solutions internally.
EVIDENCE
He lists team members from non-IT backgrounds, emphasizes that AI initiatives are handcrafted by these individuals, and highlights the democratization of AI within SEBI [504-513].
MAJOR DISCUSSION POINT
Internal AI capacity building
AGREED WITH
Shri Ravi Agrawal, T. Srinivasan, Suvendu Pati
Argument 2
SEBI’s tools (RIDAR, Sudarshan, Infomerge) automate compliance monitoring, fraud detection and cybersecurity audit (Avneesh Pandey)
EXPLANATION
Pandey describes three AI‑driven tools—RIDAR for advertisement compliance, Sudarshan for multimodal fraud detection, and Infomerge for investigation workflow automation—showcasing SEBI’s comprehensive AI suite.
EVIDENCE
He details RIDAR’s detection of non-disclosure in advertisements, Sudarshan’s multimodal monitoring of misleading content, and Infomerge’s end-to-end case management, including model-hallucination checks [524-549].
MAJOR DISCUSSION POINT
AI tools for regulatory enforcement
Argument 3
SEBI implements model‑hallucination checks and multi‑model verification to maintain reliable outputs (Avneesh Pandey)
EXPLANATION
Pandey notes that SEBI has built safeguards to detect and mitigate AI model hallucinations by cross‑validating outputs across multiple models.
EVIDENCE
He mentions that the system autonomously reads compliance documents, flags missing audit reports, and employs a three-architecture framework to catch hallucinations and ensure meaningful analysis [547-549].
MAJOR DISCUSSION POINT
AI reliability safeguards
AGREED WITH
Shri Ravi Agrawal, Professor Mausam, Ram Ganesh, Justice R. Mahadevan
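The multi-model verification pattern can be sketched as a simple quorum check. The `cross_verify` function and the stubbed model outputs are hypothetical illustrations of the general idea, not SEBI's implementation.

```python
# Minimal sketch of multi-model cross-verification for catching hallucinations:
# accept an extracted fact only when a majority of independent models agree.
from collections import Counter

def cross_verify(answers, quorum=2):
    """answers: outputs from independent models for the same query.
    Returns the majority answer, or None (flag for human review)."""
    value, count = Counter(answers).most_common(1)[0]
    return value if count >= quorum else None

# Stub outputs from three hypothetical models reading a compliance filing:
print(cross_verify(["audit report missing", "audit report missing", "compliant"]))
# -> agreement reached, the majority finding is kept
print(cross_verify(["A", "B", "C"]))  # -> None: no quorum, route to a human
```

The fallback to `None` is the human-in-the-loop step: disagreement between models is treated as a signal, not silently resolved.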
Justice R. Mahadevan
2 arguments | 144 words per minute | 668 words | 276 seconds
Argument 1
The vote of thanks stresses that AI is now operational but must be governed ethically (Justice R. Mahadevan)
EXPLANATION
Justice Mahadevan acknowledges that AI has moved from aspiration to operational reality and emphasizes the need for ethical governance to sustain trust.
EVIDENCE
In his vote of thanks he states that AI is no longer aspirational, it is operational, and highlights the importance of ethical governance and accountability [617-624].
MAJOR DISCUSSION POINT
Ethical operational AI
AGREED WITH
Shri Ravi Agrawal, Professor Mausam, Ram Ganesh, Avneesh Pandey
Argument 2
Emphasis on trust‑based governance reinforces responsible AI use across agencies (Justice R. Mahadevan)
EXPLANATION
He reiterates that trust‑based governance is essential for responsible AI deployment across all regulatory and enforcement bodies.
EVIDENCE
He again underscores the strategic direction for AI-enabled trust-based governance and the need for ethical application across agencies [617-624].
MAJOR DISCUSSION POINT
Trust‑based AI governance
Ramesh Revuru
2 arguments | 145 words per minute | 421 words | 173 seconds
Argument 1
“Bharatverse” offers a pre‑built multi‑agent stack (foundational, data, knowledge, orchestration, consumption layers) for CBDT (Ramesh Revuru)
EXPLANATION
Revuru introduces Bharatverse, an Indianized version of Blueverse, providing a ready‑made multi‑agent platform with five pre‑built layers to accelerate AI deployment for the tax department.
EVIDENCE
He describes Blueverse as an agentic platform with foundational models, data, knowledge, orchestration, and consumption layers, and announces the Indianized Bharatverse tailored for CBDT [110-117].
MAJOR DISCUSSION POINT
Pre‑built AI platform
Argument 2
Deterministic “right‑action” agents guarantee reliable outcomes for tax administration (Ramesh Revuru)
EXPLANATION
Revuru argues that, unlike probabilistic generative AI, deterministic right‑action agents ensure consistent, reliable decisions in tax administration, reducing uncertainty.
EVIDENCE
He explains that generative AI is probabilistic, but for CBDT deterministic right-action agents are needed to guarantee correct outcomes in every scenario [118-124].
MAJOR DISCUSSION POINT
Deterministic AI for compliance
Martin Wilcox
2 arguments, 181 words per minute, 754 words, 249 seconds
Argument 1
“Bring‑Your‑Own‑Model” and in‑warehouse inference accelerate multimodal AI at scale, delivering 25× faster inference (Martin Wilcox)
EXPLANATION
Wilcox describes Teradata’s BYOM capability that allows models trained elsewhere to be deployed directly in the data warehouse, achieving up to 25‑fold speed improvements for inference at scale.
EVIDENCE
He presents a case where income-estimation models for a Brazilian credit union ran 25 times faster using in-warehouse inference, enabling hourly rather than daily scoring, and highlights multimodal data use [323-346].
MAJOR DISCUSSION POINT
Scalable AI inference
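The in‑warehouse inference idea Wilcox describes can be sketched without any Teradata‑specific API: register the scoring function inside the database engine so inference runs as a single SQL pass over the data instead of exporting rows to an external service. The table, columns, model coefficients, and the `score_income` function below are all invented for illustration; a production system would use the warehouse's native BYOM support.

```python
# Sketch of "bring the model to the data": the scoring function is
# registered inside the database, so inference runs where the data lives.
import math
import sqlite3

# Coefficients of a toy, pre-trained logistic income-estimation model.
WEIGHTS = {"age": 0.03, "balance": 0.0001}
BIAS = -1.5

def score_income(age, balance):
    """Logistic score computed next to the data."""
    z = BIAS + WEIGHTS["age"] * age + WEIGHTS["balance"] * balance
    return 1.0 / (1.0 + math.exp(-z))

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id INTEGER, age REAL, balance REAL)")
conn.executemany("INSERT INTO customers VALUES (?, ?, ?)",
                 [(1, 35, 12000), (2, 52, 300), (3, 28, 45000)])

# In-database inference: one SQL pass, no data leaves the store.
conn.create_function("score_income", 2, score_income)
rows = conn.execute(
    "SELECT id, score_income(age, balance) FROM customers ORDER BY id"
).fetchall()
for cid, score in rows:
    print(cid, round(score, 3))
```

The design point is that only the (small) model travels, never the (large) data, which is what makes hourly rather than daily scoring feasible at scale.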
Argument 2
Graph‑based risk analytics uncover hidden financial‑crime networks at India‑scale (Martin Wilcox)
EXPLANATION
Wilcox points out that graph analytics, though computationally intensive, are essential for detecting complex networks of bad actors, and must be performed within the data warehouse to avoid sampling errors.
EVIDENCE
He notes that graph analytics are an O(N²) problem, requiring scalable systems that bring algorithms to the warehouse to prevent missing bad actors [318-322].
MAJOR DISCUSSION POINT
Graph analytics for fraud detection
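The O(N²) point can be made concrete with a toy sketch (all data and field names invented): comparing every pair of accounts directly is quadratic, so the sketch first groups accounts by a shared identifier and then merges them into connected components with union‑find, mimicking how graph analytics surface candidate fraud rings without sampling.

```python
# Toy graph-based risk analytics: accounts sharing an identifier (here,
# a phone number) are linked, and connected components expose rings.
from collections import defaultdict

accounts = [
    ("A1", "555-0101"), ("A2", "555-0101"),   # share a phone -> linked
    ("A3", "555-0199"), ("A4", "555-0199"), ("A5", "555-0199"),
    ("A6", "555-0042"),                        # isolated account
]

# Group by shared attribute instead of comparing every pair directly.
by_phone = defaultdict(list)
for acct, phone in accounts:
    by_phone[phone].append(acct)

# Union-find merges linked accounts into connected components.
parent = {acct: acct for acct, _ in accounts}

def find(x):
    while parent[x] != x:
        parent[x] = parent[parent[x]]  # path compression
        x = parent[x]
    return x

def union(a, b):
    parent[find(a)] = find(b)

for group in by_phone.values():
    for other in group[1:]:
        union(group[0], other)

rings = defaultdict(set)
for acct, _ in accounts:
    rings[find(acct)].add(acct)

# Flag only components with more than one account.
suspicious = sorted(sorted(r) for r in rings.values() if len(r) > 1)
print(suspicious)
```

Sampling the accounts would risk dropping exactly the edge that connects a ring, which is the argument for running the full graph computation inside the warehouse.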
Professor Mausam
2 arguments, 190 words per minute, 2446 words, 769 seconds
Argument 1
AI must guard against bias, over‑triggering and must retain a human‑in‑the‑loop to preserve trust (Professor Mausam)
EXPLANATION
Mausam warns that unchecked AI can produce biased outcomes and generate excessive alerts, eroding public trust, and stresses the necessity of human oversight and safeguards.
EVIDENCE
He discusses algorithmic bias, over-triggering leading to alert fatigue, the need for human-in-the-loop to maintain trust, and cites examples of bias against certain groups in judicial AI applications [291-304].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
S28 and S29 provide guidance on mitigating algorithmic bias and ensuring human‑in‑the‑loop oversight; S30 highlights the need for human verification in critical AI outputs; S31 underscores data‑governance for trustworthy AI.
MAJOR DISCUSSION POINT
Bias mitigation and human oversight
AGREED WITH
Shri Ravi Agrawal, Ram Ganesh, Justice R. Mahadevan, Avneesh Pandey
Argument 2
AI can enhance policing through facial recognition, satellite imagery, traffic monitoring and anomaly detection (Professor Mausam)
EXPLANATION
Mausam outlines multiple law‑enforcement use cases for AI, including CCTV‑based crime reduction, satellite monitoring, traffic safety, and anomaly detection, illustrating the breadth of AI’s potential in public safety.
EVIDENCE
He cites examples such as 27% crime reduction in Surat via CCTV and face recognition, satellite imagery for port monitoring, traffic anomaly detection, and facial recognition for missing persons, among others [210-254].
MAJOR DISCUSSION POINT
AI‑enabled public safety
AGREED WITH
Martin Wilcox, Suvendu Pati, Ram Ganesh
Ram Ganesh
2 arguments, 158 words per minute, 812 words, 306 seconds
Argument 1
An AI co‑pilot ingests FIRs, generates compliant investigative paths and integrates open‑source and telecom data (Ram Ganesh)
EXPLANATION
Ganesh describes a co‑pilot system that automatically processes FIRs, creates investigation workflows aligned with legal standards, and pulls data from telecom and open‑source platforms to assist police investigations.
EVIDENCE
He explains that the co-pilot ingests FIRs, generates SOP-compliant investigative paths, sends routine legal requests, integrates telecom and open-source data, and has been used to train officers and win governance awards [464-483].
MAJOR DISCUSSION POINT
AI‑assisted investigative workflow
AGREED WITH
Martin Wilcox, Suvendu Pati, Professor Mausam
Argument 2
Human‑in‑the‑loop designs mitigate AI hallucinations and ensure accountability in investigations (Ram Ganesh)
EXPLANATION
Ganesh emphasizes that AI outputs must be reviewed by humans to prevent hallucinations and maintain accountability, especially in critical investigative contexts.
EVIDENCE
He lists large language models, graph neural networks, agentic AI, and big-data analytics, stressing that human-in-the-loop architecture ensures accountability and prevents AI errors [489-497].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
S29 stresses keeping humanity in the loop to prevent hallucinations; S30 discusses risks of fully automated decisions; S31 advocates for embedded governance to ensure accountability.
MAJOR DISCUSSION POINT
Human oversight in AI investigations
AGREED WITH
Shri Ravi Agrawal, Professor Mausam, Justice R. Mahadevan, Avneesh Pandey
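The human‑in‑the‑loop checkpoint these speakers converge on can be sketched minimally. The confidence threshold, case fields, and routing rule below are assumptions for illustration, not taken from the actual co‑pilot: model findings are auto‑actioned only when confidence is high and the action is low impact; everything else queues for a human reviewer.

```python
# Sketch of a human-in-the-loop gate for AI-assisted investigations.
from dataclasses import dataclass

AUTO_CONFIDENCE = 0.95  # assumed policy threshold

@dataclass
class ModelFinding:
    case_id: str
    action: str
    confidence: float
    high_impact: bool  # e.g. an account freeze, never auto-actioned

def route(finding: ModelFinding) -> str:
    """Return 'auto' or 'human-review' for a model finding."""
    if finding.high_impact or finding.confidence < AUTO_CONFIDENCE:
        return "human-review"
    return "auto"

findings = [
    ModelFinding("FIR-101", "attach call-detail records", 0.98, False),
    ModelFinding("FIR-102", "freeze bank account", 0.99, True),
    ModelFinding("FIR-103", "link suspect profile", 0.71, False),
]
for f in findings:
    print(f.case_id, "->", route(f))
```

Note that the high‑impact flag overrides confidence entirely: even a 0.99 score routes to a human, which is how such designs keep accountability with an officer rather than the model.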
Agreements
Agreement Points
AI-driven enforcement and nudges markedly improve tax compliance, reduce disputes and generate substantial additional revenue
Speakers: Shri Ravi Agrawal, Abhishek Kumar, Shashi Bhushan Shukla, Justice R. Mahadevan
The new, technology‑driven Income Tax Act 2025, coupled with AI accountability, underpins trust‑based compliance (Shri Ravi Agrawal)
AI can streamline the taxpayer lifecycle, tag legal issues, predict litigation vulnerability and boost compliance (Abhishek Kumar)
Data‑driven “nudge” campaigns have yielded large revenue gains and improved voluntary compliance (Shashi Bhushan Shukla)
The vote of thanks stresses that AI is now operational but must be governed ethically (Justice R. Mahadevan)
All four speakers highlight that AI tools, whether through the new tax act, lifecycle automation, behavioural nudges, or overall operational deployment, lead to easier compliance, fewer disputes and significant revenue gains [32-35][70-73][88-97][555-583][617-624].
POLICY CONTEXT (KNOWLEDGE BASE)
Evidence from recent tax administration pilots shows AI-driven nudges increased filings and added over 8,800 crore in revenue, supporting the view that AI can boost compliance and reduce disputes [S48]; broader AI-driven enforcement frameworks are also highlighted as improving governance [S43].
Human oversight and ethical safeguards are essential for trustworthy AI deployment
Speakers: Shri Ravi Agrawal, Professor Mausam, Ram Ganesh, Justice R. Mahadevan, Avneesh Pandey
Ethical AI requires clear accountability, human oversight, safeguards and continuous training (Shri Ravi Agrawal)
AI must guard against bias, over‑triggering and must retain a human‑in‑the‑loop to preserve trust (Professor Mausam)
Human‑in‑the‑loop designs mitigate AI hallucinations and ensure accountability in investigations (Ram Ganesh)
The vote of thanks stresses that AI is now operational but must be governed ethically (Justice R. Mahadevan)
SEBI implements model‑hallucination checks and multi‑model verification to maintain reliable outputs (Avneesh Pandey)
These speakers converge on the need for human-in-the-loop, accountability mechanisms and bias mitigation to ensure AI remains trustworthy and ethical [36-44][291-304][489-497][617-624][547-549].
POLICY CONTEXT (KNOWLEDGE BASE)
The EU Ethics Guidelines for Trustworthy AI call for a human-centric approach, emphasizing human-in-the-loop oversight and ethical safeguards, which aligns with this agreement [S37]; similar recommendations appear in multiple governance discussions stressing human oversight [S38][S39][S40].
Building internal AI capacity and expertise is a priority for agencies
Speakers: Shri Ravi Agrawal, T. Srinivasan, Avneesh Pandey, Suvendu Pati
We need to be conscious of the pluses and the pitfalls when we are adopting AI (Shri Ravi Agrawal)
Sovereign, domain‑specific language models (SLMs) with LoRA adaptation provide secure, accurate tax‑service AI (T. Srinivasan)
SEBI democratizes AI development, building internal capacity and empowering non‑IT teams (Avneesh Pandey)
RBI’s AI sandbox and capacity‑building measures support responsible innovation (Suvendu Pati)
All four emphasize developing AI skills, internal teams and capacity-building programmes to sustain AI initiatives [45-52][129-154][504-513][389-393].
POLICY CONTEXT (KNOWLEDGE BASE)
Public sector capacity-building is recognised in policy guidance that internal learning and staff development should be funded from regular budgets to sustain AI expertise [S44].
Secure, sovereign data handling and robust data governance are critical for AI applications
Speakers: Shri Ravi Agrawal, T. Srinivasan, Suvendu Pati, Martin Wilcox
For law enforcement, this means we can strengthen how we prevent, detect, and respond, but only if we build the right preparedness and capacity building through high‑quality shareable data, secure systems, clear accountability, strong safeguards (Shri Ravi Agrawal)
Sovereign, domain‑specific language models (SLMs) with LoRA adaptation provide secure, accurate tax‑service AI (T. Srinivasan)
AI sandbox for cross‑sectoral data sharing while preserving privacy (Suvendu Pati)
Graph analytics must be performed inside the data warehouse to avoid data leakage (Martin Wilcox)
These speakers agree that AI must operate on data that remains secure, sovereign and governed by clear policies, whether through shareable high-quality data, sovereign LLMs, sandbox environments or in-warehouse processing [36-44][129-154][389-393][318-322].
POLICY CONTEXT (KNOWLEDGE BASE)
Data sovereignty and robust governance are central to international policy debates, with calls to balance free data flow against security and localization requirements [S50][S51][S52][S57].
AI is a powerful tool for fraud detection, risk analytics and proactive enforcement across sectors
Speakers: Martin Wilcox, Suvendu Pati, Ram Ganesh, Professor Mausam
Graph‑based risk analytics uncover hidden financial‑crime networks at India scale (Martin Wilcox)
“MuleHunter” AI detects mule accounts with >80‑90% accuracy across banks, enabling real‑time transaction scoring (Suvendu Pati)
An AI co‑pilot ingests FIRs, generates compliant investigative paths and integrates open‑source and telecom data (Ram Ganesh)
AI can enhance policing through facial recognition, satellite imagery, traffic monitoring and anomaly detection (Professor Mausam)
All four illustrate AI applications that identify fraudulent behaviour, map criminal networks and enable proactive risk-based actions, whether in banking, tax, policing or broader public safety [318-322][398-410][464-483][210-254].
POLICY CONTEXT (KNOWLEDGE BASE)
AI-enabled fraud detection and risk analytics are promoted in AI-driven enforcement strategies that leverage graph analytics to uncover hidden connections, underscoring its role across sectors [S43].
Trust‑based governance is essential for AI adoption in public administration
Speakers: Shri Ravi Agrawal, Suvendu Pati, Justice R. Mahadevan
The new Income Tax Act 2025 creates a technology‑driven ecosystem that fosters trust‑based compliance (Shri Ravi Agrawal)
Trust in the system is a foundational principle for AI adoption (Suvendu Pati)
The vote of thanks stresses that AI is operational but must be governed ethically to sustain trust (Justice R. Mahadevan)
These speakers underline that building trust, through transparent technology, principled frameworks and ethical governance, is pivotal for AI’s success in governance [32-35][380-383][617-624].
POLICY CONTEXT (KNOWLEDGE BASE)
Trust-based governance is highlighted in regulatory discussions that favour targeted trust and safety interventions over blanket legislation, emphasizing trust as a driver of innovation [S45].
Similar Viewpoints
Both emphasize that AI must be embedded within a trustworthy, ethical framework to achieve effective tax compliance [32-35][617-624].
Speakers: Shri Ravi Agrawal, Justice R. Mahadevan
The new, technology‑driven Income Tax Act 2025, coupled with AI accountability, underpins trust‑based compliance (Shri Ravi Agrawal)
The vote of thanks stresses that AI is now operational but must be governed ethically (Justice R. Mahadevan)
Both stress the importance of keeping data sovereign and secure, either via on‑premise models or sandbox environments, to enable responsible AI use [129-154][389-393].
Speakers: T. Srinivasan, Suvendu Pati
Sovereign, domain‑specific language models (SLMs) with LoRA adaptation provide secure, accurate tax‑service AI (T. Srinivasan)
AI sandbox for cross‑sectoral data sharing while preserving privacy (Suvendu Pati)
Both present AI‑driven graph and machine‑learning analytics as essential for large‑scale fraud detection in financial systems [318-322][398-410].
Speakers: Martin Wilcox, Suvendu Pati
Graph‑based risk analytics uncover hidden financial‑crime networks at India scale (Martin Wilcox)
“MuleHunter” AI detects mule accounts with >80‑90% accuracy across banks, enabling real‑time transaction scoring (Suvendu Pati)
Both underline that human oversight is indispensable to prevent bias, errors and loss of trust in AI‑assisted enforcement [291-304][489-497].
Speakers: Professor Mausam, Ram Ganesh
AI must guard against bias, over‑triggering and must retain a human‑in‑the‑loop to preserve trust (Professor Mausam)
Human‑in‑the‑loop designs mitigate AI hallucinations and ensure accountability in investigations (Ram Ganesh)
Unexpected Consensus
AI is positioned as a tool for both enforcement efficiency and broader public welfare/happiness
Speakers: Shri Ravi Agrawal, Professor Mausam
Good evening… Welfare for All, Happiness for All… (Shri Ravi Agrawal)
AI can enhance policing through facial recognition, satellite imagery, traffic monitoring and anomaly detection (Professor Mausam)
While Ravi Agrawal frames AI within the tax administration’s welfare-centric agenda, Professor Mausam extends AI’s role to public safety and societal well-being, showing an unexpected cross-sector consensus that AI should serve broader welfare goals beyond pure enforcement [26-29][210-254].
POLICY CONTEXT (KNOWLEDGE BASE)
Summits on AI for public good frame AI as a means to achieve welfare and happiness for all, linking enforcement efficiency with broader societal wellbeing [S60][S61].
Overall Assessment

The symposium revealed strong consensus among policymakers, technologists and regulators that AI can dramatically improve tax compliance, fraud detection and public welfare, provided it is deployed with robust human oversight, secure data governance and capacity‑building measures. Ethical, trust‑based frameworks and capacity development were repeatedly highlighted as prerequisites.

High consensus – the convergence across diverse speakers on the benefits, safeguards and governance of AI suggests a unified strategic direction for AI‑enabled tax administration and law‑enforcement, paving the way for coordinated policy implementation and cross‑agency collaboration.

Differences
Different Viewpoints
Deterministic versus probabilistic AI for tax administration
Speakers: Ramesh Revuru, Professor Mausam, Shri Ravi Agrawal
“Deterministic ‘right‑action’ agents guarantee reliable outcomes for tax administration (Ramesh Revuru)”
“AI must guard against bias, over‑triggering and must retain a human‑in‑the‑loop to preserve trust (Professor Mausam)”
“Ethical AI requires clear accountability, human oversight, safeguards and continuous training (Shri Ravi Agrawal)”
Revuru argues that generative AI’s probabilistic nature is unsuitable for the CBDT and insists on deterministic ‘right-action’ agents that guarantee correct outcomes in every scenario [118-124]. Mausam and Agrawal accept that AI models are probabilistic but stress that human oversight, accountability and safeguards can mitigate risks, allowing such models to be used for enforcement and compliance [291-304][36-44]. This creates a clear split between a deterministic-only stance and a risk-managed probabilistic stance.
POLICY CONTEXT (KNOWLEDGE BASE)
Panel discussions have shifted from seeking perfect deterministic systems to accepting probabilistic AI while retaining human agency, reflecting a policy trend toward pragmatic guardrails [S46]; similar viewpoints note the mismatch between probabilistic AI behavior and deterministic compliance needs [S47].
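Revuru's deterministic stance can be illustrated with a toy sketch; the scenario and action names below are hypothetical, not taken from the CBDT platform. The defining properties are that each known scenario maps to exactly one action, the same input always yields the same output, and unknown inputs escalate to an officer rather than letting the system improvise an answer.

```python
# Toy deterministic "right-action" agent: a fixed scenario->action table.
RIGHT_ACTIONS = {
    ("refund_claim", "verified"): "issue_refund",
    ("refund_claim", "mismatch"): "raise_query",
    ("high_value_txn", "unreported"): "flag_for_scrutiny",
}

def deterministic_agent(scenario, status):
    """Same input always yields the same action; unknowns escalate."""
    key = (scenario, status)
    if key not in RIGHT_ACTIONS:
        return "escalate_to_officer"  # never improvise an answer
    return RIGHT_ACTIONS[key]

# Determinism check: repeated calls never diverge.
outputs = {deterministic_agent("refund_claim", "verified") for _ in range(100)}
print(outputs)  # a single action, every time
```

By contrast, a generative model sampled at nonzero temperature can return different answers to the same query, which is exactly the behaviour the risk‑managed probabilistic camp addresses with oversight rather than by forbidding it.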
Centralised data repositories versus data‑locality and privacy concerns
Speakers: Professor Mausam, Suvendu Pati
“It is important that we gather data in a centralized repository (Professor Mausam)”
“MuleHunter data stays within each bank; only aggregated insights are shared centrally, data does not go out of the bank (Suvendu Pati)”
Mausam advocates a single, centralized data store to enable AI-driven intelligence across agencies, while acknowledging civil-liberty concerns [304-306]. Pati emphasizes that raw data must remain inside each bank, with only aggregated features shared, to protect privacy and institutional boundaries [406-408]. The tension lies between the desire for a unified data pool and the need to keep data siloed for security and privacy.
POLICY CONTEXT (KNOWLEDGE BASE)
The tension between centralized data warehouses and data-locality is reflected in ongoing policy debates on cross-border data flows, localization, and privacy, with multiple reports urging a careful balance [S50][S51][S52][S53].
Model development and deployment strategy – sovereign in‑house LLMs versus portable BYOM in‑warehouse inference
Speakers: T. Srinivasan, Martin Wilcox
“Sovereign, domain‑specific language models (SLMs) with LoRA adaptation provide secure, accurate tax‑service AI (T. Srinivasan)”
“‘Bring‑Your‑Own‑Model’ and in‑warehouse inference accelerate multimodal AI at scale, delivering 25× faster inference (Martin Wilcox)”
Srinivasan proposes building a sovereign, tax-domain LLM that stays within the department, using LoRA to fine-tune with minimal cost and ensuring data never leaves the agency [129-154]. Wilcox promotes a model-agnostic BYOM approach that brings externally trained models into the data warehouse for high-performance inference, emphasizing scalability and multimodality [323-346]. Both aim for efficient AI but diverge on whether models should be internally owned or externally imported.
POLICY CONTEXT (KNOWLEDGE BASE)
Debates on sovereign versus portable AI models cite arguments for domain-specific in-house LLMs with full data control versus scalable BYOM approaches in shared warehouses, mirroring positions expressed in AI-driven enforcement forums [S43] and sovereign AI defence discussions [S58].
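The LoRA technique Srinivasan cites can be sketched numerically with toy matrices (dimensions and values invented): the frozen base weight W is left untouched, and only two small matrices B (d×r) and A (r×k) are trained, giving the adapted weight W' = W + (alpha/r)·BA. The trainable parameter count then scales with the rank r instead of the full d×k weight, which is what makes in‑house adaptation low cost.

```python
# Numerical sketch of LoRA (low-rank adaptation) with toy matrices.

def matmul(X, Y):
    """Plain-Python matrix product for small lists-of-lists."""
    return [[sum(X[i][t] * Y[t][j] for t in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

d, k, r, alpha = 4, 4, 1, 1.0          # rank r is much smaller than d, k
W = [[1.0 if i == j else 0.0 for j in range(k)] for i in range(d)]  # frozen
B = [[0.25], [0.0], [0.0], [0.5]]      # trainable, d x r
A = [[0.5, 0.0, 0.25, 0.0]]            # trainable, r x k

delta = matmul(B, A)                   # low-rank update, d x k
W_adapted = [[W[i][j] + (alpha / r) * delta[i][j] for j in range(k)]
             for i in range(d)]

# Trainable parameter count: d*r + r*k instead of d*k.
print(d * r + r * k, "trainable vs", d * k, "full")
print(W_adapted[0])
```

At realistic sizes (d and k in the thousands, r around 8 to 64) the same arithmetic reduces trainable parameters by orders of magnitude while the base model, and hence the data it encodes, never leaves the agency.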
Enforcement‑centric AI versus nudges‑centric AI for taxpayer compliance
Speakers: Shri Ravi Agrawal, Shashi Bhushan Shukla
“AI has the potential to transform every sector… For law enforcement, this means we can strengthen how we prevent, detect, and respond…” (Shri Ravi Agrawal)
“Data‑driven ‘nudge’ campaigns have yielded large revenue gains and improved voluntary compliance (Shashi Bhushan Shukla)”
Agrawal focuses on AI-driven enforcement tools (risk scoring, litigation risk assessment, and rule-based detection) to reduce disputes and increase compliance [32-35][36-44]. Shukla highlights AI-enabled behavioural nudges that inform taxpayers and encourage voluntary compliance, reporting substantial revenue gains from such nudges [555-583]. Both seek higher compliance but differ on whether a coercive enforcement model or a persuasive nudging model should be primary.
POLICY CONTEXT (KNOWLEDGE BASE)
Evidence from tax nudging pilots demonstrates the impact of behavioral nudges, while enforcement-centric AI is promoted for graph-based fraud detection, illustrating the trade-off between enforcement and nudging strategies [S48][S43].
Unexpected Differences
Deterministic‑only AI stance versus acceptance of probabilistic models
Speakers: Ramesh Revuru, Professor Mausam, Shri Ravi Agrawal
“Deterministic ‘right‑action’ agents guarantee reliable outcomes for tax administration (Ramesh Revuru)”
“AI must guard against bias, over‑triggering and must retain a human‑in‑the‑loop to preserve trust (Professor Mausam)”
“Ethical AI requires clear accountability, human oversight, safeguards and continuous training (Shri Ravi Agrawal)”
Revuru’s insistence on fully deterministic agents is unexpected in a forum where most participants accept probabilistic AI as long as safeguards are in place. This creates a rare clash over the fundamental nature of AI acceptable for public‑sector use.
POLICY CONTEXT (KNOWLEDGE BASE)
Discussions emphasize moving from a deterministic-only mindset toward accepting probabilistic AI with human oversight, highlighting the need for guardrails that accommodate uncertainty [S46][S47].
Sovereign in‑house LLMs versus portable BYOM models
Speakers: T. Srinivasan, Martin Wilcox
“Sovereign, domain‑specific language models (SLMs) with LoRA adaptation provide secure, accurate tax‑service AI (T. Srinivasan)”
“‘Bring‑Your‑Own‑Model’ and in‑warehouse inference accelerate multimodal AI at scale, delivering 25× faster inference (Martin Wilcox)”
The contrast between building a closed, department‑owned LLM and importing external models for in‑warehouse inference was not anticipated given the shared goal of scalable AI. It reveals differing philosophies on model ownership, data sovereignty and ecosystem openness.
POLICY CONTEXT (KNOWLEDGE BASE)
The sovereign versus portable LLM debate is reflected in calls for domain-specific, fully controlled models contrasted with BYOM solutions that leverage shared inference infrastructure, as outlined in AI-driven enforcement and sovereign AI policy dialogues [S43][S58].
Overall Assessment

The symposium displayed broad consensus that AI is essential for modern tax administration and law‑enforcement, yet substantive disagreements emerged around the technical nature of AI (deterministic vs probabilistic), data architecture (centralised vs data‑local), model strategy (in‑house sovereign LLMs vs BYOM), and the preferred compliance approach (enforcement‑centric risk scoring vs voluntary nudges).

Moderate to high. While participants share overarching goals of improved compliance, revenue generation and ethical AI, the divergent views on core technical and governance choices could affect policy alignment, implementation timelines and inter‑agency coordination. Resolving these differences will be crucial for coherent, nation‑wide AI deployment in the tax and enforcement ecosystem.

Partial Agreements
All speakers agree that AI should be deployed to improve tax compliance, reduce fraud and increase revenue. However, they diverge on the primary mechanism: Agrawal and Wilcox stress risk‑based scoring and graph analytics for enforcement; Shukla emphasizes behavioural nudges; Pati focuses on a specific fraud‑detection tool (MuleHunter). The shared goal of better compliance is clear, but the pathways differ [32-35][36-44][555-583][398-410][318-322].
Speakers: Shri Ravi Agrawal, Shashi Bhushan Shukla, Suvendu Pati, Martin Wilcox
“AI has the potential to transform every sector…” (Shri Ravi Agrawal)
“Data‑driven ‘nudge’ campaigns have yielded large revenue gains…” (Shashi Bhushan Shukla)
“MuleHunter AI detects mule accounts with >80‑90% accuracy…” (Suvendu Pati)
“Graph‑based risk analytics uncover hidden financial‑crime networks at India scale…” (Martin Wilcox)
All three stress the need for capacity building and responsible AI deployment within their organisations. Agrawal calls for accountability and training; Srinivasan proposes sovereign models to keep data secure; Pandey describes democratising AI across non‑IT staff. They share the objective of building internal AI competence but propose different routes—policy‑driven safeguards, technical model sovereignty, and organisational democratisation.
Speakers: Shri Ravi Agrawal, T. Srinivasan, Avneesh Pandey
“Ethical AI requires clear accountability, human oversight, safeguards and continuous training (Shri Ravi Agrawal)”
“Sovereign, domain‑specific language models (SLMs) with LoRA adaptation provide secure, accurate tax‑service AI (T. Srinivasan)”
“SEBI democratizes AI development, building internal capacity and empowering non‑IT teams (Avneesh Pandey)”
Takeaways
Key takeaways
AI is moving from aspirational to operational across tax administration and law‑enforcement agencies.
Domain‑specific, sovereign language models (SLMs) with LoRA adaptation can provide secure, multilingual, low‑cost AI for tax services.
AI‑driven “nudge” campaigns have generated significant additional revenue and improved voluntary compliance.
The new Income Tax Act 2025 creates a technology‑driven, rule‑based ecosystem that reduces interpretative ambiguity and supports AI integration.
Ethical AI governance (human‑in‑the‑loop, accountability, bias mitigation, and continuous training) is essential for trust‑based compliance.
RBI’s seven AI sutras, sandbox framework and risk‑mitigation pillars provide a national blueprint for responsible AI innovation.
Multimodal analytics, graph‑based risk scoring, and in‑warehouse model inference (e.g., Teradata’s BYOM) enable scalable detection of financial crime.
Practical platforms such as Bharatverse (pre‑built multi‑agent stack) and MuleHunter demonstrate concrete AI applications in tax fraud detection and banking.
Collaboration across agencies (CBDT, RBI, SEBI, police, cyber‑security) and with industry/academia is critical for data sharing, capacity building, and unified AI standards.
Resolutions and action items
Scale up Project Insight 2.0 and related AI modules (risk scoring, conversational assistants, litigation‑risk prediction) across the Income Tax Department.
Deploy the sovereign SLM architecture with LoRA adaptation for tax‑specific AI services, ensuring multilingual support and data sovereignty.
Expand the “Nudge” (Saksham) framework to cover additional taxpayer segments and integrate real‑time prompts at filing.
Operationalise RBI’s AI sandbox and adopt the seven AI sutras as the governing framework for all financial‑sector AI projects.
Continue development and rollout of MuleHunter across more banks, moving from post‑transaction detection to real‑time transaction scoring.
Implement SEBI’s democratised AI development model, encouraging non‑IT staff to build and maintain AI tools (RIDAR, Sudarshan, Infomerge).
Establish cross‑agency data repositories and standardised APIs to enable unified intelligence (as advocated by Professor Mausam).
Introduce systematic human‑in‑the‑loop validation checkpoints for all AI‑driven enforcement tools to mitigate hallucinations and bias.
Create a joint inter‑agency task‑force to monitor AI misuse (synthetic identities, deepfakes) and share mitigation strategies internationally.
Unresolved issues
How to systematically detect and eliminate algorithmic bias in AI models used for enforcement and compliance.
Mechanisms for safeguarding civil liberties and privacy when aggregating multimodal data (visual, speech, financial) across agencies.
Standardised protocols for data sharing between disparate government bodies while maintaining data sovereignty.
Scalable training programmes to up‑skill large numbers of officials across tax, police, and regulatory agencies.
Defining clear liability and accountability frameworks for AI‑generated decisions that may lead to adverse outcomes.
Ensuring AI systems remain robust against adversarial attacks and synthetic‑identity fraud beyond current pilot phases.
Suggested compromises
Adopt a phased AI rollout with sandbox testing before full production deployment, balancing innovation with risk mitigation.
Maintain deterministic “right‑action” agents for critical tax decisions while allowing probabilistic models for exploratory analytics.
Combine AI automation with human oversight (human‑in‑the‑loop) to retain accountability and public trust.
Use a hybrid due‑diligence approach for mule‑account detection: AI flags high‑risk cases, human investigators confirm before action.
Implement multi‑model verification (as SEBI does) to counteract hallucinations and ensure consistent outputs across AI tools.
Thought Provoking Comments
I built an app in five to six hours using AI code generation – a task that would have taken months. But I cannot blindly rely on it; the human must drive the AI rather than the AI driving the human.
Illustrates the practical power of generative AI while emphasizing the critical need for human oversight, setting a realistic tone for AI adoption in governance.
Shifted the discussion from speculative enthusiasm to a balanced view of AI capabilities and responsibilities, prompting subsequent speakers to address governance, accountability, and the human‑in‑the‑loop principle.
Speaker: Shri Ravi Agrawal
In the context of CBDT we cannot have something probabilistic. We need deterministic ‘right action’ – a platform that guarantees the correct outcome in every scenario.
Introduces the concept of deterministic AI for tax enforcement, challenging the common perception that AI must be probabilistic and highlighting the need for certainty in public administration.
Led to deeper technical discussions about building sovereign LLMs (by T. Srinivasan) and reinforced the importance of reliability, influencing the audience to consider stricter validation and deterministic design.
Speaker: Shri Ramesh Revuru
If AI starts to reach out to citizens directly and makes mistakes, we will lose trust. Over‑triggering leads to alert fatigue, and algorithmic bias can devastate a diverse society. Human‑in‑the‑loop is essential.
Raises ethical and practical risks of AI deployment—bias, over‑alerting, loss of public trust—providing a cautionary counterbalance to the optimism of earlier speakers.
Prompted other participants (e.g., Ram Ganesh, Avneesh Pandey) to stress human oversight and bias mitigation in their solutions, and shaped the concluding remarks to emphasize responsible AI.
Speaker: Professor Mausam
The RBI’s AI governance sutras have been adopted by the Government of India as the national AI principles, and our MuleHunter model now achieves 80‑90% accuracy, moving from rule‑based to AI‑driven detection.
Shows a concrete policy outcome—national AI principles—and demonstrates measurable success of AI in financial crime detection, linking governance with technical results.
Validated the practical impact of AI in enforcement, encouraging other agencies (e.g., SEBI, police) to adopt similar frameworks and reinforcing the theme of cross‑sectoral collaboration.
Speaker: Suvendu Pati
Our ‘Bring Your Own Model’ capability lets us import models trained anywhere and run inference 25 times faster on the data warehouse, turning model training into real‑time production value.
Highlights a shift from model training hype to production‑oriented AI, emphasizing speed, scalability, and the importance of inference at scale for enforcement.
Steered the conversation toward operational deployment challenges (e.g., graph analytics O(N²) mentioned later) and underscored the need for infrastructure that supports high‑throughput inference.
Speaker: Martin Wilcox
Our AI co‑pilot ingests the FIR, generates a compliant investigative path, pulls telecom and open‑source data, and automates routine legal requests, reducing investigation time and improving consistency.
Provides a concrete end‑to‑end AI workflow for police investigations, illustrating how AI can augment procedural compliance and operational efficiency.
Expanded the discussion from tax‑focused AI to broader law‑enforcement applications, reinforcing the multi‑modal data theme and prompting other speakers to reference similar integration (e.g., satellite imagery, multimodal analytics).
Speaker: Ram Ganesh
We have democratized AI development at SEBI – tools like RIDAR, Sudarshan, and Infomerge are built by analysts, not just the IT department, and we use model‑hallucination checks to ensure reliability.
Shows organizational cultural change—empowering domain experts to create AI solutions—and introduces technical safeguards against model errors, addressing concerns raised earlier about bias and trust.
Inspired other agencies to consider internal capacity building and highlighted the importance of cross‑functional AI teams, influencing the later emphasis on training and human‑in‑the‑loop.
Speaker: Avneesh Pandey
Our Nudge initiative, using a seven‑step Saksham strategy, has led to 1.57 lakh taxpayers disclosing foreign assets worth ₹99,000 crore and recovered ₹6,540 crore in tax, demonstrating AI‑driven behavioral change at scale.
Provides quantifiable evidence of AI’s impact on compliance and revenue, moving the conversation from theory to measurable outcomes.
Reinforced the narrative that AI can improve voluntary compliance, prompting the final vote‑of‑thanks to highlight these results as a success story and setting a benchmark for other agencies.
Speaker: Shashi Bhushan Shukla
Overall Assessment

The discussion was shaped by a series of pivotal remarks that moved the conversation from high‑level enthusiasm to concrete, responsible, and results‑driven AI deployment. Ravi Agrawal’s opening anecdote set a balanced tone, which was deepened by Professor Mausam’s caution on bias and trust. Technical challenges and solutions were introduced by Ramesh Revuru and T. Srinivasan, while Martin Wilcox and Suvendu Pati provided evidence of scalable, production‑grade AI and policy integration. Operational examples from Ram Ganesh, Avneesh Pandey, and Shashi Bhushan Shukla demonstrated real‑world impact across tax, securities, and policing. Collectively, these comments redirected the dialogue toward deterministic, human‑centered, and measurable AI applications, establishing a clear roadmap for cross‑sectoral collaboration and responsible governance.

Follow-up Questions
How to ensure AI bias does not creep into models used for law enforcement and tax enforcement?
Concern about algorithmic bias affecting diverse society and undermining trust.
Speaker: Professor Mausam
How to establish a centralized data repository for cross‑agency intelligence while respecting privacy?
Need for data sharing across government agencies to improve intelligence without violating privacy.
Speaker: Professor Mausam
How to maintain human‑in‑the‑loop oversight to preserve trust in AI‑driven enforcement?
Prevent autonomous AI errors and retain accountability in decision‑making.
Speaker: Professor Mausam
How to balance AI‑driven surveillance with civil liberties and privacy protections?
Increased surveillance must not infringe on citizens’ rights.
Speaker: Professor Mausam
How to develop and implement an AI sandbox for experimentation while ensuring security and compliance?
Sandbox needed to allow entities to test AI despite compute and data constraints.
Speaker: Suvendu Pati
How to scale graph analytics for India‑scale data without performance bottlenecks?
Graph analytics are O(N²); need scalable, high‑performance solutions for nationwide deployment.
Speaker: Martin Wilcox
How to integrate multimodal data (images, audio, text) effectively for financial crime detection?
Leveraging unstructured data alongside structured transaction data can improve detection of complex fraud.
Speaker: Martin Wilcox
How to implement real‑time transaction scoring for digital payments to prevent mule accounts?
Future digital payments intelligence platform aims to score transactions at the moment they occur.
Speaker: Suvendu Pati
How to democratize AI development within regulatory bodies and ensure capacity building?
Building AI skills across the organization, not just in IT, is essential for sustainable adoption.
Speaker: Avneesh Pandey
How to detect and mitigate synthetic‑identity and deep‑fake document misuse across law enforcement?
Emerging AI‑assisted fraud poses new threats that require dedicated detection mechanisms.
Speaker: Shashi Bhushan Shukla
How to create collaborative mechanisms for LEAs to share AI misuse incidents globally?
A coordinated approach is needed to address cross‑border AI‑enabled threats.
Speaker: Shashi Bhushan Shukla
How to ensure explainability and fairness in AI models used for tax compliance?
Accountability, transparency, and fairness are critical for public acceptance of AI‑driven enforcement.
Speaker: Ravi Agrawal
How to train domain‑specific sovereign LLMs with limited data while preserving data security?
Use of LoRA and controlled data pipelines to create tax‑focused models without exposing data.
Speaker: T. Srinivasan
How to evaluate the effectiveness of AI‑driven nudges on taxpayer behavior and compliance outcomes?
Measuring impact of nudges (e.g., foreign asset disclosures) is needed to refine strategies.
Speaker: Shashi Bhushan Shukla
How to build continuous capacity and training programs for AI adoption across the Income Tax Department?
Sustained human expertise is required so that humans drive AI rather than being driven by it.
Speaker: Ravi Agrawal

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Agents of Change: AI for Government Services & Climate Resilience


Session at a glance: Summary, keypoints, and speakers overview

Summary

The panel opened with Minister Sridhar Babu outlining a shift from generative AI to “agentic AI” that can act autonomously, positioning AI as a form of public infrastructure and a co-governor for tasks such as flood prediction, agricultural advice, climate monitoring and health-risk anticipation [22-23][40-41][45-48][52-54][57-62][80-84]. He cited concrete initiatives, including a Telugu-language AI for land-record management, satellite-driven heat-zoning for Hyderabad, solar-edge computing nodes for resilient services, a sovereign AI nerve centre (ICOM) and an open data-exchange platform that grounds intelligence in integrity, to illustrate how AI is being embedded in governance [57-64][72-77].


Panelists concurred that the most significant change is the move from narrow, task-specific tools to end-to-end, systems-level agentic AI capable of acting on behalf of users [110-113][115-117][119-122][124-125]. Srini Tallapragada described an AI agent as needing a defined role, knowledge, short- and long-term memory, and the ability to act via APIs across channels, while emphasizing guardrails, auditability and a “trust layer” to curb hallucinations and bias [133-144][145-148]. Saibal Chakraborty raised governance concerns, questioning how high-stakes public-sector processes such as multi-million-dollar RFPs can be safely automated and whether a human-in-the-loop remains essential [146-154]. Lee Tiedrich warned of over-reliance, urging careful use-case selection, sandboxing of immature applications, and clear liability and interoperability rules for third-party agents [190-203]. Mike Haley highlighted the probabilistic nature of AI, arguing that transparency, human override and continuous feedback are vital for trust [217-231].


Srini distinguished strategic sovereignty (control over data and policy) from technical sovereignty (control over the hardware supply chain), urging governments to secure data locally while pursuing longer-term technical independence [246-251]. Both Saibal and Lee stressed the need for upskilling public officials and for globally coordinated, adaptable standards and evaluation frameworks to keep pace with rapid AI advances [253-254][260-279].


Looking ahead, panelists suggested success would be measured by vernacular AI tools that give farmers actionable advice, faster and safer infrastructure delivery, and measurable income gains for the lowest-income quintile [334-335][345-352][355-356]. They concluded that iterative, “agile” regulation, continuous feedback loops, and collaboration between policymakers, engineers and standards bodies are essential to harness AI agents responsibly [313-328][260-279]. With robust guardrails and inclusive policies, AI agents can transform governance into a proactive, resilient system that serves citizens more effectively [90-92].


Keypoints


Major discussion points


The emergence of “agentic” AI as a paradigm shift – Speakers repeatedly noted that the field has moved from isolated, query-based models to autonomous agents that can act on behalf of users and organisations. The Minister described moving “from generative AI that simply answers … to agentic AI that acts now” [22-24]; Saibal called it “the single biggest change … end-to-end AI-led execution of business processes” [110-113]; Lee highlighted the same shift as “the emergence of agentic AI … ability … to act on behalf of people” [115-117]; Mike added that agents now “abstract the problem, chain of thought reasoning … turn … into sequenced action” [119-122]; and Srini summed it up as moving “from co-pilot human in the loop to agents which can act and really provide value” [124-125].


AI agents as tools for public-sector transformation – The Minister gave concrete examples of how agents are being embedded in Telangana’s governance: predictive flood-warning for the “Moosey” river [45-48], AI-driven agricultural advisories that learn from farmer dialects [50-54], satellite-based land-record and climate-event compression [57-60], AI-guided urban cooling and solar edge-computing for resilience [61-64], and a sovereign “AI nerve centre” with an open data exchange that powers health-risk anticipation and climate-ready services [70-84][90-92].


Guardrails, trust, and human-in-the-loop are essential – Multiple panelists stressed that autonomous agents must be bounded by clear policies, auditability, and transparency. Srini listed the required components of a trustworthy agent – role definition, knowledge, memory, API access, surface-channel support, and “guardrails … a trust layer” [136-141][144-148]; Lee warned about “over-reliance” and the need for sandboxing, liability frameworks, and interoperability safeguards [190-203]; Mike argued that because agents are probabilistic, “trust actually depends on transparency and understanding and then the ability to come in and control something” [216-231]; Saibal highlighted the high stakes of public-procurement RFP generation and the need for human review [146-154]; and Saibal (later) noted up-skilling of public-sector staff as a critical guardrail [253-254].


Data sovereignty and strategic vs. technical control – The Minister introduced a “sovereign AI nerve centre” and a “Telangana data exchange platform” that keeps data within the state and enables AI-driven policy [73-78]; Srini expanded the concept into two layers of sovereignty – strategic (control of data, policies, human-in-the-loop) and technical (control of the full hardware-software supply chain) [246-251], urging governments to pursue the easier strategic track now while planning for the longer-term technical track.


Concrete use-cases and the gap between vision and implementation – Panelists discussed where agents are already delivering value and where challenges remain. Mike described agents that analyse floodplains and optimise drainage in infrastructure design [169-174]; Srini gave disaster-response bots (e.g., “Bobby” in the UK, “Terry” for police in Tasmania) that free up human time [185-188]; and both acknowledged that many projects are still pilots, with the real breakthrough coming from the underlying architecture that ties them together [68-70][180-188].


Overall purpose / goal of the discussion


The session was convened to explore how AI agents can become a “force multiplier” for a “Better Tomorrow”, focusing on (1) the technological shift to agentic AI, (2) its potential to reshape public-sector services and infrastructure, (3) the policy, governance, and trust frameworks needed to deploy it responsibly, and (4) concrete pathways for governments, especially in the Global South, to adopt and scale these capabilities while safeguarding sovereignty and equity.


Tone of the discussion


Opening (Minister’s remarks) – Visionary and celebratory, emphasizing a historic inflection point and the promise of AI-driven governance [16-24][40-43].


Panel exchange on the shift to agentic AI – Optimistic and forward-looking, with excitement about new capabilities [110-122][124-125].


Mid-session (use-case sharing & sovereignty) – Pragmatic and demonstrative, citing real pilots and concrete state initiatives [45-64][70-84][73-78].


Guardrails segment – Cautious and measured, highlighting risks, the need for transparency, human oversight, and up-skilling [190-203][216-231][253-254].


Closing reflections – Hopeful yet grounded, focusing on measurable outcomes (farmers’ access, infrastructure speed, income uplift) and the importance of iterative, “agile” regulation [334-335][339-343][345-350][355-357].


Overall, the conversation moved from high-level enthusiasm to a balanced mix of optimism, practical illustration, and sober acknowledgement of the governance challenges that must be addressed for AI agents to deliver public value.


Speakers

Victoria Espinel – Panel moderator and discussion facilitator; thanked the Salesforce team, indicating affiliation with Salesforce [S12].


Minister Sridhar Babu – Minister of Telangana, Government of India; speaks on AI policy, governance, and public-sector AI initiatives [S3].


Lee Tiedrich – Professor and AI-safety researcher; contributed to the International AI Safety Report [S1][S2].


Mike Haley – Executive/Engineer at Autodesk; works on AI for infrastructure, digital twins, BIM, and AI-agent development [S5].


Saibal Chakraborty – Managing Director and Senior Partner, Boston Consulting Group; focuses on AI strategy and public-sector consulting [S7].


Srinivas Tallapragada – Engineering leader (likely at Salesforce); leads AI platform engineering and AI-agent solutions for government services [S10][S11].


Additional speakers:


– None


Full session report: Comprehensive analysis and detailed insights

1. Opening – Victoria Espinel opened the session by welcoming a “very special guest”, Minister Sridhar Babu, and invited him to the podium for a keynote address [1-6].


2. Minister Babu’s keynote


* He framed the moment as a historic inflection point for governance and introduced AI’s “three-life” view: the first life in research labs, the second in policy papers, and the third in how it affects the lives of everybody [55-58].


* He argued that the era of “generative AI that simply answers” is giving way to “agentic AI that acts now” [22-24].


* AI was positioned as public infrastructure and a “co-governor” for critical functions such as flood prediction on the Moosey river in Hyderabad [45-48].


* He described agents as teammates, pilots and co-pilots that work alongside human operators [48-50].


* Concrete Telangana pilots were highlighted: a Telugu-language AI that records land records [57-60]; satellite-driven heat-zoning that will inform urban cooling and green-belt planning for Hyderabad by 2035 [61-62]; solar-powered edge-computing nodes that keep services running when the grid fails [63-64]; and a sovereign “AI nerve centre” (ICOM) coupled with an open data-exchange platform that currently holds 1,084 data sets, transformed from “administrative exhaust to ecological signal” [70-77][73-76].


* Anticipatory services were showcased: health-risk alerts [80-84]; AI that forecasts heat waves, prepares shade corridors, and gives farmers assurance before loss [85-88].


* He announced two upcoming projects – an “AI city” and a “Bharat future city” that will be net-zero, self-learning, and provide its own compute and policy advisors [68-71].


* The minister concluded by emphasizing the need for a sovereign data strategy and thanked Salesforce for its partnership [72-78].


3. Panel ice-breaker – Victoria asked the panel to define the paradigm shift.


* Saibal Chakraborty declared that the conversation has moved “decisively towards agentic AI… end-to-end AI-led execution of business processes or government processes” [110-113].


* Lee Tiedrich echoed the emergence of agents that can act on behalf of people [115-117].


* Mike Haley added that agents now “abstract the problem, chain of thought reasoning… turn… into sequenced action”, moving from task-specific tools to systems-level orchestration [119-122].


* Srinivas Tallapragada summed up the shift as moving “from co-pilot human in the loop to agents which can act and really provide value” [124-125].


4. Defining a trustworthy AI agent – Srinivas outlined the essential components: a clearly defined role, domain knowledge, short- and long-term memory, the ability to act via APIs across channels (e.g., WhatsApp, web, SMS), and, crucially, guardrails forming a “trust layer” that prevents hallucinations, bias and unpredictability while a command-centre architecture provides auditability and independent testing [133-148][214-215].
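The anatomy Srinivas outlines can be pictured as a data structure. The following is not part of the session record but a minimal Python sketch, under the assumption that illustrative names (`Agent`, `act`, the `faq` tool, the sample guardrail) stand in for whatever a real platform provides: a role, knowledge, short-term memory, channels, API tools, guardrails and an audit trail forming the “trust layer”.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Agent:
    """Illustrative sketch of the agent components described in the session:
    role, knowledge, memory, channels, tools (APIs), guardrails, auditability."""
    role: str
    knowledge: list[str]
    short_term_memory: list[str] = field(default_factory=list)
    long_term_memory: list[str] = field(default_factory=list)
    channels: tuple[str, ...] = ("web",)
    tools: dict[str, Callable[[str], str]] = field(default_factory=dict)
    # Guardrails encode what the agent is NOT supposed to do: each rule
    # returns False to veto a request before any action is taken.
    guardrails: list[Callable[[str], bool]] = field(default_factory=list)
    audit_log: list[str] = field(default_factory=list)

    def act(self, channel: str, tool: str, request: str) -> str:
        # Trust layer: every action passes the guardrails and is audited.
        if channel not in self.channels:
            raise ValueError(f"unsupported channel: {channel}")
        if any(not rule(request) for rule in self.guardrails):
            self.audit_log.append(f"BLOCKED: {request}")
            return "Request blocked by guardrails."
        self.short_term_memory.append(request)
        result = self.tools[tool](request)
        self.audit_log.append(f"{channel}/{tool}: {request} -> {result}")
        return result

# Hypothetical usage: a citizen-services agent that vetoes out-of-scope requests.
agent = Agent(
    role="Answer non-emergency citizen queries",
    knowledge=["service FAQ"],
    channels=("web", "whatsapp"),
    tools={"faq": lambda q: "Office hours are 9-5."},
    guardrails=[lambda q: "legal advice" not in q.lower()],
)
print(agent.act("whatsapp", "faq", "When is the office open?"))  # prints: Office hours are 9-5.
print(agent.act("web", "faq", "Give me legal advice"))  # prints: Request blocked by guardrails.
```

Keeping the guardrail check and the audit log inside `act` mirrors the command-centre point: no action can bypass the trust layer, and every action (including vetoed ones) leaves an auditable trace.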


5. Governance & guardrails


* Saibal warned that high-stakes tasks such as drafting multi-million-dollar RFPs demand strict guardrails and a final human review to avoid costly errors [146-154].


* Lee cautioned against over-reliance, urging careful use-case selection, sandboxing of immature applications, and clear liability frameworks for third-party agents [190-203].


* Mike emphasized the probabilistic nature of AI and advocated “nutrition-label” transparency cards that disclose model type, training data, accuracy and known bias, arguing that trust derives from transparency and the ability for humans to intervene [217-304].


* Saibal highlighted up-skilling public-sector staff as perhaps the most important guardrail, enabling officials to recognise when AI outputs need human verification [253-254].
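Mike’s “nutrition-label” idea resembles a model card. The following is not from the session; it is a minimal Python sketch in which every field name (`model_type`, `training_data`, and so on) is an assumption for illustration, not a standard schema.

```python
from dataclasses import dataclass

@dataclass
class TransparencyCard:
    """Illustrative 'nutrition label' disclosing model type, training data,
    accuracy and known bias, as suggested in the discussion."""
    model_type: str
    training_data: str
    accuracy: float            # headline metric on a held-out set
    known_biases: list[str]    # disclosed limitations
    human_override: bool       # can an operator intervene?

    def render(self) -> str:
        # Plain-text label a citizen-facing portal could display.
        lines = [
            f"Model type:     {self.model_type}",
            f"Training data:  {self.training_data}",
            f"Accuracy:       {self.accuracy:.0%}",
            f"Known biases:   {', '.join(self.known_biases) or 'none disclosed'}",
            f"Human override: {'yes' if self.human_override else 'no'}",
        ]
        return "\n".join(lines)

# Hypothetical example: a flood-prediction model's label.
card = TransparencyCard(
    model_type="gradient-boosted classifier",
    training_data="2019-2024 municipal flood-sensor records",
    accuracy=0.91,
    known_biases=["sparse sensor coverage in peri-urban wards"],
    human_override=True,
)
print(card.render())
```

Publishing such a label alongside each deployed agent is one concrete way to operationalize the transparency-plus-override recipe for trust that the panel converged on.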


6. Sovereignty – Srinivas distinguished strategic sovereignty (immediate control over data, policies and human-in-the-loop mechanisms) from technical sovereignty (full control of hardware and supply-chain). He urged governments to pursue strategic data control now while planning for technical sovereignty in the longer term [246-251].


7. Use-case highlights


* Mike described AI agents that analyse floodplains and optimise drainage, noting that such “small” design tasks are massive components of resilient infrastructure [169-174].


* Srinivas shared disaster-response bots – “Bobby” in the UK and “Terry” for police in Tasmania – that field non-emergency queries, freeing human operators for higher-value work [185-188].


* Both noted that many projects remain pilots and that the real breakthrough will come from the underlying architecture that integrates them [68-70][180-188].


8. Infrastructure prerequisite – Mike stressed that AI cannot deliver value without a solid digital foundation: building information modelling (BIM), standardized data assets and accurate digital twins are prerequisites to any AI-driven design effort, linking this need to India’s “2047 initiative” for massive infrastructure development [238-244].


9. Standards & agile regulation


* Lee called for a globally coordinated, multidisciplinary evaluation ecosystem, localisation for cultural and legal contexts, and a pipeline that translates standards into regulation [260-279].


* Srinivas proposed an agile, feedback-loop-driven policy framework that can evolve rapidly as technology advances [317-325].


10. Future success metrics


* Saibal: AI success will be evident when a farmer can converse in his or her own language with a small-language model and receive practical advice [334-335].


* Lee: the establishment of active AI-safety evaluation institutes that share techniques globally, especially in the Global South [339-343].


* Mike: infrastructure will be built faster than ever before, yet safely and with public confidence [345-352].


* Srinivas: a measurable uplift in per-capita income for the bottom 50 % of the population will be the ultimate indicator of AI’s inclusive impact [355-356].


11. Closing – Victoria thanked the participants, noted the optimism and pragmatic focus of the discussion, and highlighted the shared belief that, with appropriate guardrails, sovereign data strategies and inclusive capacity-building, AI agents can transform governance into a proactive, resilient system that delivers tangible benefits to citizens [90-92][331-336].


Session transcript: Complete transcript of the session
Victoria Espinel

We are going to start with a very special guest. Minister Babu is going to join us for a keynote. Very excited to hear what you have to say, coming from Hyderabad, one of the centers of technology in India and in the world. So, Minister, thank you so much for joining us. And if I could ask you to come to the podium. Thank you so much, Minister.

Minister Sridhar Babu

Very good afternoon to all. In fact, we welcome you to our city of Delhi, a beautiful city, the capital of India. And many people are from India, too. And we welcome the distinguished, eminent panelists who are sitting here to discuss the quest for a Better Tomorrow. And I welcome the leaders of the industry and the delegates over here. And especially coming to the subject, AI agents for a Better Tomorrow. You know, I wish to see, you know, where we stand today and where we would end up tomorrow. That is the point of discussion over here. We stand today at a fundamental inflection point in the history of governance. As a policymaker, I would like to mention a few points.

Because all the technocrats, and all the eminent, you know, scientists, maybe from physics or maths, may be sitting on the other side, to develop AI to the next level. You know, for decades, the digital revolution in government was defined by the transition from paper to portals and from physical queues to digital clicks. But today, we are witnessing the birth of a new paradigm. We are moving beyond generative AI that simply answers. We are moving to agentic AI that acts now. That is what I’ve been discussing with Mr. Srinivas just now. And for 30 years, our relationship with technology was a series of commands. We used to give commands and used to get the answers.

We typed, we clicked, we prompted. We were the masters of the search bar. We used to, you know, we were the masters. Nobody can say that. But I stand here today, and I can see, and everybody can see, the search bar is dying. In its place, something more profound. Just now Mrs. Sweeney was telling us about agency. It’s just evolving. The first era of our nation building was defined by land. The second by industry. And the third is being defined by something more illusory: the intelligence of the system. And the nations that lead this century are those that learn to treat intelligence not as a product but as a form of public infrastructure.

The idea is not philosophical for our state of Telangana. It is the story of our everyday governance, because it’s an IT-driven state, as we are known for. And I often say that artificial intelligence has three lives in the country. The first life is in the research labs. The second we take into the policy papers. But the third, ultimately, is how both of these, combined together, affect the life that truly matters for each and everybody. You know, how do we see it? It is when AI meets the real challenges of our lives. When artificial intelligence meets the dust we face, when AI meets the drought, when it meets the monsoons, when it meets the markets of the living society. And this is where its legitimacy is earned, when it really counters this dust, doubt, monsoons and markets. In Telangana, we see agents not as a tool. Here we would like to take them as teammates.

You know, the way the pilots rely on the co-pilots. Tomorrow, our government here in Telangana also sees that we rely on AI as co-governors: systems that can predict a flood before the first cloud gathers over the Moosey. The Moosey is our river in the midst of our city. You know, allocate resources before the crisis and deliver services before citizens ever need to ask. For example, take agriculture and a small farmer. I hail from a very remote area, and that too a rural place. For a farmer in my place, or in some other rural area, the climate is not an environmental concept; it is right now a daily negotiation with uncertainty.

So when we built our AI advisors, we did something unconventional. Right now we are trying to do it at the pilot stage. We asked farmers to train the system with us. You know, the dialects, the soil wisdom, the lived patterns become the patterns of the model. This is where the governance comes into the picture. To use the best of the technologies: whatever you invent or produce sitting in R&D, use the best of your grey matter to come up with some products, but until and unless we use and induce them into our governance, there will be no end result. That is what we believe in. That is why our Telugu-first AI can record land records, interpret satellite indicators and compress the time between the climate event and an incident settlement.

So this saved lots of time, you know, for our government agencies as well as for the end user, the farmer. Our satellite-driven heat analyses no longer stop at mapping temperatures. They now shape zoning, green belts and urban cooling strategies for Hyderabad, which we are planning to take up to the core by 2035. And across 33 districts in our state, our solar-powered edge computing nodes ensure that government services and climate systems remain operational when the grid fails. And this is also one of the novel things: Telangana is the first state where we have implemented it. Yet I don’t claim that these are examples for climate. This is just a fact of a story.

This is just a beginning. This is the first preface, we can say, because the real breakthrough is not from each project. It is from the architecture that binds them together. Our future projects, like the state-of-the-art infrastructure in the upcoming AI city, an absolutely dedicated AI city, and the Bharat future city, which shall be a net-zero city, are designed not as smart districts, either for technology or for other aspects, but as self-learning cities: territories that define sustainability, territories which can provide their own compute and make them policy advisors. Our country’s first sovereign AI nerve centre, ICOM: you know, this is the first-ever initiative by any state in India to come up with a sovereign AI nerve centre, which is supposed to be the AI innovation hub named ICOM. The aim and objective is, you know, that this intelligence shall go deep, beyond just incubation, but also render into R&D, and shall be the prime focus of creating AI-ready talent for tomorrow’s world. And I would like to mention here that Hyderabad and Telangana is the first state to come up with a platform: the Telangana data exchange platform. This sovereign, open data pipeline ensures that the intelligence is grounded in integrity.

So the platform is in the open. And this is the first state to have put all the data on this platform. You know, if we go through it, by this open data pipeline, 1,084 data sets have moved from administrative exhaust to ecological signal. We have created something rare in the global south: a state that generates its own intelligence at scale. And we have seen the results too. And the results have shown: healthcare doesn’t wait for symptoms. It now anticipates risk. Because of the data exchange we have done with our co-partners, even in healthcare, with the doctors or with the public health institutions, they are not just waiting to deliver the medication, but predicting the risk and trying to put it into action.

And we are not waiting for the heat waves to come. We are trying to analyze, through the data, how we should place ourselves, and we are preparing corridors for the shade. And for farmers also, we believe, using this AI technology, we don’t want farmers to wait for the loss. You know, they have to receive assurance before despair. And we are also planning that infrastructure doesn’t wait to break. You know, it has to whisper when it will fail. You know, when all these cutting-edge technologies, especially AI, are deployed with purpose, AI agents offer government something rare in public life: the ability to act before harm, to prepare before shock, to protect before loss. That is how resilient infrastructure emerges, how safe, climate-resilient cities take shape, and how our public services become anticipatory, humane and trusted.

And this is the future we are imagining, and we are trying to put all our actions into the stream. It is this operating system we dreamt, and we started running it. And I believe the next chapter of statecraft will not be written in the boardrooms of traditional power centers but in the living laboratories of the global south. In cities like Hyderabad, the world can already see a preview of what an intelligent century of governance looks like. Let us leave Bharat Mandapam today, here, while this great convention is taking place, with a shared conviction that the tomorrow we are building is not just smarter, it is braver. However, you know, as the great caption goes, AI for everyone, AI for human welfare should be the theme.

And also, I as a policymaker, and you as technology experts sitting over there, should aim and anticipate for it. I thank the organizers for giving me, you know, a length of time to air my pitch on behalf of our state of Telangana. I would like to thank the Salesforce team, especially the team management, who invited me over here for gracing this, and for having, you know, all the best brains sitting over here, the grey matter who will be doing much more for the welfare of our human beings. Thank you very much.

Victoria Espinel

Minister, thank you so much for joining us. We very much appreciate it. It was very exciting to hear what’s happening in Hyderabad and in Telangana. Let’s kick our panel off. Alright, so I am going to start with an icebreaker. Everyone gets 30 seconds to respond. This panel is about AI agents, so, and I’m going to start there and then go towards me, what would you say is the single biggest difference between the AI agents we were discussing when we sat here last year and the AI agents that we are seeing today? Saibal, can you kick us off?

Saibal Chakraborty

So I think in my mind the conversation has moved decisively towards agentic AI. We are no longer talking about, as the Honorable Minister also said, solving discrete problems or discrete searches. We are now looking at end-to-end AI-led execution of business processes or government processes. I think that’s the single biggest change in thinking that has come up.

Victoria Espinel

Professor Lee Tiedrich?

Lee Tiedrich

To put this in context, I was involved in the International AI Safety Report, and we just had our panel on that a little while ago. And Professor Bengio was saying the biggest change from ’25 to ’26 is the emergence of agentic AI. And from my perspective, its ability not only to do the end-to-end, but to also act on behalf of people, is really the big change.

Victoria Espinel

Mike?

Mike Haley

So I’m probably going to jump on the train here. You know, what we were seeing last year was narrow agents able to solve specific problems. What we see now are agents that are able to abstract the problem, do chain-of-thought reasoning, take that and turn it into sequenced action, and turn to multi-agent, sort of systems-level thinking. So it’s the move from task-specific to systems-level that is the big shift that I’m seeing.

Victoria Espinel

And Srini?

Srinivas Tallapragada

Yeah, so I think for me the big shift has been from co-pilot, human in the loop, to agents which can act and really provide value, business value. And that’s been the big shift.

Victoria Espinel

So let’s talk about that value. Let’s talk about AI agents as a force multiplier. I’m going to start here this time. Srini, you lead engineering for one of the biggest platforms in the world. There’s a lot of discussion about AI agents. Can you demystify this? What does that mean?

Srinivas Tallapragada

Yeah. So what does that mean? An agent, just like a human, first of all has to act. It has agency, and it acts. That’s the first big difference. And like any agent, it has to have a couple of things. It has to know a role: just like a human, it needs to know what it’s supposed to do, what the jobs to be done are. It needs knowledge: just like the knowledge I hold in my mind, an agent has to have knowledge and some memory, both short-term and long-term. And then it should also be able to act; in a digital world, that means being able to act on an API or something.

And then it should be able to act wherever the surface is, wherever the user is interacting with it: a WhatsApp channel, a web channel, a digital channel, or SMS text. Most important in all of this is that we should have guardrails on what it’s not supposed to do. And then all of it has to be covered, to make it useful, with what we call a trust layer, because these things can hallucinate, they can have bias, they can have toxicity; you have to avoid all of that, and they are ultimately unpredictable. So it should have governance, and then auditability. Doing all of this is what an agent does. This is also why, even though there is a lot of hype, in reality it hasn’t diffused enough; this business value is what we as vendors are trying to bridge.

Victoria Espinel

Thank you. Saibal, I’m going to go to you next. Let’s talk about governance. We sit here in Delhi, the capital of one of the greatest nations of the world. The public sector: are they ready for this? How do we think about that?

Saibal Chakraborty

So let me not answer that question directly: I think the public sector needs to be ready, all the way from managing public finances and public procurement to managing their workflows and processes better. There is no way the public sector can avoid this. However, as, Srini, you pointed out, the stakes here are very, very high. Imagine an agent crafting an RFP, a multi-million or a billion-dollar RFP, on behalf of the government. In public procurement, we often sacrifice speed for procedural tightness. So what guardrails do we actually put around an agent, or more? Can it really be end-to-end? Can it really be fully autonomous?

Or do I still need that last human layer to make sure that the T’s are crossed and the I’s are dotted? Because the stakes are really high, and a mistake can really lead to a lot of negative impact. So I think the public sector has to be ready, but some of these guardrails have to be thought through. In the context of the public sector, are agents fully autonomous, or do they still operate with a little bit of that human layer? I think that has to be thought through.

Victoria Espinel

That’s great, thank you. I love that you said RFPs, because that’s a concrete example. So let’s talk a little bit about use cases. Mike, I’m going to go to you: let’s talk about resilient infrastructure. One of the examples I hear a lot for AI agents is that they can help you make reservations, and I love to eat, so I think making restaurant reservations is actually pretty valuable to me. But could an AI agent do something like design a bridge? Could it design an energy grid? Where do we stand between reality and science fiction?

Mike Haley

Yes, I think we’re tracking pretty quickly to agents being able to do just those kinds of things. In the past, using computational methods and AI, which have been around for a reasonable time, for these things has been very difficult. Because if you’re using some form of computational method or AI to design a bridge, you have to specify that bridge perfectly. You have to give it perfect inputs. Now, it turns out that when a designer is designing something, they don’t have perfect inputs. That’s the process of design: actually figuring out what your inputs are, right? So this has always been a bit of a barrier to people using these advanced methods.

With AI, and specifically AI agents, you’ve now got a much easier way of interacting. It’s more forgiving towards fuzzy requirements and earlier stages of thinking. It’s able to give you things that inspire you. So one of the things I talk a lot about publicly is the notion of agents and creatives working in a loop together: it breaks the cycle where the engineer has to come up with every idea from scratch. Rather, describe what you’re doing and let the agents explore. I’ll give you one example specifically in infrastructure, because you wanted to get concrete. Something that we work with is water systems, for example. We’ve built AI agents that can analyze floodplains.

They can analyze how you might want to think about water drainage and these kinds of things. So every time you’re making a decision early on in your design, you can let this thing run through, and it’s going to optimize your design to ensure that drainage is going to be successful. Now, drainage seems like a small little side thing, but it’s a pretty massive part of infrastructure, and having an agent handle that for you is a pretty big deal.

Victoria Espinel

Mike, I have very close family ties to Louisiana, so drainage and flood zones, that is not a small thing. That is a very, very big thing. And actually, that’s a perfect segue to the question I wanted to ask Srini. So one of the most complex things that a government might have to deal with is disaster response. Is that a place where AI agents could be helpful?

Srinivas Tallapragada

I really like the theme, welfare for all. And while we can think of very big things AI might do, AI can add value right now, and disaster response is one good example. Another small example I wanted to give: the key is to give back time to people. That’s very valuable; giving back time is a very noble goal, in my opinion. So we have this very interesting use case where a city in New Thames in the UK created an agent called Bobby. Bobby is a UK term for a policeman, and citizens ask it a lot of questions which are not emergencies, and Bobby answers them.

More than 90% of them get a lot of value. What was interesting for me was another city, in Tasmania, which is using our product, Agentforce, to roll out agents to more than a thousand police officers, because a lot of the time, when they are in the field, policemen, new or more experienced, have a lot of questions. They call this agent Terry, and a lot of policemen say Terry is their best partner. So while we can think about futuristic things, here and now there is a lot we can provide with the technology guardrails, in the public sector and the private sector, if you have the right platform with trust and governance as a foundational value and all the right guardrails. We are seeing thousands of examples across the public and private sector in a crawl-walk-run mode: you start with something basic and still add value. You can still get to the most esoteric cases with multi-agent orchestration, but you can start with the basics today and get a lot of value. That’s what we are seeing.

Victoria Espinel

That’s great. So, Professor Tiedrich, we’ve talked a little bit about how agents can help governments serve their publics. Are there risks there? Are there risks of over-reliance?

Lee Tiedrich

Yeah, I mean, there are definitely risks, and I share the view of my co-panelists that there are a lot of benefits to using AI in government and improving government services worldwide. But like everything else, we have to do it cautiously and smartly, and some of it comes back to the human factor: pick your use cases wisely. One of the themes in the safety report is that AI is emerging very jaggedly. We have some use cases, like computer programming, where it is really good. There are others that may not be quite ready for prime time. So when we think about over-reliance, it’s about looking at where AI is excelling, focusing on those use cases, and maybe doing sandboxes around some of the others to give them a little more time to mature.

I think the over-reliance point, picking up on some of the great points, also comes back to guardrails. One of the things in the safety report is good news: we’ve made a lot of progress on guardrails and risk management, but as the technology moves quickly, a lot more work remains to be done. So we shouldn’t rely on AI so much that we overlook guardrails and where humans should be in the loop. And the third thing I’ll mention is the interoperability of different agents. As agents start to call upon third-party agents, it’s thinking through what guardrails apply, how you choose them, how you allocate liability, and how you test the agents that you’re going to bring into your system.

Victoria Espinel

So guardrails have come up; Srini mentioned it, you just mentioned it. Let’s talk about guardrails a little bit. Srini, we hear about chatbots, we hear about hallucinations. Those can be annoying. When you’re talking about a government deploying an AI system, AI agents, the consequences can be extremely significant; a hallucinating agent can be quite dangerous. So how do you engineer trust into a system so that a minister or a secretary can feel confident that it’s a tool they can use to serve their people?

Srinivas Tallapragada

These systems can drift, they can hallucinate. So you need a command center where you can see all of it. This is the difference between a pilot or a demo, and you can find thousands of demos on YouTube, versus real life, where these things matter. So we had to build all of these things so that customers or governments can build confidence: they can audit, they can test, and not just themselves, an independent party can also test. All of this infrastructure is what is required to make this a reality. But once you do that, there’s huge value you can immediately provide to either the customers or the citizens.

Mike Haley

Can I just add to that quickly? Because I think you hit a really interesting point at the end there. When people talk about guardrails, they think of guardrails as this perfect thing: that at some point the guardrails are going to get strong enough that every result is perfect, completely predictable, and we’re good. And I think we need to talk about the honesty of that. We’re talking about systems that are inherently probabilistic. You’re never going to make a probabilistic system 100% deterministic; it’s an oxymoron. Right. So what we’ve discovered is that you do all the guardrail work that we’re all talking about, but, where you were going at the end there, you also make systems that can look at the accuracy of what’s produced and give you some feedback on how accurate the solution is or how well it’s going to perform. And then, and this is very important, what we’ve discovered is giving control to the human being, in our case to an engineer, who is able to say, oh, I get it.

The result is a little off. I’m going to give it some more feedback. I’m going to reassess the results. I’m going to run it again. Or I might even go in myself and tweak that information. And what we’ve discovered, when I’m talking to an engineer and explaining how this stuff works, is that if I don’t give them that level of control, they don’t trust the system. The minute they know they can actually control it, they do. So trust doesn’t depend on a perfect answer. Trust actually depends on transparency and understanding, and then the ability to come in and control something.

Victoria Espinel

But I think that’s also because the engineers understand this: it’s a tool for them to use, to help them. It’s not something that is going to take control. Is there anything specifically with respect to infrastructure that you think governments should be mindful of?

Mike Haley

Yeah. Well, look, infrastructure is not known as the easiest and quickest thing to build in countries, right? And one of the really boring but absolutely necessary things with infrastructure is to make sure your digital ecosystem around that infrastructure is set. I see a lot of places in the world getting into building infrastructure, trying to do this quickly, without getting all that digital infrastructure in place. So building information modelling: ensuring that every part of your infrastructure is correctly modelled and represented at the right level. AI is not going to just magically come in and solve a bunch of problems unless you’ve got a lot of that digital stuff in place already.

So it’s a little bit of the boring work, but getting that stuff in place early is one of the biggest things. I’ve had a number of conversations here this week about the 2047 initiative in India, the amount of infrastructure that needs to be built in this country, and the importance of using something like building information modelling and getting standard data in place now. If you get that in place now, all this AI goodness is way easier to deploy against it.

Victoria Espinel

Yeah, please.

Srinivas Tallapragada

Yeah, so I heard a lot of discussion around sovereignty, and I think we should think of sovereignty at two levels: strategic sovereignty and technical sovereignty. By strategic sovereignty, I mean you get control of your data, your governance policies, and your operational policies. That, I think, you can implement right now and get value. And then there is the technical one, where people want to control their entire supply chain, from the chips on up. I would like governments, public officials, and policy officials to think of this as two tracks. One takes longer and a lot of capital investment. Don’t let the second track stop you from getting the benefit of the first track. The first track is easy: you can ensure the data doesn’t leave your country, your policy guardrails have control, you keep a human in the loop. You still get a lot of benefits while you continue on the second track. That would be my request to all the governments.

Saibal Chakraborty

Can I just make a quick build on what Mike said? Because I do a lot of my work in the public sector with governments. I think one of the biggest guardrails, beyond policies, is actually the skilling, the upskilling. Like Mike said, it’s an inherently probabilistic system, right? So you cannot expect it to give correct results all the time; there’s no such thing as a guaranteed correct result. The person who’s actually using the tool at the district level, at the state level, to make real government decisions, that person is not an AI engineer. That person needs to be upskilled and needs to be told what can be trusted and what requires that additional layer of check.

So if agentic AI has to take off in the public sector at scale, then that upskilling at various levels of government, on what can be trusted and what cannot, is also a very, very big component.

Victoria Espinel

Yes, I totally agree. Professor, I wanted to ask you: it feels trite to say technology is moving really quickly, but in the last few years, AI has been moving very, very quickly. We’ve talked a lot about guardrails. How should governments think about this? How are governments going to be able to keep up in terms of setting government expectations, and potentially setting regulation, for a technology that is moving so quickly?

Lee Tiedrich

It’s a hard one. AI has evolved into a global multidisciplinary field, and I think we need to bring the global community together. We need policymakers and lawyers talking with engineers and sector specialists to really inform the policy in real time. I’m a big fan of this; I spent a year working at NIST, the U.S. National Institute of Standards and Technology. We need to figure out how to do some of the guardrails starting with the science. Then the science can inform how to develop the standards and how to develop the evals. And then it becomes a question of policy.

I mean, different countries have different views on whether we should regulate or not. The U.S. has a very deregulatory approach; Europe is the opposite. But if we can agree on what those common standards are for evaluation and testing, then governments are free to decide: do we mandate this or not mandate that? And there’s one important nuance to add to the mix, and this has been a theme of the conference: we want some standardization on these evaluation mechanisms.

But we have to recognize that we speak different languages and have different cultural norms. So when we want standardization, we’ve got to be able to localize what the evaluation looks like; what might be appropriate in one country isn’t going to be appropriate in another. So it’s hard, but I think it starts with the science, the scientific report I would point people to, and building on that: working through the AISI network, working through standards organizations and all these other initiatives to develop the evaluations and build that evaluation ecosystem. Then regulation can overlay on top of that, as policymakers think appropriate for their jurisdictions.

Victoria Espinel

But if I could ask a follow-up question to you or any of the panelists: I think one of the challenges for companies, and I also speak for the enterprise software companies that I represent, is that it’s really helpful to know what those government expectations are. Industry is looking for clarity and predictability.

Mike Haley

Should I take a shot at it? As a software provider, you know, at Autodesk, we definitely deal with that, Victoria. We’ve had a couple of approaches. One, we’re obviously going to stay on top of this all the time, working with governments, making this part of the conversation. I spend a good part of my year traveling around the world, talking to governments and trying to help them understand what needs to happen, but also to help us understand, like you said, what they want. But the main problem is the sheer variance. Even within the United States, there are differences between state efforts, right? And then you get around the world, and it just gets even more complicated.

What we’ve tried to do is run as far ahead of this as we can. So if there is a way we can build in good controls right from the beginning, we build those controls to the maximum extent that we reasonably can. I’ll give you an example. Every AI feature we have in our software has something called a transparency card, which looks like a nutrition label on food. That label tells you what kind of model is behind the feature, what data was used to train it, what level of control you have, what accuracy it has, any bias that we know about in the model, that kind of thing.

And it’s a standard thing. We rolled that out about a year ago, really to try to stay ahead of things: if governments started asking for these things, well, we’ve got a transparency card. What’s actually happened now is that there’s a bunch of interest in that becoming part of a standard. I’m not saying that just to tout us, because I think other companies are doing great things in this space as well; you guys are doing a bunch of good stuff here too. I think this is an opportunity for us in industry to run ahead and help define some of these things, because it is moving so fast.

And, maybe I shouldn’t say this publicly, but the government doesn’t always have the best answers, right? So we can work with government to help them develop those answers and come up with good things, which then helps us resist some of the complexity that’s coming down the line.

Srinivas Tallapragada

Yeah, so one of the challenges here is that you can’t project too far ahead. It’s an exponential curve; it’s very hard to project. So sometimes you learn by doing. I think the biggest thing all governments can do is put in place a policy framework for how to update these standards. Today it usually takes a long time, so everybody’s afraid, and it’s even harder to change a standard. So then they try to solve everything up front, while things keep changing. The main thing policymakers could do is build in a feedback loop: a way to improve the policy framework, so you don’t need to be afraid of getting everything right at once.

You understand that, hey, you set some basics, and as new data comes in, you can update it. In engineering and product, we call this the product feedback loop and agile development. If we have something equivalent for policy, then everybody is clear, because we all want the right thing. There’s no disconnect on the foundational point: we want AI to help in a net positive way for our entire community. And with a changing technology, if the regulatory framework itself is able to change, then we are not afraid, and we don’t need to get everything right on day one.

And we can learn by doing it. So agile regulation.

Victoria Espinel

I have loved this panel. Unfortunately, we’re coming to a close, so I’m going to ask each of you one final question. Saibal, I’m going to start with you and then head this way. If we were so fortunate as to meet in Delhi again in three years, looking back, what would you say is the one thing that would best tell us whether or not we have succeeded in addressing some of these challenges? I know it’s a big question, sorry. Thank you.

Saibal Chakraborty

Since we’re in Delhi, I’ll give the answer in the Indian context. As inclusivity is one of the primary themes of this particular conference, for me the true success of AI will be if a farmer could talk to a small-language-model-powered tool in his or her own vernacular language and get practical advice on how to manage the crop and how to manage the cattle. And if that could be scaled up across the board, across the length and breadth of India, then that, for me, is the real win for AI.

Victoria Espinel

That’s a big win. I mean, that’s a significant impact. Thank you. Great. Professor Tiedrich?

Lee Tiedrich

Yeah, so I’m kind of coming back to the evaluation ecosystem. We’ve made a lot of progress over the last couple of years, but more work needs to be done. More countries, including in the Global South, are launching AISIs, AI safety or security institutes, which is not hard, binding regulation, but it is governments weighing in. Real progress three years from now would be an active AISI network that’s sharing information and making real progress on evaluation techniques. And one of the commitments that came out of some of the companies yesterday is also localizing that, so everybody, Global North and Global South, can benefit. Thank you.

Victoria Espinel

Mike?

Mike Haley

So earlier on, I spoke about infrastructure, physical infrastructure in countries. What I would hope to see, in a couple of years’ time, is infrastructure genuinely being developed faster than it has ever been developed, which is a really, really tough problem to make happen in the physical world. As a measure of AI truly delivering, that’s an incredible measure. But on top of that, it needs to do it without compromising safety, and without being a big black box that nobody understands, right? So what I would love to see is not only that infrastructure being developed faster, but the public engaged with it.

The engineers and people that are doing it feel comfortable with it. They feel secure. They feel fine signing off on that because they feel that this is reliable. Thank you.

Victoria Espinel

Srini?

Srinivas Tallapragada

If AI is as revolutionary as we all assume, I would hope that in three years the per capita income of the bottom 50 per cent income percentile will have measurably risen. That for me is the real impact of this technology.

Victoria Espinel

That’s fantastic. I want to say thank you to all of our panelists, and a special thank you to Srini and to Salesforce for bringing us all together here today. Thank you to our audience for joining us. A big round of applause for our panelists. Thank you.

Related Resources: Knowledge base sources related to the discussion topics (31)
Factual Notes: Claims verified against the Diplo knowledge base (8)
Confirmed (high)

“Victoria Espinel opened the session by welcoming a “very special guest”, Minister Sridhar Babu, and invited him to the podium for a keynote address”

The opening remarks with a very special guest, Minister Babu, are recorded in the transcript excerpts [S2] and [S21].

Confirmed (high)

“He framed the moment as a historic inflection point for governance and positioned AI as essential public infrastructure comparable to roads or electricity”

The keynote description of AI as a fundamental inflection point in governance and as public infrastructure is corroborated by the summary in [S5].

Additional Context (medium)

“He argued that the era of “generative AI that simply answers” is giving way to “agentic AI that acts now””

The shift toward “agentic AI” is discussed in the broader AI literature cited in [S26], which frames agentic AI as the next stage beyond reactive tools.

Additional Context (low)

“AI was positioned as a “co‑governor” for critical functions such as flood prediction on the Moosy river in Hyderabad”

While the transcript does not mention the Moosy river, the concept of AI as a co-governor for essential public services aligns with the view of AI as critical infrastructure in [S56].

Additional Context (medium)

“He described agents as teammates, pilots and co‑pilots that work alongside human operators”

The taxonomy of agents as copilots, autopilots, etc., is elaborated in [S93], providing nuance to the report’s description of agents as teammates and pilots.

Confirmed (high)

“Srinivas Tallapragada summed up the shift as moving “from co‑pilot human in the loop to agents which can act and really provide value””

The same wording appears in the transcript excerpt [S17], confirming Tallapragada’s statement.

Additional Context (medium)

“AI was presented as a public‑infrastructure foundation that must remain resilient, similar to financial systems, to ensure continuity of services”

The role of AI as critical infrastructure for service continuity is discussed in [S56], adding detail to the report’s claim.

Additional Context (medium)

“Telangana has launched an autonomous AI body (Aikam) to serve as a sovereign “AI nerve centre” for the state’s data strategy”

The establishment of the Aikam autonomous body, aimed at positioning Telangana as a proving ground for large-scale AI deployment, is described in [S96]; this provides background for the reported AI nerve-centre concept.

External Sources (100)
S1
Welfare for All: Ensuring Equitable AI in the World's Democracies — Lee Tiedrich, Amanda Craig Deckard, Sachin Kakkar
S2
Agents of Change: AI for Government Services & Climate Resilience — Mike Haley, Lee Tiedrich, Srinivas Tallapragada
S3
Agents of Change: AI for Government Services & Climate Resilience — Minister Sridhar Babu, Srinivas Tallapragada
S4
HETEROGENEOUS COMPUTE FOR DEMOCRATIZING ACCESS TO AI — This panel discussion on heterogeneous computing and AI infrastructure in India brought together leading experts from in…
S5
Agents of Change AI for Government Services & Climate Resilience — Saibal Chakraborty noted that conversations have moved decisively towards end-to-end AI-led execution of business and go…
S6
S7
Panel Discussion AI in Digital Public Infrastructure (DPI) India AI Impact Summit — economy. Saibal Chakraborty, Managing Director and Senior Partner, Boston Consulting Group. The moderator, C .V. Madhuka…
S8
https://app.faicon.ai/ai-impact-summit-2026/panel-discussion-ai-in-digital-public-infrastructure-dpi-india-ai-impact-summit — economy. Saibal Chakraborty, Managing Director and Senior Partner, Boston Consulting Group. The moderator, C.V. Madhukar…
S9
https://dig.watch/event/india-ai-impact-summit-2026/panel-discussion-ai-in-digital-public-infrastructure-dpi-india-ai-impact-summit — economy. Saibal Chakraborty, Managing Director and Senior Partner, Boston Consulting Group. The moderator, C.V. Madhukar…
S10
Agents of Change AI for Government Services & Climate Resilience — Saibal Chakraborty noted that conversations have moved decisively towards end-to-end AI-led execution of business and go…
S11
Agents of Change: AI for Government Services & Climate Resilience — Mike Haley, Srinivas Tallapragada; Minister Sridhar Babu, Srinivas Tallapragada; Saibal Chakraborty, Srinivas Tall…
S12
Agents of Change AI for Government Services &amp; Climate Resilience — -Victoria Espinel- Panel moderator and discussion facilitator
S13
FOSTERING FREEDOM ONLINE — – 606 Europe’s top court: people have right to be forgotten on Internet. Reuters. 13 May 2014. http:// www.reuters.com/…
S14
WSIS+20 Open Consultation session with Co-Facilitators — – **Jennifer Chung** – (Role/affiliation not clearly specified)
S15
HETEROGENEOUS COMPUTE FOR DEMOCRATIZING ACCESS TO AI — Minister Babu acknowledges the infrastructure challenges raised by the technical experts and commits to providing the ne…
S16
AI Meets Agriculture Building Food Security and Climate Resilien — A very good morning to all of you. Shri Devesh Chaturvedi, Rajesh Agarwal, Vikas Rastogi, Mr. Jonas Jett, Shubhati Swami…
S17
https://dig.watch/event/india-ai-impact-summit-2026/agents-of-change-ai-for-government-services-climate-resilience — This is just a beginning. This is the first preface we can say that because the real breakthrough is not from each proje…
S18
Ensuring Safe AI: Monitoring Agents to Bridge the Global Assurance Gap — Thanks so much, Madhu, and to all of our panellists for what was, I think, a very rich and grounded and also at times hu…
S19
Transforming Health Systems with AI From Lab to Last Mile — All of us here, we would have visited doctors at some point in time or have been sick. Anyone who has never visited a do…
S20
Building Inclusive Societies with AI — India is great at putting out fantastic reports. At the end of the reports, who is charged with the execution? Who is re…
S21
https://app.faicon.ai/ai-impact-summit-2026/agents-of-change-ai-for-government-services-climate-resilience — Or do I still need that last human layer to make sure that the T’s are crossed, the I’s are dotted because the stakes ar…
S22
Building the Workforce_ AI for Viksit Bharat 2047 — But I would not squadron of these children going through. of the exhibitions, exhibits. As artificial intelligence becom…
S23
From Innovation to Impact_ Bringing AI to the Public — The conversation also touches on AI’s tendency to be overly accommodating and its limitations in regulated industries wh…
S24
AI for Good – food and agriculture — – Development of multilingual chatbots for farmers Dongyu Qu: Excellencies, ladies, gentlemen, good morning. A year ago…
S25
AI for agriculture Scaling Intelligence for food and climate resilience — A lot of questions in the same question. So what I’ll do is I’ll just first take you through the initiatives. First of a…
S26
WS #283 AI Agents: Ensuring Responsible Deployment — Prendergast frames agentic AI as a critical technological shift where AI has evolved beyond reactive tools to become pro…
S27
Keynote-Nikesh Arora — Arora argues that when AI moves from providing advice to taking autonomous actions, determining who is responsible for t…
S28
How AI Drives Innovation and Economic Growth — Michael highlights specific examples of successful public sector AI applications that demonstrate the potential for gove…
S29
The Future of the Internet: Navigating the Transition to an Agentic Web — Transparency and clear guardrails are essential for users to understand what data is used and what actions are taken
S30
Panel Discussion Data Sovereignty India AI Impact Summit — High level of consensus with complementary perspectives rather than conflicting viewpoints. The implications suggest a m…
S31
Partnering on American AI Exports Powering the Future India AI Impact Summit 2026 — This comment demonstrates sophisticated understanding that ‘AI sovereignty’ isn’t a monolithic concept but represents di…
S32
Building Indias Digital and Industrial Future with AI — Deepak Maheshwari from the Centre for Social and Economic Progress provided historical context, tracing India’s digital …
S33
Engineering Accountable AI Agents in a Global Arms Race: A Panel Discussion Report — A significant gap remains between high-level policy requirements and practical technical implementation. Whilst basic IT…
S34
WS #214 AI Readiness in Africa in a Shifting Geopolitical Landscape — The discussion revealed that 19 African countries have developed national AI strategies, representing significant progre…
S35
Agents of Change AI for Government Services & Climate Resilience — So I think let me not answer that question, I think the public sector needs to be ready so all the way from managing pub…
S36
WS #283 AI Agents: Ensuring Responsible Deployment — Anne McCormick: Thank you, Anne McCormick, EY, Global Head of Public Policy. I’m interested in this context of policy no…
S37
Agents of Change AI for Government Services & Climate Resilience — So I think let me not answer that question, I think the public sector needs to be ready so all the way from managing pub…
S38
Open Forum #58 Collaborating for Trustworthy AI an Oecd Toolkit and Spotlight on AI in Government — Katarina de Brisis: Thank you. Let me start with a couple of reflections on the challenges when implementing AI. For us,…
S39
AI for Bharat’s Health_ Addressing a Billion Clinical Realities — Evidence: 2016 National Health Policy was first to explicitly address both private and public sectors, unlike 2002 policy…
S40
Keynote-Brad Smith — Summary: The transcript shows strong internal consistency in Brad Smith’s arguments around AI’s potential to address glob…
S41
Collaborative Innovation Ecosystem and Digital Transformation: Accelerating the Achievement of Global Sustainable Development Goals (SDGs) — While participants agreed on core objectives, they differed on implementation approaches and priorities. Some speakers e…
S42
Ad Hoc Consultation: Wednesday 31st January, Morning session — Nauru’s position champions the cause of developing states and converges with SDG 9 and SDG 16. The latter focuses on fos…
S43
Panel Discussion Data Sovereignty India AI Impact Summit — High level of consensus with complementary perspectives rather than conflicting viewpoints. The implications suggest a m…
S44
Panel Discussion AI in Digital Public Infrastructure (DPI) India AI Impact Summit — Summary:The discussion revealed relatively low levels of fundamental disagreement, with most differences centered on imp…
S45
European Tech Sovereignty: Feasibility, Challenges, and Strategic Pathways Forward — Moderate disagreement with significant implications. The disagreements are not fundamental conflicts but represent diffe…
S46
Panel Discussion AI in Digital Public Infrastructure (DPI) India AI Impact Summit — The discussion revealed relatively low levels of fundamental disagreement, with most differences centered on implementat…
S47
Diplomatic policy analysis — Overreliance on technology: While machine learning and analytics are powerful tools, they are not infallible. Overdepende…
S48
How AI Drives Innovation and Economic Growth — Summary:The speakers show broad agreement on AI’s transformative potential for development but significant disagreements…
S49
Building Sovereign and Responsible AI Beyond Proof of Concepts — Countries face difficult trade-offs between speed of AI adoption and maintaining sovereignty, often choosing slower deve…
S50
Part 2.5: AI reinforcement learning vs human governance — Governance structures are designed to maintain order, protect rights, and promote welfare, often requiring consensus and…
S51
Driving Social Good with AI_ Evaluation and Open Source at Scale — Moderate disagreement with significant implications. The disagreements reflect deeper tensions between technical efficie…
S52
Musk’s Grok AI struggles with news accuracy — Grok, Elon Musk’s AI model available on the X platform, encountered significant issues in accuracy following the attempte…
S53
International multistakeholder cooperation for AI standards | IGF 2023 WS #465 — Context is highlighted as a crucial element for effective engagement in standards development. Australia’s experts have …
S54
Comprehensive Discussion Report: AI Agents and Fiduciary Standards — International cooperation and enforcement Regulatory approach and implementation strategy
S55
Agentic AI in Focus Opportunities Risks and Governance — “We want standards.”[2]. “So we’re talking about standards.”[4]. “We’re talking about technical benchmarks.”[31]. “Don’t…
S56
AI as critical infrastructure for continuity in public services — Awareness and capacity gaps exist in understanding available standards and building blocks. Simple communication and pra…
S57
Transforming Agriculture_ AI for Resilient and Inclusive Food Systems — Consensus level: High level of consensus with significant alignment across different stakeholder perspectives (government…
S58
From agentic AI to agreement technologies: LLMs as a new layer in diplomatic negotiation — For observers outside computer science, this can feel like a genuine shift: AI that no longer only responds to queries, …
S59
Agents of Change AI for Government Services & Climate Resilience — Because all the technocrats are sitting on all the eminent, you know, scientists maybe from physics or the maths may be …
S60
WS #283 AI Agents: Ensuring Responsible Deployment — Prendergast frames agentic AI as a critical technological shift where AI has evolved beyond reactive tools to become pro…
S61
From Innovation to Impact_ Bringing AI to the Public — Discussion point: Agent-to-agent communication replacing human-app interaction Discussion point: Paradigm shift from apps…
S62
Agents of Change AI for Government Services & Climate Resilience — Evidence: Comparison to product feedback loops and agile development practices, noting that exponential technology curves…
S64
Workshop 6: Perception of AI Tools in Business Operations: Building Trustworthy and Rights-Respecting Technologies — Human-in-the-loop governance is essential – accountability cannot be outsourced to algorithms
S65
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Lt Gen Vipul Shinghal — Third, ensuring transparency in AI systems: Commanders must understand the data sources, training methodologies, and deci…
S66
Building Indias Digital and Industrial Future with AI — Data Sovereignty Beyond Localization: The conversation explored a nuanced definition of data sovereignty that goes beyon…
S67
Agentic AI in Focus Opportunities Risks and Governance — And that’s what we see from customers in terms of how they want to leverage data. So that’s one of my favorite use cases…
S69
Opening address of the co-chairs of the AI Governance Dialogue — ## Commitment to Actionable Outcomes Majed Sultan Al Mesmar: Bismillah ar-Rahman ar-Rahim. Excellencies, distinguished …
S70
AI-Driven Enforcement_ Better Governance through Effective Compliance & Services — Respected Honorable Chairman, Distinguished Speakers, Eminent Guests, Colleagues and Participants. It is my privilege to…
S71
AI-Driven Enforcement_ Better Governance through Effective Compliance & Services — On the more financial intelligence side, the input here would be more structured data, for example. So there are so many…
S72
Media Briefing: Unlocking the North Star for AI Adoption, Scaling and Global Impact / DAVOS 2025 — The overall tone was optimistic and forward-looking. Panelists expressed excitement about AI’s capabilities and potentia…
S73
Industries in the Intelligent Age / DAVOS 2025 — The overall tone was optimistic and forward-looking. Panelists were enthusiastic about AI’s potential while also acknowl…
S74
From High-Performance Computing to High-Performance Problem Solving / Davos 2025 — The overall tone was optimistic and forward-looking. Panelists were enthusiastic about quantum computing’s potential whi…
S75
Powering the Technology Revolution / Davos 2025 — The tone was generally optimistic and forward-looking, with panelists highlighting opportunities for innovation and prog…
S76
Bridging the AI innovation gap — The tone is consistently inspirational and collaborative throughout. The speaker maintains an optimistic, forward-lookin…
S77
Seventieth session — 9. The ICT environment offers both opportunities and challenges to the international community in determining how norms,…
S78
Dedicated stakeholder session (in accordance with agreed modalities for the participation of stakeholders of 22 April 2022)/OEWG 2025 — Centre for International Law: Thank you, Chair. In the interest of time, we will deliver a truncated version of a stat…
S79
Software.gov — In conclusion, digital public infrastructure is essential, especially during crises like the COVID-19 pandemic. It enabl…
S80
Digital Cooperation and Empowerment: Insights and Best Practices for Strengthening Multistakeholder and Inclusive Participation — ## Concrete Examples of Multi-Stakeholder Success Hisham Ibrahim provided specific regional examples, including Saudi A…
S81
Agenda item 5: Day 2 Afternoon session — Belarus:Distinguished Mr. Chair, given the great importance of cyberspace in people’s lives, Belarus has taken a number …
S82
HIGH LEVEL LEADERS SESSION IV — The analysis highlights several key points regarding the importance of a human rights-based approach to new technologies…
S83
Scoping Civil Society engagement in Digital Cooperation | IGF 2023 — Sheetal Kumar: Yeah. Positive. Okay, great. Thank you for that feedback. So the artificial intelligence paragraph is quit…
S84
Keynote-António Guterres — We need guardrails that preserve human agency, human oversight and human accountability
S85
Agentic AI in Focus Opportunities Risks and Governance — Enterprise Guardrails and Risk Management: Panelists emphasized the critical importance of implementing robust safety me…
S86
Science as a Growth Engine: Navigating the Funding and Translation Challenge — And that can also, then, decrease the industries wanting to invest if the hurdle of an extra three or five years of regu…
S87
Transforming Agriculture_ AI for Resilient and Inclusive Food Systems — The tone was consistently optimistic yet pragmatic throughout the conversation. Speakers maintained an encouraging outlo…
S88
Sandboxes for Data Governance: Global Responsible Innovation | IGF 2023 WS #279 — It is important to understand that the regulatory sandbox is not a decision-making or exemption-providing mechanism. Ins…
S90
Thinking through Augmentation — While Ucuzoglu is optimistic about the long-term impact of transformative technology, he acknowledges that it is not an …
S91
Multistakeholder Partnerships for Thriving AI Ecosystems — This comment introduces a sophisticated understanding of AI infrastructure needs, moving beyond simple data collection t…
S93
The Intelligent Coworker: AI’s Evolution in the Workplace — So again, we break down the framework of coworker into three buckets. Copilots, autopilots, and infinite pilots. An…
S94
Challenging the status quo of AI security — Babak Hodjat: Thank you very much, Sounil. Yeah, we came out here for two reasons, as cognizant, one, to get people invo…
S95
From Technical Safety to Societal Impact Rethinking AI Governanc — You know, their language is not represented in Gemini or anything, right? And I know everybody wants to impose Hindi on …
S96
Telangana launches Aikam to scale AI deployment — The Telangana government has launched Aikam, a new autonomous body aimed at positioning the state as a global proving grou…
S97
AI for Safer Workplaces & Smarter Industries_ Transforming Risk into Real-Time Intelligence — The system demonstrated several advanced features during the presentation:
S98
AI 2.0 The Future of Learning in India — Yes, as a regulator for teacher education, now Viksit Bharat Adhishthan is coming where it has been proposed to go with A…
S99
https://app.faicon.ai/ai-impact-summit-2026/ai-20-reimagining-indian-education-system — If we take AI out of Western knowledge, if we promote it in Indian knowledge, Indian context, Indian languages, then we …
S100
Building the Workforce_ AI for Viksit Bharat 2047 — Dr. Singh’s key assertion was that “artificial intelligence can substitute everything on this planet but it cannot subst…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Minister Sridhar Babu
6 arguments · 122 words per minute · 1656 words · 811 seconds
Argument 1
Evolution to agentic AI as public infrastructure and co‑governor (Minister Sridhar Babu)
EXPLANATION
The minister describes a shift from generative AI that merely answers queries to agentic AI that can act autonomously as part of public infrastructure. He frames AI as a co‑governor that can anticipate events and allocate resources before citizens request services.
EVIDENCE
He explains that the world is moving beyond generative AI to “agentic AI that acts now” and that the search bar is dying, replaced by a more profound system (lines [21-24][33-35]). He then states that intelligence should be treated as public infrastructure and that Telangana sees AI as a co-governor that can predict floods before clouds gather over the Musi river and allocate resources proactively (lines [40-41][45-47]).
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The discussion frames AI as a fundamental inflection point and essential public infrastructure, comparable to roads or electricity, and notes policymakers’ role in providing foundational resources [S5][S15].
MAJOR DISCUSSION POINT
Evolution to Agentic AI
AGREED WITH
Saibal Chakraborty, Lee Tiedrich, Mike Haley, Srinivas Tallapragada
Argument 2
AI for flood prediction, agricultural advisory, climate‑resilient services (Minister Sridhar Babu)
EXPLANATION
The minister outlines concrete public‑sector uses of AI agents, including early flood prediction, farmer advisory systems, and climate‑responsive urban planning. These applications aim to make services anticipatory rather than reactive.
EVIDENCE
He cites AI predicting floods before clouds appear and allocating resources ahead of crises (lines [45-47]), training AI advisors with farmers’ dialects and lived knowledge to support agriculture (lines [48-53]), using satellite-driven heat analysis for zoning and urban cooling strategies (lines [58-62]), and deploying solar-powered edge computers to keep services running during grid failures (lines [63-64]).
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Agricultural advisory systems, multilingual farmer tools, and climate-resilient urban planning are highlighted in AI for agriculture and climate-resilience sources [S16][S24].
MAJOR DISCUSSION POINT
Applications of AI Agents in the Public Sector
AGREED WITH
Mike Haley, Srinivas Tallapragada, Saibal Chakraborty
Argument 3
Creation of a sovereign AI nerve centre and an open data‑exchange platform for Telangana (Minister Sridhar Babu)
EXPLANATION
The minister announces Telangana’s first sovereign AI nerve centre (ICOM) and a state‑run open data‑exchange platform that consolidates thousands of datasets for AI‑driven governance. This infrastructure is presented as a foundation for trustworthy, locally grounded intelligence.
EVIDENCE
He describes ICOM as the first sovereign AI nerve centre and AI innovation hub, and mentions the Telangana data exchange platform that hosts 1,084 datasets, turning administrative exhaust into ecological signals (lines [72-79][74-77]).
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The sovereign AI nerve centre and the state-run open data-exchange platform are described as foundational to trustworthy AI-driven governance [S5].
MAJOR DISCUSSION POINT
Sovereignty and Data Strategy
DISAGREED WITH
Mike Haley, Lee Tiedrich
Argument 4
AI agents can provide proactive healthcare risk prediction, anticipating illnesses before symptoms appear.
EXPLANATION
The minister describes how AI-driven systems can monitor health data to identify emerging risks early, allowing preventive actions rather than reactive treatment.
EVIDENCE
He states that “The healthcare doesn’t wait for symptoms. It now anticipates risk. Because the data exchange we have done with our co-partners, even in the healthcare… they are predicting the risk and try to put it into action” (lines [80-84]).
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Health-system transformation with AI-driven risk prediction and early-warning health interventions is discussed in the health AI source [S19].
MAJOR DISCUSSION POINT
Applications of AI Agents in the Public Sector
Argument 5
Deploying solar‑powered edge‑computing nodes ensures continuity of government services and climate monitoring during grid outages.
EXPLANATION
The minister explains that edge computer nodes, powered by solar energy, keep critical public‑sector applications running when the main power grid fails, enhancing resilience of services and climate‑related operations.
EVIDENCE
He notes that “Across 33 districts in our state, our solar power edge computer nodes ensure that the government service and the climate remains operational when the grid fails” (lines [63-64]).
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Infrastructure challenges and the need for foundational compute resources, including edge nodes, are highlighted in the heterogeneous compute discussion [S15].
MAJOR DISCUSSION POINT
Applications of AI Agents in the Public Sector
Argument 6
Future AI‑focused cities (AI City, Bharat Future City) are envisioned as self‑learning, net‑zero territories that generate their own compute and policy advice.
EXPLANATION
The minister outlines plans for an AI city and a net‑zero “Bharat Future City” that will operate as self‑learning ecosystems, providing compute resources and acting as policy advisors for sustainable urban governance.
EVIDENCE
He describes these projects as “designed not as smart districts… but as a self-learning cities, territories that can provide themselves for the compute and make them policy advisors” and calls it “our country’s first sovereign AI nerve centre” (lines [70-73]).
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The concept of a self-learning AI city and its architecture is outlined in the AI city preview source [S17].
MAJOR DISCUSSION POINT
Future Vision and Success Metrics
Saibal Chakraborty
4 arguments · 152 words per minute · 569 words · 224 seconds
Argument 1
Shift to agentic AI enabling end‑to‑end execution of processes (Saibal Chakraborty)
EXPLANATION
Saibal argues that the conversation has moved from solving isolated problems to using agentic AI for end‑to‑end execution of business and government processes. This represents a fundamental change in how AI is applied at scale.
EVIDENCE
He states that the discussion has moved decisively towards agentic AI and that we are no longer talking about solving discrete problems but about end-to-end AI-led execution of business or government processes (lines [110-113]).
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The shift toward agentic AI for end-to-end process execution and the need for public-sector readiness are noted in the Agents of Change discussion and public-sector readiness remarks [S5][S9].
MAJOR DISCUSSION POINT
Evolution to Agentic AI
AGREED WITH
Minister Sridhar Babu, Lee Tiedrich, Mike Haley, Srinivas Tallapragada
Argument 2
AI‑generated RFPs and other public‑procurement tasks (Saibal Chakraborty)
EXPLANATION
Saibal raises the possibility of AI agents drafting multi‑million or billion‑dollar requests for proposals (RFPs) for governments, questioning how much autonomy can be granted and what guardrails are needed.
EVIDENCE
He imagines an agent crafting a multi-million or billion-dollar RFP on behalf of the government and asks whether such agents can be fully autonomous or require a final human check (lines [148-152]).
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The possibility of AI-generated procurement documents and the associated governance considerations are discussed in the public-sector readiness source [S9].
MAJOR DISCUSSION POINT
Applications of AI Agents in the Public Sector
AGREED WITH
Minister Sridhar Babu, Mike Haley, Srinivas Tallapragada
DISAGREED WITH
Mike Haley, Lee Tiedrich, Srinivas Tallapragada
Argument 3
Upskilling public‑sector staff to understand AI limits and maintain human oversight (Saibal Chakraborty)
EXPLANATION
Saibal emphasizes that beyond policy, the biggest guardrail is ensuring public‑sector employees are trained to recognize AI’s probabilistic nature, know what can be trusted, and apply human oversight where needed.
EVIDENCE
He notes that many public-sector users are not AI engineers and need upskilling to understand what can be trusted and what requires additional checks (lines [253-254]).
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Calls for upskilling public officials and emphasizing human-in-the-loop oversight appear in workforce-building and human-layer discussions [S22][S21].
MAJOR DISCUSSION POINT
Governance, Trust, and Guardrails
AGREED WITH
Minister Sridhar Babu, Mike Haley
DISAGREED WITH
Lee Tiedrich, Srinivas Tallapragada, Minister Sridhar Babu
Argument 4
Success measured by vernacular AI tools delivering practical advice to farmers nationwide (Saibal Chakraborty)
EXPLANATION
Saibal proposes that a key success metric is a farmer being able to converse in their own language with a small language model that provides actionable advice, scaled across the country.
EVIDENCE
He says the true success of AI would be if a farmer could talk to a small language-model-powered tool in their vernacular and receive practical crop and cattle advice, and that this could be scaled nationwide (lines [334-335]).
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Multilingual chatbot initiatives for farmers and broader AI-for-agriculture programs provide context for this success metric [S24][S25].
MAJOR DISCUSSION POINT
Future Vision and Success Metrics
Lee Tiedrich
4 arguments · 197 words per minute · 833 words · 252 seconds
Argument 1
Emergence of agentic AI that can act on behalf of people (Lee Tiedrich)
EXPLANATION
Lee highlights that the most significant change in AI is its ability not only to perform end‑to‑end tasks but also to act on behalf of individuals, marking a shift toward agency.
EVIDENCE
He references Professor Bengio’s comment that the biggest change is the emergence of agentic AI and notes its ability to act on behalf of people (lines [115-117]).
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The emergence of agentic AI capable of acting on behalf of individuals is highlighted in the Agents of Change discussion and heterogeneous compute overview [S5][S15].
MAJOR DISCUSSION POINT
Evolution to Agentic AI
AGREED WITH
Minister Sridhar Babu, Saibal Chakraborty, Mike Haley, Srinivas Tallapragada
Argument 2
Guardrails to prevent over‑reliance, manage interoperability and liability of third‑party agents (Lee Tiedrich)
EXPLANATION
Lee warns that over‑reliance on AI can be risky and stresses the need for guardrails, especially concerning interoperability, liability, and testing of third‑party agents.
EVIDENCE
He discusses risks of over-reliance, the importance of guardrails, interoperability, liability allocation, and testing when agents call upon third-party agents (lines [190-203]).
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Human-in-the-loop safeguards and the role of regulatory sandboxes for managing third-party AI agents are discussed in the human-layer and regulatory sandbox sources [S21][S23].
MAJOR DISCUSSION POINT
Governance, Trust, and Guardrails
AGREED WITH
Saibal Chakraborty, Mike Haley, Srinivas Tallapragada
DISAGREED WITH
Minister Sridhar Babu, Mike Haley
Argument 3
Call for global, multi‑disciplinary standards and a shared evaluation ecosystem, with localized adaptations (Lee Tiedrich)
EXPLANATION
Lee calls for worldwide collaboration to develop AI standards and evaluation mechanisms, acknowledging differing regulatory cultures and the need for localized implementations.
EVIDENCE
He advocates for global, multi-disciplinary standards, shared evaluation ecosystems, and notes the contrast between US deregulatory and European precautionary approaches, emphasizing the need for common evaluation methods that can be localized (lines [260-279]).
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The need for worldwide AI safety institutes and shared evaluation ecosystems is described in the AI safety monitoring source [S18].
MAJOR DISCUSSION POINT
Standards, Regulation, and Agile Policy
AGREED WITH
Mike Haley, Victoria Espinel
DISAGREED WITH
Saibal Chakraborty, Srinivas Tallapragada, Minister Sridhar Babu
Argument 4
Establishment of active AI safety evaluation institutes sharing techniques globally (Lee Tiedrich)
EXPLANATION
Lee envisions active AI safety or security institutes (ACs) that exchange evaluation techniques worldwide, fostering collaboration between Global North and South.
EVIDENCE
He mentions that more countries are launching AI safety institutes, and predicts that in three years there will be an active AC institute sharing evaluation techniques and localizing them globally (lines [339-343]).
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Active AI safety institutes that exchange evaluation techniques globally are outlined in the AI safety monitoring discussion [S18].
MAJOR DISCUSSION POINT
Future Vision and Success Metrics
DISAGREED WITH
Srinivas Tallapragada, Minister Sridhar Babu
Mike Haley
6 arguments · 213 words per minute · 1516 words · 426 seconds
Argument 1
Transition from task‑specific to systems‑level AI agents (Mike Haley)
EXPLANATION
Mike observes that AI has moved from narrow, task‑specific agents to systems‑level agents capable of chain‑of‑thought reasoning and orchestrating multi‑agent workflows.
EVIDENCE
He notes the shift from narrow agents solving specific problems to agents that can abstract problems, reason, and execute sequenced actions at a systems level (lines [119-122]).
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The evolution from narrow, task-specific agents to systems-level, chain-of-thought agents is noted in the Agents of Change discussion [S5].
MAJOR DISCUSSION POINT
Evolution to Agentic AI
AGREED WITH
Minister Sridhar Babu, Saibal Chakraborty, Lee Tiedrich, Srinivas Tallapragada
Argument 2
AI agents assisting infrastructure design, floodplain analysis, and drainage optimization (Mike Haley)
EXPLANATION
Mike provides an example where AI agents analyze floodplains and optimize drainage early in the design process, demonstrating how agents can add value to civil‑engineering projects.
EVIDENCE
He describes AI agents that can analyze floodplains, suggest water-drainage strategies, and optimize designs to ensure successful drainage, noting the massive impact of such capabilities on infrastructure (lines [169-173]).
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
AI-driven climate-resilient infrastructure design and floodplain analysis are mentioned in the Agents of Change and AI-for-agriculture sources [S5][S16].
MAJOR DISCUSSION POINT
Applications of AI Agents in the Public Sector
AGREED WITH
Minister Sridhar Babu, Srinivas Tallapragada, Saibal Chakraborty
Argument 3
Transparency “nutrition‑label” cards, human‑in‑the‑loop control, and acknowledging probabilistic nature (Mike Haley)
EXPLANATION
Mike explains that his company embeds transparency cards in AI features, detailing model, data, accuracy, and bias, and stresses that because AI is probabilistic, human oversight and feedback loops are essential for trust.
EVIDENCE
He details the “nutrition-label” cards that disclose model type, training data, accuracy, and bias (lines [301-304]), and adds that AI’s probabilistic nature requires human-in-the-loop control, feedback, and the ability to reassess results (lines [217-224]).
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Human-in-the-loop control, transparency, and probabilistic nature of AI are discussed in the human-layer and regulatory sandbox sources [S21][S23].
MAJOR DISCUSSION POINT
Governance, Trust, and Guardrails
AGREED WITH
Saibal Chakraborty, Lee Tiedrich, Srinivas Tallapragada
DISAGREED WITH
Minister Sridhar Babu, Lee Tiedrich
Argument 4
Industry‑led transparency standards (e.g., AI feature “nutrition‑label” cards) as proactive compliance (Mike Haley)
EXPLANATION
Mike highlights that providing standardized transparency information in AI products is an industry‑driven approach that can pre‑empt government requirements and become a de‑facto standard.
EVIDENCE
He notes that every AI feature now includes a transparency card similar to a nutrition label, describing model, data, accuracy, and bias, and that this practice is gaining interest as a possible standard (lines [301-304]).
MAJOR DISCUSSION POINT
Standards, Regulation, and Agile Policy
AGREED WITH
Lee Tiedrich, Victoria Espinel
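The transparency card Mike describes can be pictured as a small, fixed data structure attached to each AI feature. The sketch below is a hypothetical illustration of that idea, assuming Python; the field names and `render` method are invented for this example and are not taken from any actual product:

```python
from dataclasses import dataclass, field


@dataclass
class TransparencyCard:
    """A 'nutrition label' for an AI feature: which model it uses, what data
    it was trained on, how accurate it is, and what biases are documented."""
    feature_name: str
    model_type: str
    training_data: str
    accuracy: float                # fraction correct on an evaluation set
    known_biases: list[str] = field(default_factory=list)

    def render(self) -> str:
        """Format the card as the human-readable text shown beside the feature."""
        return "\n".join([
            f"AI feature: {self.feature_name}",
            f"Model: {self.model_type}",
            f"Training data: {self.training_data}",
            f"Accuracy: {self.accuracy:.0%}",
            "Known biases: " + (", ".join(self.known_biases) or "none documented"),
        ])


# Example card for a hypothetical drainage-optimization feature.
card = TransparencyCard(
    feature_name="drainage-optimizer",
    model_type="gradient-boosted regression",
    training_data="historical site surveys (anonymized)",
    accuracy=0.91,
    known_biases=["under-represents arid-climate sites"],
)
print(card.render())
```

Because every card exposes the same fields, the format could serve as the de-facto disclosure standard Mike anticipates: users compare AI features the way shoppers compare labels.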
Argument 5
Accelerated, safe infrastructure development with broad public trust and engagement (Mike Haley)
EXPLANATION
Mike envisions AI enabling faster infrastructure construction without compromising safety, while ensuring engineers and the public feel confident and engaged with the technology.
EVIDENCE
He states that in a few years infrastructure should be built faster than ever, without a black-box risk, and that the public, engineers, and officials should feel secure, engaged, and trust the AI-enabled processes (lines [345-353]).
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Building public trust through AI safety institutes and transparent evaluation is highlighted in the AI safety monitoring source [S18].
MAJOR DISCUSSION POINT
Future Vision and Success Metrics
Argument 6
Effective AI deployment for physical infrastructure requires a solid digital foundation, such as Building Information Modeling (BIM) and standardized data assets.
EXPLANATION
Mike argues that AI agents can only add value to infrastructure projects when accurate digital representations of assets exist; without BIM and consistent data, AI cannot be reliably applied.
EVIDENCE
He explains that “One of the really boring things but absolutely necessary… building information modelling… get standard data… if you get that in place now, all this AI goodness is way easier to deploy against it” (lines [238-244]).
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Foundational compute resources, heterogeneous computing, and the importance of standardized digital assets for AI deployment are discussed in the heterogeneous compute sources [S15][S4].
MAJOR DISCUSSION POINT
Infrastructure prerequisites for AI deployment
Srinivas Tallapragada
6 arguments · 171 words per minute · 1282 words · 449 seconds
Argument 1
Definition of an AI agent: role, knowledge, memory, actability, guardrails, trust layer (Srinivas Tallapragada)
EXPLANATION
Srinivas outlines the essential components of an AI agent: a defined role, knowledge base, short‑ and long‑term memory, ability to act via APIs across channels, and built‑in guardrails and a trust layer to mitigate hallucinations and bias.
EVIDENCE
He explains that an agent must know its role, possess knowledge, have short- and long-term memory, be able to act through APIs, operate across digital channels, and include guardrails and a trust layer to prevent hallucinations, bias, and toxicity (lines [133-144]).
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The need for guardrails, trust layers, and human oversight in AI agents is emphasized in the human-layer and regulatory sandbox discussions [S21][S23].
MAJOR DISCUSSION POINT
Evolution to Agentic AI
AGREED WITH
Minister Sridhar Babu, Saibal Chakraborty, Lee Tiedrich, Mike Haley
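The components Srinivas enumerates (a defined role, knowledge, short- and long-term memory, the ability to act, and a guardrail/trust layer) can be sketched as a minimal data structure. This is purely an illustrative sketch; the class and method names are hypothetical and do not come from any framework mentioned in the session.

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Agent:
    """Illustrative sketch of the agent components described in the session."""
    role: str                                   # the task the agent is scoped to
    knowledge: List[str]                        # knowledge base the agent draws on
    short_term_memory: List[str] = field(default_factory=list)
    long_term_memory: List[str] = field(default_factory=list)
    guardrails: List[Callable[[str], bool]] = field(default_factory=list)

    def act(self, action: str) -> str:
        # The "trust layer": an action runs only if every guardrail check passes.
        if all(check(action) for check in self.guardrails):
            self.short_term_memory.append(action)   # remember what was done
            return f"executed: {action}"
        return f"blocked by guardrail: {action}"

# Example in the spirit of the "Bobby" deployment: a non-emergency assistant
# whose guardrail refuses anything involving emergencies.
bobby = Agent(
    role="non-emergency citizen queries",
    knowledge=["city bylaws", "station opening hours"],
    guardrails=[lambda a: "emergency" not in a],
)
print(bobby.act("answer question about parking fines"))  # executed
print(bobby.act("dispatch emergency response"))          # blocked
```

The point of the sketch is structural: acting is gated by the guardrail layer rather than being a free capability, which mirrors the panel's insistence that actability and trust mechanisms ship together.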
Argument 2
Command‑center architecture, auditability, and a trust layer for government AI deployments (Srinivas Tallapragada)
EXPLANATION
Srinivas stresses the need for a central command‑center that provides auditability, testing, and independent verification, establishing a trust layer that gives governments confidence in AI agents.
EVIDENCE
He mentions building a command centre where everything can be audited, tested, and even independently verified, which is required to make AI deployments a reality for customers or governments (lines [214-215]).
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Command-center auditability and trust mechanisms align with infrastructure provisioning and AI safety institute concepts [S15][S18].
MAJOR DISCUSSION POINT
Governance, Trust, and Guardrails
AGREED WITH
Saibal Chakraborty, Lee Tiedrich, Mike Haley
DISAGREED WITH
Saibal Chakraborty, Mike Haley, Lee Tiedrich
Argument 3
AI agents supporting police operations and disaster response (Bobby/Terry) (Srinivas Tallapragada)
EXPLANATION
Srinivas shares examples of AI agents deployed in public safety: “Bobby” in a UK city handling non‑emergency citizen queries, and “Terry” in Tasmania assisting over a thousand police officers in the field, illustrating AI’s role in disaster response and policing.
EVIDENCE
He describes the UK city’s agent “Bobby” answering over 90 % of non-emergency citizen questions, and Tasmania’s agent “Terry” supporting more than a thousand police officers with on-field queries (lines [185-188][189-191]).
MAJOR DISCUSSION POINT
Applications of AI Agents in the Public Sector
AGREED WITH
Minister Sridhar Babu, Mike Haley, Saibal Chakraborty
Argument 4
Distinction between strategic sovereignty (data and policy control) and technical sovereignty (full supply‑chain control) (Srinivas Tallapragada)
EXPLANATION
Srinivas differentiates strategic sovereignty—control over data, governance, and policy—which can be achieved now, from technical sovereignty—complete control over the hardware and supply chain—which requires longer‑term investment.
EVIDENCE
He explains that strategic sovereignty involves data and policy control that can be implemented immediately, while technical sovereignty concerns full supply-chain control from chips onward, urging governments to pursue both tracks (lines [246-251]).
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Strategic sovereignty through data platforms and sovereign AI centres is described in the Agents of Change discussion [S5][S15].
MAJOR DISCUSSION POINT
Sovereignty and Data Strategy
DISAGREED WITH
Lee Tiedrich, Minister Sridhar Babu
Argument 5
Advocacy for agile, feedback‑driven regulatory frameworks that can evolve with rapid AI advances (Srinivas Tallapragada)
EXPLANATION
Srinivas argues for regulatory frameworks that can be updated continuously through feedback loops, similar to agile product development, allowing policies to keep pace with fast‑moving AI technology.
EVIDENCE
He calls for policy frameworks that can be updated as new data emerges, likening it to product feedback loops and emphasizing the need for agile regulation (lines [317-325]).
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Agile regulation, feedback loops, and sandbox approaches are highlighted in the regulatory sandbox and AI safety monitoring sources [S23][S18].
MAJOR DISCUSSION POINT
Standards, Regulation, and Agile Policy
DISAGREED WITH
Lee Tiedrich, Saibal Chakraborty, Minister Sridhar Babu
Argument 6
Tangible uplift in income for the bottom 50 % of the population as a key impact indicator (Srinivas Tallapragada)
EXPLANATION
Srinivas proposes that a measurable increase in per‑capita income for the lowest half of the population within three years would be a concrete indicator of AI’s societal impact.
EVIDENCE
He states his hope that in three years the bottom 50 % income percentile will show measurable per-capita income growth, viewing this as the real impact of the technology (lines [355-356]).
MAJOR DISCUSSION POINT
Future Vision and Success Metrics
Victoria Espinel
1 argument · 155 words per minute · 1001 words · 387 seconds
Argument 1
The private sector needs clear, predictable, and consistent government expectations and regulatory guidance for AI agents.
EXPLANATION
Victoria emphasizes that companies developing AI solutions require certainty about what governments will require, so they can align product design, compliance, and deployment with public‑sector needs.
EVIDENCE
She states, “I think one of the things that I think is really important… industry is looking for clarity and predictability… it’s helpful to know what those government expectations are” (lines [281-288]).
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The need for clear, predictable regulatory guidance for industry is noted in the heterogeneous compute discussion and regulatory sandbox literature [S15][S23].
MAJOR DISCUSSION POINT
Standards, Regulation, and Agile Policy
Agreements
Agreement Points
Broad consensus that AI is moving from narrow, task‑specific tools to agentic, systems‑level agents that can act autonomously and serve as public infrastructure.
Speakers: Minister Sridhar Babu, Saibal Chakraborty, Lee Tiedrich, Mike Haley, Srinivas Tallapragada
Evolution to agentic AI as public infrastructure and co‑governor (Minister Sridhar Babu)
Shift to agentic AI enabling end‑to‑end execution of processes (Saibal Chakraborty)
Emergence of agentic AI that can act on behalf of people (Lee Tiedrich)
Transition from task‑specific to systems‑level AI agents (Mike Haley)
Definition of an AI agent: role, knowledge, memory, actability, guardrails, trust layer (Srinivas Tallapragada)
All speakers describe a paradigm shift from generative, query-based AI to agentic AI that can act, orchestrate workflows and function as a layer of public infrastructure. The minister notes “we are moving beyond generative AI … we are moving from them to agentic AI that acts now” and that “the search bar is dying” [21-24][33-35]; Saibal says the conversation has moved “decisively towards agentic AI … end-to-end AI-led execution” [110-113]; Lee points to “the emergence of agentic AI” as the biggest change [115-117]; Mike observes the move “from task specific to systems level” [119-122]; Srinivas stresses that “an agent … acts” as a core property [133-136].
POLICY CONTEXT (KNOWLEDGE BASE)
This view aligns with calls for practical standards for agentic AI and recognition of AI as critical public-service infrastructure, as highlighted in recent standards discussions and policy briefs [S55][S56][S53].
Universal agreement on the need for robust guardrails, transparency mechanisms and human‑in‑the‑loop control to build trust in AI agents.
Speakers: Saibal Chakraborty, Lee Tiedrich, Mike Haley, Srinivas Tallapragada
AI‑generated RFPs and other public‑procurement tasks (Saibal Chakraborty)
Upskilling public‑sector staff to understand AI limits and maintain human oversight (Saibal Chakraborty)
Guardrails to prevent over‑reliance, manage interoperability and liability of third‑party agents (Lee Tiedrich)
Transparency “nutrition‑label” cards, human‑in‑the‑loop control, and acknowledging probabilistic nature (Mike Haley)
Industry‑led transparency standards (e.g., AI feature “nutrition‑label” cards) as proactive compliance (Mike Haley)
Command‑center architecture, auditability, and a trust layer for government AI deployments (Srinivas Tallapragada)
All four speakers stress that AI agents must be bounded by guardrails and transparent reporting to be trustworthy. Saibal raises guardrails around high-stakes RFP generation and stresses upskilling staff to recognise AI limits [148-152][253-254]; Lee warns of “over-reliance” and calls for guardrails on interoperability, liability and testing [190-203]; Mike highlights the probabilistic nature of AI, the need for human control and introduces “nutrition-label” transparency cards that disclose model, data, accuracy and bias [217-224][301-304]; Srinivas describes a command-center that provides auditability and independent testing as a trust layer [214-215].
POLICY CONTEXT (KNOWLEDGE BASE)
The emphasis on guardrails and human oversight reflects policy recommendations that go beyond regulation to include market-driven safeguards and warnings against overreliance on algorithms [S36][S47][S51].
Shared emphasis on capacity development and upskilling of public‑sector actors (and local communities) to realise AI benefits.
Speakers: Saibal Chakraborty, Minister Sridhar Babu, Mike Haley
Upskilling public‑sector staff to understand AI limits and maintain human oversight (Saibal Chakraborty)
AI for flood prediction, agricultural advisory, climate‑resilient services (Minister Sridhar Babu)
Effective AI deployment for physical infrastructure requires a solid digital foundation, such as Building Information Modeling (BIM) and standardized data assets. (Mike Haley)
The need to build skills and digital foundations is repeatedly highlighted. Saibal notes that “the biggest guardrail … is upskilling … so they know what can be trusted” [253-254]; the minister describes training farmers to “train the system with us” and incorporating local dialects and knowledge [48-53]; Mike stresses that AI can only add value once “building information modelling … standard data” are in place [238-244].
POLICY CONTEXT (KNOWLEDGE BASE)
Capacity gaps and the need for leadership and competence in government agencies have been repeatedly noted in OECD and other policy forums, underscoring the importance of upskilling for trustworthy AI deployment [S38][S56][S40].
Consensus that AI agents can deliver concrete public‑sector services such as flood prediction, agricultural advice, health risk forecasting and infrastructure optimisation.
Speakers: Minister Sridhar Babu, Mike Haley, Srinivas Tallapragada, Saibal Chakraborty
AI for flood prediction, agricultural advisory, climate‑resilient services (Minister Sridhar Babu)
AI agents assisting infrastructure design, floodplain analysis, and drainage optimization (Mike Haley)
AI agents supporting police operations and disaster response (Bobby/Terry) (Srinivas Tallapragada)
AI‑generated RFPs and other public‑procurement tasks (Saibal Chakraborty)
All speakers cite real-world use cases. The minister details AI-driven flood forecasting, farmer advisors and health risk prediction [45-47][48-53][80-84]; Mike describes agents that “analyze floodplains … optimise drainage” [169-173]; Srinivas shares deployments of agents “Bobby” and “Terry” for police queries and disaster response [185-188][189-191]; Saibal imagines agents drafting multi-million-dollar RFPs to streamline procurement [148-152].
POLICY CONTEXT (KNOWLEDGE BASE)
Examples of AI-driven flood prediction, climate-resilient services, and health risk tools have been cited in sector-specific policy reports, confirming the practical service potential of AI agents [S35][S57][S39].
Agreement on the need for standardized, globally‑coordinated evaluation frameworks and industry‑government alignment on transparency and compliance.
Speakers: Lee Tiedrich, Mike Haley, Victoria Espinel
Call for global, multi‑disciplinary standards and a shared evaluation ecosystem, with localized adaptations (Lee Tiedrich)
Industry‑led transparency standards (e.g., AI feature “nutrition‑label” cards) as proactive compliance (Mike Haley)
The private sector needs clear, predictable, and consistent government expectations and regulatory guidance for AI agents. (Victoria Espinel)
Lee urges “global, multi-disciplinary standards” and a shared evaluation ecosystem [260-279]; Mike notes that his company now embeds “transparency cards” that could become a de-facto standard [301-304]; Victoria stresses that “industry is looking for clarity and predictability … it’s helpful to know what those government expectations are” [281-288].
POLICY CONTEXT (KNOWLEDGE BASE)
International multistakeholder efforts to develop AI standards and coordinated evaluation frameworks are documented in recent IGF and standards-body discussions, highlighting the push for global alignment [S53][S55][S41].
Similar Viewpoints
Both highlight that high‑stakes government uses of AI (e.g., drafting RFPs) require strong guardrails, testing, and clear liability frameworks to avoid adverse outcomes [148-152][190-203].
Speakers: Saibal Chakraborty, Lee Tiedrich
AI‑generated RFPs and other public‑procurement tasks (Saibal Chakraborty)
Guardrails to prevent over‑reliance, manage interoperability and liability of third‑party agents (Lee Tiedrich)
Both stress that industry is already building transparency mechanisms and that governments should provide clear, predictable expectations to align with these emerging standards [301-304][281-288].
Speakers: Mike Haley, Victoria Espinel
Industry‑led transparency standards (e.g., AI feature “nutrition‑label” cards) as proactive compliance (Mike Haley)
The private sector needs clear, predictable, and consistent government expectations and regulatory guidance for AI agents. (Victoria Espinel)
Both discuss data sovereignty: the minister announces a state‑run open data platform and AI nerve centre, while Srinivas differentiates strategic sovereignty (control over data and policies) as an immediate step toward full technical sovereignty [72-79][246-251].
Speakers: Minister Sridhar Babu, Srinivas Tallapragada
Creation of a sovereign AI nerve centre and an open data‑exchange platform for Telangana (Minister Sridhar Babu)
Distinction between strategic sovereignty (data and policy control) and technical sovereignty (full supply‑chain control) (Srinivas Tallapragada)
Unexpected Consensus
Both public‑sector and private‑sector speakers propose concrete, measurable socioeconomic impact metrics focused on the most vulnerable populations.
Speakers: Saibal Chakraborty, Srinivas Tallapragada
Success measured by vernacular AI tools delivering practical advice to farmers nationwide (Saibal Chakraborty)
Tangible uplift in income for the bottom 50 % of the population as a key impact indicator (Srinivas Tallapragada)
While Saibal frames success as a farmer being able to converse in his/her own language with a small language model for advice, Srinivas envisions a measurable rise in per-capita income for the lowest half of the population. Both converge on the idea that the ultimate proof of AI’s value will be tangible benefits for marginalized groups, a point not explicitly linked earlier in the discussion [334-335][355-356].
POLICY CONTEXT (KNOWLEDGE BASE)
The focus on impact metrics for vulnerable groups mirrors SDG-oriented policy guidance and recent analyses of AI’s role in inclusive development and health outcomes [S48][S42].
Overall Assessment

The panel shows strong, cross‑sectoral consensus that AI is transitioning to an agentic paradigm, that robust guardrails, transparency and human oversight are essential, that capacity building and data sovereignty are prerequisites, and that concrete public‑sector applications and measurable impact metrics are the yardsticks of success.

High consensus – the alignment across government, academia and industry suggests a shared roadmap for deploying AI agents responsibly, which bodes well for coordinated policy, standards development and investment in the coming years.

Differences
Different Viewpoints
Extent of autonomy for AI agents in public‑sector tasks such as RFP generation
Speakers: Saibal Chakraborty, Mike Haley, Lee Tiedrich, Srinivas Tallapragada
AI‑generated RFPs and other public‑procurement tasks (Saibal Chakraborty)
Transparency “nutrition‑label” cards, human‑in‑the‑loop control, and acknowledging probabilistic nature (Mike Haley)
Guardrails to prevent over‑reliance, manage interoperability and liability of third‑party agents (Lee Tiedrich)
Command‑center architecture, auditability, and a trust layer for government AI deployments (Srinivas Tallapragada)
Saibal warns that fully autonomous AI agents drafting multi-million-dollar RFPs may be risky and suggests a final human check is needed [148-152][153-154]. Mike stresses that because AI is probabilistic, humans must retain control and be able to reassess results, proposing transparency cards as a guardrail [217-224]. Lee emphasizes the need for guardrails, testing, and liability frameworks when agents act on behalf of governments [190-203]. Srinivas adds that a central command-center with auditability and independent testing is required to build confidence [214-215]. The speakers therefore disagree on how much autonomy can be granted without compromising trust.
POLICY CONTEXT (KNOWLEDGE BASE)
Debate over AI autonomy versus human control is reflected in policy cautions about overreliance and discussions of governance structures for AI-driven decision-making [S47][S50][S55].
Approach to standards and regulation for AI agents
Speakers: Lee Tiedrich, Saibal Chakraborty, Srinivas Tallapragada, Minister Sridhar Babu
Call for global, multi‑disciplinary standards and a shared evaluation ecosystem, with localized adaptations (Lee Tiedrich)
Upskilling public‑sector staff to understand AI limits and maintain human oversight (Saibal Chakraborty)
Advocacy for agile, feedback‑driven regulatory frameworks that can evolve with rapid AI advances (Srinivas Tallapragada)
Creation of a sovereign AI nerve centre and an open data‑exchange platform for Telangana (Minister Sridhar Babu)
Lee calls for worldwide, multi-disciplinary standards and a shared evaluation ecosystem that can be localized [260-279]. Saibal argues that the biggest guardrail is upskilling public-sector staff to understand AI’s probabilistic nature [253-254]. Srinivas proposes agile, feedback-driven regulation that can be updated as technology evolves [317-325]. The Minister promotes a state-run sovereign AI nerve centre and open data platform as the primary governance mechanism [72-79]. These positions reflect a disagreement on whether global standards, capacity-building, agile policy, or state-centric infrastructure should lead AI governance.
POLICY CONTEXT (KNOWLEDGE BASE)
Differing views on regulatory philosophy and standards development are evident in recent European tech-sovereignty debates and multistakeholder standard-setting processes [S55][S53][S45].
Strategic sovereignty and data openness versus global sharing and evaluation institutes
Speakers: Srinivas Tallapragada, Lee Tiedrich, Minister Sridhar Babu
Distinction between strategic sovereignty (data and policy control) and technical sovereignty (full supply‑chain control) (Srinivas Tallapragada)
Establishment of active AI safety evaluation institutes sharing techniques globally (Lee Tiedrich)
Creation of a sovereign AI nerve centre and an open data‑exchange platform for Telangana (Minister Sridhar Babu)
Srinivas differentiates strategic sovereignty (control over data and policy, achievable now) from technical sovereignty (full hardware supply-chain control, requiring longer-term investment) [246-251]. Lee envisions active AI safety institutes that exchange evaluation techniques worldwide, promoting shared standards and localization [339-343]. The Minister highlights Telangana’s sovereign AI nerve centre and an open data-exchange platform that makes thousands of datasets publicly available [72-79][74-77]. The speakers disagree on the balance between national data sovereignty and participation in global collaborative evaluation frameworks.
POLICY CONTEXT (KNOWLEDGE BASE)
The tension between national data sovereignty and global collaboration has been a recurring theme in AI policy forums, especially in discussions on digital public infrastructure and strategic autonomy [S43][S44][S45][S49].
AI as a co‑governor versus AI as a tool requiring human oversight
Speakers: Minister Sridhar Babu, Mike Haley, Lee Tiedrich
Creation of a sovereign AI nerve centre and an open data‑exchange platform for Telangana (Minister Sridhar Babu)
Transparency “nutrition‑label” cards, human‑in‑the‑loop control, and acknowledging probabilistic nature (Mike Haley)
Guardrails to prevent over‑reliance, manage interoperability and liability of third‑party agents (Lee Tiedrich)
The Minister describes AI agents as “co-governors” that can predict floods and allocate resources before citizens request services, positioning AI as an autonomous partner in governance [45-47]. Mike argues that AI must include transparency cards and human-in-the-loop control because AI systems are inherently probabilistic and need oversight [217-224]. Lee stresses the necessity of guardrails, testing, and human involvement to avoid over-reliance on AI agents [190-203]. This reflects a disagreement on whether AI should function as an autonomous co-governor or remain a tool under human supervision.
POLICY CONTEXT (KNOWLEDGE BASE)
Policy literature highlights the trade-off between AI-driven governance functions and the need for human oversight to prevent bias and maintain accountability [S47][S50][S51].
Unexpected Differences
Confidence in AI’s autonomous predictive capability versus caution about over‑reliance
Speakers: Minister Sridhar Babu, Saibal Chakraborty
Creation of a sovereign AI nerve centre and an open data‑exchange platform for Telangana (Minister Sridhar Babu)
AI‑generated RFPs and other public‑procurement tasks (Saibal Chakraborty)
The Minister confidently claims that AI agents can predict floods before clouds gather and allocate resources proactively, portraying AI as a reliable co-governor [45-47]. Saibal, however, cautions that granting AI full autonomy, such as drafting multi-million-dollar RFPs, carries high stakes and insists on a final human check [148-152][153-154]. The stark contrast between the Minister’s optimism about autonomous AI and Saibal’s precautionary stance was not anticipated.
POLICY CONTEXT (KNOWLEDGE BASE)
Cautious stances stem from documented incidents of AI inaccuracy and broader warnings about overdependence on algorithmic predictions in critical domains [S47][S52][S48].
Industry’s proactive transparency standards versus public‑sector emphasis on upskilling and evolving standards
Speakers: Mike Haley, Saibal Chakraborty
Transparency “nutrition‑label” cards, human‑in‑the‑loop control, and acknowledging probabilistic nature (Mike Haley)
Upskilling public‑sector staff to understand AI limits and maintain human oversight (Saibal Chakraborty)
Mike describes a proactive industry approach where every AI feature includes a standardized transparency card that discloses model, data, accuracy, and bias, positioning this as a de-facto standard [301-304]. Saibal argues that the biggest guardrail is upskilling public-sector users to understand AI’s probabilistic nature and know when additional human checks are required [253-254]. This divergence, with industry setting its own standards while the public sector focuses on capacity building, was not anticipated.
POLICY CONTEXT (KNOWLEDGE BASE)
The contrast between industry-led transparency initiatives and public-sector calls for capacity building reflects ongoing discussions about the balance of private-sector agility and governmental responsibility for standards evolution [S36][S56][S55][S41].
Overall Assessment

The panel shows substantial disagreement on how much autonomy AI agents should have in government functions, the optimal path to standards and regulation, the balance between national data sovereignty and global collaboration, and whether AI should act as a co‑governor or remain a tool under human oversight. While all participants agree on the need for trustworthy, beneficial AI, they diverge on mechanisms—ranging from global standards, industry transparency cards, upskilling, agile regulation, to sovereign data platforms.

Moderate to high disagreement, reflecting differing priorities (innovation speed, national sovereignty, risk management) that could slow consensus on policy and implementation unless coordinated frameworks are established.

Partial Agreements
All speakers agree that AI must be deployed safely and with public trust. Lee proposes global standards and evaluation mechanisms [260-279]; Mike suggests industry‑driven transparency cards to inform users about model, data, and bias [301-304]; Saibal calls for extensive upskilling of public‑sector personnel to recognize AI limits [253-254]; Srinivas recommends agile, feedback‑driven regulation that can be updated as technology evolves [317-325]. The disagreement lies in the primary mechanism to achieve the shared goal.
Speakers: Lee Tiedrich, Mike Haley, Saibal Chakraborty, Srinivas Tallapragada
Call for global, multi‑disciplinary standards and a shared evaluation ecosystem, with localized adaptations (Lee Tiedrich)
Transparency “nutrition‑label” cards, human‑in‑the‑loop control, and acknowledging probabilistic nature (Mike Haley)
Upskilling public‑sector staff to understand AI limits and maintain human oversight (Saibal Chakraborty)
Advocacy for agile, feedback‑driven regulatory frameworks that can evolve with rapid AI advances (Srinivas Tallapragada)
Both speakers aim to improve agricultural outcomes for farmers through AI. The Minister describes state‑run AI advisors trained with farmers’ dialects and lived knowledge to provide timely advice [48-53][58-62]. Saibal envisions a scalable vernacular language model that gives practical crop and cattle advice to farmers across India [334-335]. They agree on the goal of farmer empowerment but differ on the implementation pathway—state‑driven platforms versus scalable language‑model services.
Speakers: Minister Sridhar Babu, Saibal Chakraborty
Creation of a sovereign AI nerve centre and an open data‑exchange platform for Telangana (Minister Sridhar Babu)
Success measured by vernacular AI tools delivering practical advice to farmers nationwide (Saibal Chakraborty)
Takeaways
Key takeaways
AI is moving from narrow, query‑based tools to agentic systems that can act autonomously and execute end‑to‑end processes, positioning AI as a form of public infrastructure and a co‑governor.
An AI agent must have a defined role, knowledge, memory (short‑ and long‑term), actability (via APIs or channels), and built‑in guardrails and a trust layer to be reliable for government use.
Practical public‑sector applications are already being piloted: flood prediction, agricultural advisories, climate‑resilient services, infrastructure design (e.g., drainage optimisation), police assistance (Bobby/Terry agents), and automated procurement tasks such as RFP generation.
Governance requires a command‑center architecture, auditability, transparency, and human‑in‑the‑loop controls to mitigate hallucinations, bias, and other risks inherent in probabilistic AI systems.
Sovereignty is framed in two layers: strategic (control of data, policies, and operational rules) and technical (full control of the hardware and supply chain). Telangana’s sovereign AI nerve centre and open data‑exchange platform exemplify the strategic approach.
Standardisation and regulation must be multi‑disciplinary, globally coordinated yet locally adaptable; an evaluation ecosystem and agile, feedback‑driven policy frameworks are essential to keep pace with rapid AI advances.
Industry can help by pre‑emptively providing transparency “nutrition‑label” cards for AI features and by collaborating with governments on guardrails and standards.
Success metrics envisioned include vernacular AI tools delivering actionable advice to farmers, active AI safety evaluation institutes sharing methods globally, faster yet safe infrastructure delivery, and measurable income uplift for the bottom 50 % of the population.
Resolutions and action items
Create/strengthen a government‑level AI command centre with auditability and a trust layer (as advocated by Sridhar Babu and Srinivas Tallapragada).
Deploy pilot AI advisors with farmers to embed local dialects and lived knowledge into models (Telangana initiative).
Implement the Telangana sovereign AI nerve centre (ICOM) and open data‑exchange platform to ensure strategic data sovereignty.
Adopt industry‑led transparency cards (nutrition‑label style) for all AI‑enabled features to satisfy emerging regulatory expectations.
Upskill public‑sector staff at district, state, and national levels on AI limits, trustworthiness, and human‑in‑the‑loop oversight (Saibal Chakraborty).
Establish an agile regulatory feedback loop that allows standards and policies to be updated iteratively as AI capabilities evolve (Srinivas Tallapragada).
Encourage global collaboration among standards bodies, AI safety institutes, and governments to build a shared evaluation ecosystem with localized adaptations (Lee Tiedrich).
Ensure digital infrastructure (e.g., BIM, standardized data models) is in place before scaling AI agents for large‑scale infrastructure projects (Mike Haley).
Unresolved issues
Defining the precise level of human oversight required for high‑stakes autonomous actions such as AI‑generated RFPs or disaster‑response decisions.
Establishing liability frameworks and interoperability standards for third‑party agents that interact within government systems.
Balancing strategic data sovereignty with the longer‑term goal of full technical sovereignty without stalling immediate AI benefits.
Creating inclusive vernacular AI tools for millions of farmers across diverse languages and ensuring scalability.
Harmonising divergent regulatory philosophies (e.g., U.S. deregulatory vs. EU precautionary approaches) into workable standards for the Global South.
Developing concrete, universally accepted metrics for AI‑driven income uplift and other socio‑economic outcomes.
Suggested compromises
Adopt a human‑in‑the‑loop model for critical tasks while progressively increasing agent autonomy as trust and guardrails mature.
Pursue a two‑track sovereignty strategy: implement immediate strategic data‑control measures while planning a longer‑term technical sovereignty roadmap.
Apply a crawl‑walk‑run deployment approach: start with simple, low‑risk AI agents, iterate, and scale to more complex, high‑impact use cases.
Use agile, feedback‑driven regulatory frameworks that allow standards to be revised quickly rather than requiring perfect rules at launch.
Provide transparency cards as a middle ground between industry self‑disclosure and regulator‑mandated documentation.
Thought Provoking Comments
We are moving beyond generative AI that simply answers. We are moving from them to agentic AI that acts now… AI as a co‑governor that can predict floods, advise farmers, and serve as a public infrastructure.
Sets a bold, systemic vision of AI not just as a tool but as an integral part of governance and public infrastructure, framing AI as a sovereign, societal asset.
Established the overarching theme of the panel, prompting other speakers to discuss concrete implementations (e.g., flood prediction, farmer advisors) and to consider governance, sovereignty, and trust layers.
Speaker: Minister Sridhar Babu
The conversation has moved decisively towards agentic AI. We are no longer talking about solving discrete problems or discrete searches; we are now looking at end‑to‑end AI‑led execution of business processes or government processes.
Clearly articulates the paradigm shift from narrow, query‑based AI to autonomous agents that can execute whole workflows, sharpening the focus of the discussion.
Shifted the dialogue from abstract hype to practical concerns about workflow automation, leading to deeper talks on guardrails, human‑in‑the‑loop, and policy implications.
Speaker: Saibal Chakraborty
An agent must have a role, knowledge (short‑term and long‑term memory), the ability to act via APIs, and a trust layer with guardrails; otherwise it can hallucinate, be biased, or become unpredictable.
Provides a concise, technical taxonomy of what constitutes a trustworthy AI agent, moving the conversation from buzzwords to concrete design requirements.
Prompted the panel to explore how to embed guardrails, auditability, and transparency into deployments, influencing later comments on upskilling, transparency cards, and agile regulation.
Speaker: Srinivas Tallapragada
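The four components named above (a role, short‑ and long‑term memory, API‑mediated actions, and a trust layer of guardrails) can be illustrated in code. The following is a minimal sketch only; all names (`Agent`, `act`, the procurement example) are hypothetical and not drawn from any system discussed in the session.

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    # Hypothetical sketch of the four components named in the quote.
    role: str                                        # what the agent is authorised to do
    short_term: list = field(default_factory=list)   # working memory for the current task
    long_term: dict = field(default_factory=dict)    # persistent knowledge store
    guardrails: list = field(default_factory=list)   # trust layer: predicates an action must pass

    def remember(self, key, value):
        self.long_term[key] = value

    def act(self, action, call_api):
        # Trust layer: every guardrail predicate must approve the action
        # before the agent is allowed to call out via an API.
        for check in self.guardrails:
            if not check(action):
                return f"blocked: {action}"
        self.short_term.append(action)
        return call_api(action)

# Usage: a guardrail restricts the agent to actions within its role.
agent = Agent(role="procurement-drafting",
              guardrails=[lambda a: a.startswith("draft")])
result = agent.act("draft_rfp", call_api=lambda a: f"done: {a}")       # → "done: draft_rfp"
blocked = agent.act("sign_contract", call_api=lambda a: f"done: {a}")  # → "blocked: sign_contract"
```

The point of the sketch is structural: without the guardrail predicates, the `act` method would forward any request to an external API, which is exactly the hallucination and unpredictability risk the speaker warns about.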
Guardrails are not a perfect shield; AI systems are inherently probabilistic. Trust comes from transparency and giving humans the ability to intervene, not from guaranteeing flawless outputs.
Challenges the misconception that safety can be achieved solely through static controls, emphasizing human agency and iterative feedback as core to trust.
Reoriented the discussion toward dynamic governance models, supporting Lee’s points on sandboxes and Saibal’s call for upskilling operators.
Speaker: Mike Haley
Strategic sovereignty (control over data, policies, and human‑in‑the‑loop) can be achieved now, while technical sovereignty (full control of the supply chain) is a longer‑term goal. Don’t let the second track stall the first.
Introduces a nuanced two‑track approach to AI sovereignty, separating immediate, actionable steps from longer‑term ambitions, which is especially relevant for developing nations.
Guided the conversation toward actionable policy steps, influencing later remarks on agile regulation and the need for flexible standards.
Speaker: Srinivas Tallapragada
One of the biggest guardrails beyond policies is upskilling the people who actually use the tools. They need to know what can be trusted and where a human check is required.
Highlights the human capacity gap as a critical risk factor, shifting focus from technical safeguards to workforce development.
Led to consensus on the importance of training, reinforced by later comments on transparency cards and the need for a feedback loop between users and systems.
Speaker: Saibal Chakraborty
We’ve introduced a ‘transparency card’—a nutrition‑label‑style disclosure of model type, training data, accuracy, and known biases—for every AI feature. This is becoming a de facto standard.
Offers a concrete, industry‑level solution to the demand for explainability and regulatory clarity, bridging the gap between tech providers and governments.
Provided a tangible example of how companies can pre‑empt regulation, influencing the discussion on how governments can set expectations and how industry can meet them.
Speaker: Mike Haley
Over‑reliance should be mitigated by picking use‑cases wisely, using sandboxes for immature applications, and thinking through interoperability and liability when agents call third‑party agents.
Integrates safety, risk management, and ecosystem considerations into a single framework, deepening the conversation about systemic risks.
Steered the panel toward a more holistic view of AI governance, prompting further dialogue on evaluation standards and the need for international coordination.
Speaker: Lee Tiedrich
Regulation should be agile—policy frameworks must allow standards to be updated as the technology evolves, mirroring product feedback loops in engineering.
Proposes a dynamic regulatory model that aligns with the rapid pace of AI development, challenging static, one‑off legislative approaches.
Shifted the tone from fear of lagging regulation to proactive, iterative policy design, resonating with earlier calls for guardrails and upskilling.
Speaker: Srinivas Tallapragada
Success will be measured when the bottom 50 percent by income sees a measurable rise in per‑capita income, indicating AI’s impact on inclusive economic growth.
Sets a concrete, equity‑focused metric for AI’s societal benefit, moving the conversation from technical success to real‑world outcomes.
Closed the panel with a clear, human‑centered goal, reinforcing earlier themes of inclusivity (Saibal’s farmer example) and prompting the audience to think about impact measurement.
Speaker: Srinivas Tallapragada (final question)
Overall Assessment

The discussion pivoted around a central narrative introduced by the Minister: AI as a public‑infrastructure co‑governor. Subsequent comments from Saibal, Srinivas, Lee, and Mike each added layers—defining the shift to agentic AI, outlining the technical components of trustworthy agents, emphasizing human oversight, and proposing concrete governance mechanisms (upskilling, transparency cards, agile regulation). These insights created a cascade: the vision sparked practical design concerns, which then generated solutions and policy frameworks, culminating in a shared metric of inclusive prosperity. Collectively, the highlighted remarks steered the panel from high‑level optimism to actionable, equity‑oriented pathways for deploying AI agents in government.

Follow-up Questions
What guardrails should be put around AI agents that generate public procurement documents, and to what extent should human oversight remain?
High‑stakes government contracts require safeguards to prevent errors, ensure accountability, and maintain public trust.
Speaker: Saibal Chakraborty
How can interoperability between multiple AI agents be managed, including liability allocation and testing of composed systems?
As agents call on third‑party agents, clear mechanisms are needed to ensure safe interactions and assign responsibility.
Speaker: Lee Tiedrich
What digital infrastructure (e.g., BIM, standard data models) is required before AI agents can be effectively applied to physical infrastructure design?
Accurate digital representations are essential for AI tools to reliably design, analyze, and manage infrastructure projects.
Speaker: Mike Haley
What design of a command‑center or audit framework is needed to ensure transparency, testing, and confidence in government‑deployed AI agents?
A governance layer that allows independent testing and auditing builds trust for officials and citizens.
Speaker: Srinivas Tallapragada
What upskilling programs are needed for government officials at district and state levels to understand AI trustworthiness and appropriate human‑in‑the‑loop?
Non‑technical decision‑makers must be able to interpret AI outputs and know when human intervention is required to avoid misuse.
Speaker: Saibal Chakraborty
How can global AI safety and performance standards be developed that are adaptable to local regulatory contexts?
Consistent benchmarks enable alignment across jurisdictions while allowing localization for cultural and legal differences.
Speaker: Lee Tiedrich
How should governments balance strategic data sovereignty (control over data and policies) with technical sovereignty (control over hardware supply chains) in AI adoption?
Balancing immediate benefits of data control with longer‑term national security concerns over hardware is crucial for sustainable AI strategy.
Speaker: Srinivas Tallapragada
What mechanisms can enable agile regulation that updates AI standards quickly as technology evolves?
Fixed standards risk becoming obsolete; agile regulatory processes allow timely adaptation while maintaining safety.
Speaker: Srinivas Tallapragada
What clear and predictable AI governance expectations should governments provide to industry to reduce compliance complexity?
Industry needs stable, transparent requirements to design compliant products and foster collaboration with regulators.
Speaker: Mike Haley
How can AI agents be designed to function as co‑governors for disaster response, providing early warnings and resource allocation?
AI could improve response times and save lives, but requires validation, trust frameworks, and integration with emergency services.
Speaker: Srinivas Tallapragada
What research is needed to assess the impact of vernacular language AI tools for farmers on agricultural productivity and inclusion?
Demonstrating AI’s benefit to smallholder farmers in local languages is a key measure of inclusive AI success.
Speaker: Saibal Chakraborty
How effective are AI‑driven health risk prediction systems, and how can they be integrated into public health workflows?
Early health risk detection could improve outcomes, but needs evaluation of accuracy, integration, and impact on services.
Speaker: Minister Sridhar Babu
What is the scalability and reliability of solar‑powered edge computing nodes for maintaining services during grid failures?
Ensuring critical services remain operational during power outages is vital for resilient governance.
Speaker: Minister Sridhar Babu
How can AI‑based climate zoning and urban cooling strategies be evaluated for achieving Hyderabad’s net‑zero objectives?
Assessing the effectiveness of AI‑informed green belts and cooling measures is essential for sustainable city planning.
Speaker: Minister Sridhar Babu

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.