From Innovation to Impact: Bringing AI to the Public

Session at a glance: summary, keypoints, and speakers overview

Summary

The panel discussed how artificial intelligence can drive India’s economic expansion and position the country as a global AI-dominant nation, noting that AI adoption will raise individual productivity, allowing small entrepreneurs to scale multiple businesses and thereby boost GDP growth [2-5][9-12][13]. Sharma emphasized that India must move beyond a services-only economy by creating its own foundation models in English and Hindi, asserting this is a non-negotiable national priority and that sufficient talent and resources exist to develop multiple domestic models to demonstrate Indian capability [34-38][44-49][53-55].


The conversation highlighted vertical AI applications, beginning with finance, where large language models can eliminate hidden biases in loan decisions and improve credit access for low-income users, and extending to agriculture, livestock health, and pollution monitoring, illustrating the breadth of sector-specific opportunities [98-104][108-119][85-89]. When asked whether banks and schools would become obsolete, Sharma responded that their core functions, credit provision and social learning, remain essential, though delivery will shift to AI-enhanced digital interfaces [132-151][152-160].


He warned that existing international models embed Western-centric data, creating bias, and therefore an Indian-trained model is required to preserve cultural and historical knowledge [199-210]. Sharma introduced the concept of “agent-first” interfaces, where AI agents communicate with each other and with services such as Uber, reducing the need for human log-ins [217-224]. He claimed AI is an inclusive technology that can narrow the rich-poor gap by giving everyone access to powerful tools in their native language [339-342].


While acknowledging risks, he argued that AI's low computational requirements and cloud-based delivery make it broadly safe and distributable, exemplified by a small "AI sound box" [353-368]. To ensure diffusion, Sharma said the focus should be on small merchants, providing them with AI advice on inventory and financing, thereby extending AI benefits beyond large enterprises [389-401]. Addressing concerns about education, he suggested that students, regardless of tier-3 or tier-4 backgrounds, should leverage AI to augment curiosity and acquire skills, making AI a leveler in the job market [505-515].


Overall, the panel concluded that building indigenous AI models and embedding them across sectors will catalyze inclusive growth and transform India’s economic and social landscape [34-38].


Keypoints


Major discussion points


AI as a catalyst for India's next economic leap – Sharma argues that embracing AI will boost productivity, enable small merchants to scale, and drive a "bull-case" scenario in which roughly $2 trillion is added to India's current $2.5-3.5 trillion economy over the next decade [1-5][9-12][13-21].


Building indigenous foundation models – He stresses that India must create its own large language models and foundation models to move up the value chain, eliminate cultural bias, and retain historical knowledge, citing the launch of Sarvam's model as a proof point and calling for many such models [30-38][34-49][50-56].


Sector-specific transformation and AI-first agents – The conversation details how AI can remove bias in finance, enable personalized wealth advice for low-income users, improve healthcare monitoring, and empower education and agriculture, all through "agent-first" interfaces that act on behalf of users [98-124][132-151][170-186][221-236].


Risks, bias, and the need for inclusive distribution – While optimistic, the panel acknowledges risks of bias in training data, the danger of over-centralising power, and the importance of regulatory sandboxes and low-cost access (e.g., AI sound-box) to keep AI a public good [339-350][353-368][389-401][214-216].


Guidance for individuals and small businesses – Sharma urges students, especially from tier-3/4 regions, to adopt an AI-first workflow, leverage curiosity, and use AI as a “super-power” to stay relevant; similarly, small merchants should receive AI tools tailored to their daily decisions [261-290][473-511].


Overall purpose / goal


The discussion aims to persuade stakeholders (policy makers, entrepreneurs, educators, and the broader public) that India must proactively develop its own AI capabilities, deploy them across key sectors, and ensure inclusive access so that AI becomes a lever for massive economic growth, social inclusion, and a shift from a services-only economy to a high-value, AI-driven one.


Overall tone


The tone is largely enthusiastic and visionary, celebrating AI’s potential and India’s “lucky moment” [12-13]. Mid-conversation it becomes cautiously reflective, addressing bias, regulation, and distribution challenges [339-350][353-368]. Toward the end it shifts to a motivational, advisory tone, offering concrete advice to students and small businesses and reaffirming confidence in AI as an equaliser [261-290][473-511]. The progression moves from optimism, through measured caution, back to an empowering call-to-action.


Speakers

Vijay Shekhar Sharma – Founder & CEO of Paytm; expertise in fintech, digital payments, AI, entrepreneurship and digital sovereignty. [S4][S5][S6]


Harinder Takhar – Speaker / panelist on AI and public impact; specific title not detailed in sources. [S7][S8]


Audience – General participants; includes individuals such as Professor Charu (public administration) and Dr. Nazar, representing varied expertise. [S1][S2][S3]


Additional speakers:


(none)


Full session report: comprehensive analysis and detailed insights

The panel opened by positioning artificial intelligence as the primary engine for India's next phase of economic expansion. Sharma notes that India's economy is currently ≈ 2.5-3.5 trillion USD and that AI could add roughly 2 trillion USD over the next 7-10 years in a bull-case scenario [1-5][9-12][13-21]. He describes this moment as "lucky" because the productivity boost from AI is essential for the country's growth trajectory [S8][S21][S59].


When Harinder Takhar asked whether India should also build its own chips and data centres, Sharma replied that the immediate priority is to develop indigenous foundation models; hardware can follow later [30-38].


Sharma then argued that building large-language models in English and Hindi is a national imperative for moving up the value chain and shedding the “services-only” identity [30-38][34-49]. He cited the recent launch of Sarvam’s foundation model as proof that Indian teams can deliver world-class systems [46-48]. Indigenous models, he said, can embed Indian cultural and historical knowledge and avoid the Western-centric bias of most international models [199-213][50-56].


Illustrating the scale of private commitment, Sharma mentioned Paytm's historic investment of ≈ ₹25,000 crore on the QR-code ecosystem, underscoring the willingness of Indian firms to fund large-scale AI infrastructure.


The discussion moved to concrete, sector-specific applications. In finance, Sharma showed how AI-driven loan screening can eliminate both known and hidden biases, offering a more equitable credit decision process [98-104]. He extended this to personalised wealth advice for low-income users: for example, an auto-rickshaw driver with ₹2-5 lakh of savings could receive AI-generated recommendations on fixed deposits, sovereign gold funds or index funds in his native language [108-119]. A personal health anecdote described how Sharma used AI to adjust his mother's medication schedule, demonstrating AI's potential in medical-dose timing. Additional vertical opportunities were identified in agriculture (real-time data for farmers), livestock health (early disease detection), and environmental monitoring (pollution tracking) [85-89][170-186].
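The auto-rickshaw example amounts to a small decision table over risk appetite and time horizon. The sketch below is purely illustrative: the `suggest_allocation` function, its thresholds, and the percentage splits are invented for this summary, not taken from the session, and a real AI advisor would wrap such logic in native-language dialogue.

```python
# Hypothetical rule table mapping a saver's corpus, horizon, and risk
# appetite to the instruments mentioned in the session: fixed deposits,
# sovereign gold, and index funds. All numbers are illustrative.

def suggest_allocation(savings_inr: int, horizon_years: int, risk: str) -> dict:
    """Return a percentage split across FD, sovereign gold, and index funds."""
    if savings_inr < 50_000 or horizon_years < 3 or risk == "low":
        # Small corpus, short horizon, or low risk: favour capital preservation.
        return {"fixed_deposit": 70, "sovereign_gold": 20, "index_fund": 10}
    if horizon_years < 10:
        return {"fixed_deposit": 40, "sovereign_gold": 25, "index_fund": 35}
    # 15-year money (house, children's education): lean on index funds.
    return {"fixed_deposit": 15, "sovereign_gold": 25, "index_fund": 60}

# The driver from the example: ₹3 lakh saved for a 15-year goal.
plan = suggest_allocation(savings_inr=300_000, horizon_years=15, risk="medium")
print(plan)
```

The value a language model adds on top of a table like this is the conversation itself: asking about the goal, explaining the trade-offs, and answering follow-ups in the user's own language.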


Sharma introduced the "agent-first" paradigm as the future of digital services. He likened AI agents to engines that can power a range of vehicles, and explained that agents will converse directly with each other, for instance an Uber agent negotiating rides, rather than requiring a human to log in each time [217-224][221-236]. He urged developers to move away from icon-based designs toward non-icon, agent-centric applications, arguing that this will unlock new business models [255-259].
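The agent-to-agent exchange can be pictured as two programs trading structured messages with no human login in the loop. Everything below is hypothetical: the `UserAgent` and `RideAgent` classes, the message fields, and the fixed fare are invented, and a real deployment would speak an actual agent protocol to a real ride-hailing API.

```python
# Minimal sketch of an "agent-first" interaction: the user's agent asks a
# provider's agent for a quote and books it only if it fits the budget.

class UserAgent:
    def __init__(self, max_fare: int):
        self.max_fare = max_fare

    def request_ride(self, provider, pickup: str, drop: str) -> dict:
        offer = provider.quote(pickup, drop)
        if offer["fare"] <= self.max_fare:
            return provider.book(offer["offer_id"])
        return {"status": "declined", "reason": "fare above limit"}

class RideAgent:
    def __init__(self):
        self._offers = {}

    def quote(self, pickup: str, drop: str) -> dict:
        # A real agent would price dynamically; this quote is fixed.
        offer = {"offer_id": 1, "fare": 180, "pickup": pickup, "drop": drop}
        self._offers[1] = offer
        return offer

    def book(self, offer_id: int) -> dict:
        return {"status": "booked", **self._offers[offer_id]}

result = UserAgent(max_fare=250).request_ride(RideAgent(), "Nehru Place", "Airport")
print(result["status"])  # prints "booked": the 180 quote is within the 250 limit
```

The design point is that the interface contract moves from screens and icons to message schemas that agents can negotiate over.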


When participants asked whether traditional institutions would become obsolete, Sharma responded that the core functions of banks and schools will persist. Banking will continue to provide regulated deposit safety and credit creation, even if the customer-facing layer shifts to AI-enhanced chat-bots and mobile interfaces [138-151]. Schools, he argued, remain vital for social interaction, networking and the broader educational experience, though their pedagogical delivery will be augmented by AI tools [152-160][162-166].


Addressing inequality, Sharma claimed that AI will act like a "super-car" anyone can ride, thereby reducing the gap between rich and poor. He warned that granting AI full autonomous control over payment accounts would be dangerous and must be avoided [427-433]; he likened AI risk to driving a car: you must "check left and right" before proceeding.


To ensure equitable diffusion, Sharma proposed treating AI as a public good delivered through low-cost terminals such as the compact "AI sound box," which provides sophisticated capabilities without heavy local compute [353-368][214-216]. He emphasized focusing on small and micro merchants, giving them AI advice on inventory, pricing and financing [389-401]. Regulators, he noted, are offering sandbox programmes that allow data sharing for model training while safeguarding privacy.
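The kind of inventory advice a low-cost merchant terminal could surface might be as simple as a reorder-point rule over recent sales. This is a hypothetical sketch: the `reorder_advice` function, its safety buffer, and the sample numbers are invented for illustration, not a feature described in the session.

```python
# Illustrative reorder-point rule: compare current stock against expected
# demand over the supplier's lead time, with a 20% safety buffer.

def reorder_advice(daily_sales: list[int], stock: int, lead_time_days: int) -> str:
    avg = sum(daily_sales) / len(daily_sales)
    reorder_point = avg * lead_time_days * 1.2  # 20% safety buffer
    if stock <= reorder_point:
        qty = round(avg * 7)  # restock roughly a week of demand
        return f"Reorder about {qty} units now; stock covers under {lead_time_days} days."
    return "Stock is sufficient for now."

# A shop selling ~10 units a day with 20 in stock and a 2-day lead time.
print(reorder_advice(daily_sales=[8, 10, 12, 9, 11], stock=20, lead_time_days=2))
```

An AI layer would deliver advice like this conversationally, in the merchant's language, alongside pricing and financing suggestions.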


In the audience Q&A, a question about the brokerage industry's future elicited Sharma's answer that agent-first interfaces will become the norm, making brokerage services AI-native rather than merely AI-enhanced. He also encouraged students from tier-3 and tier-4 towns to adopt an "AI-first workflow", using AI tools for every task before writing code or performing manual work, to become super-productive and stay relevant [261-290][473-511][505-515].


In conclusion, the speakers converged on the view that AI can be a catalyst for massive productivity gains, inclusive financial and social services, and a transition to a higher-value economy, provided that India builds indigenous foundation models, adopts agent-first interfaces, and distributes low-cost AI tools to small enterprises and underserved students. The discussion underscored the need for policy frameworks that support sustainable business models, protect against bias and over-centralisation, and preserve the core roles of legacy institutions while allowing AI-driven augmentation [34-38][64-66][138-151][339-350][389-401].


Session transcript: complete transcript of the session
Vijay Shekhar Sharma

that can take you farther and farther ahead in the world economy. So I see it not as a job reduction. I see it as an opportunity for India to create a global AI-dominant nation. And that does not mean that it is easy to do. But at the same point of time, most of us have to take a will, and we should be taking the commitment towards leveraging AI first. For example, if you have a smartphone now, even a small shopkeeper is not expected to be without a smartphone. They get payments through Paytm and run a lot of their life on it. We assume that there is a smartphone, which is fair. Now, if you believe that you can build a business model, or entrepreneurs will build a business model where, as a customer, you use AI-first products, the productivity will be dramatically higher.

So a person who was running a shop can run multiple shops. So GDP growth will automatically come in a different size, because the productivity per person will be higher. And that is the lacking thing of India. The problem will be: will India be that bigger market or not? And I want to tell you, this is a very good, lucky moment that we are all sitting in. It is tough to grow from 20 billion to 25 billion; anyway, that is 50% growth, and even 25% growth is very tough. But luckily India is right now living in a 2.5, 3, 3.5 trillion economy, and I am comfortably going to say that 2 trillion dollars can be compoundingly added in the next 7 to 10 years. So if you are sitting in this economy in 2026 and you believe that the next 2 trillion is going to come in the next 10 years, I don't have to tell you that out of 2 trillion dollars, many of you can get many millions or billions of dollars of business benefit for yourself. So that is why India is a bull-case scenario, and AI on top of it is the absolute bull scenario. I don't think there is any challenge to that. Someone asked me this question in another audience, so I said: in the Ramcharitmanas it is said that life may go, but the word must not; similarly, the job may go, but AI must not. Well, the job will not actually go, because people will have a method to discover themselves and do a more productive job.

So, in my opinion, there is the traditional kind of work (I am using "job" metaphorically here) versus the AI kind of work. And so, that will happen. I completely agree; let me express it in my own way. To build any of our systems, to do any job inside a company or anywhere else, there is a bottleneck somewhere. And increasingly, you will figure out that that particular bottleneck gets eliminated. What this basically means is that the road got wider. The problem has shifted somewhere else. Problems do not disappear; they just move to some other place.

The roadblocks on the Outer Ring Road tell us the same thing: the jam that earlier used to form at Nehru Place now forms near the airport.

Harinder Takhar

Very good. I understood. Okay. So there was one more related question: the opportunity is large. And when we look at the exhibition hall, someone is making a chip, someone is making a data center, someone is making a foundation model. So should we all make a foundation model in India? Should we make our own chips? Should we make our own applications?

Vijay Shekhar Sharma

Very good. So first I want to say one thing. To make a foundation model... we'll speak English and Hindi, okay? And you should use AI to live-translate. No, it's okay. We'll definitely, I understand, and we will do it. So in my opinion, India has to build a foundation model. This is a no-compromise statement. Not because we can make a better financial foundation model or not, but because we as a country have to move on from the services culture. There is nothing wrong with the IT services business. There is nothing wrong with BPO and business services, but it is an obligation on us that we should move up in the value chain. It's like growing up in life.

You do not move up in the value chain; you rather continuously make something for someone. Most founders or capable technology people in Silicon Valley will have a significant mix of Indian people. So as a people, we are able and capable. So can't our country have enough resources allocated to those individual capable people so that they can make a foundation model? We have to do it and we have to prove it. So I first of all applaud Sarvam that they have launched it, and they have launched a perfectly, I would say, awesome foundation model from India. And I want many of us to do it, so that this race does not look like there was none or only one.

There should be tens of foundation models to prove to the world that Indians can do it and Indians are doing it in India. That is why we need a foundation model. And now the question of whether it is on an ego basis or a use-case basis. The advantage of an Indian financial or made-in-India sovereign model will be in the amount of nuance and biases involved. So what is a foundation model? It is like the aggregated knowledge of what you give it as the feed. Now obviously all of us have our own perfect understanding of whether something is correct or not. A lot of us would know this: for a small kid, making them eat banana or curd at night is considered as if the next day they may have a cold or some other thing. But when you go on the internet, the internet is divided half and half; it doesn't believe it and it believes it, and then you're totally confused. But when you go to Ayurveda, then it says something else, and obviously it follows the vata, pitta and kapha kind of philosophy. So the answer is not as straightforward as we believe; it has an answer in one way, versus an answer in a different way in Western medicine, if you will. Now I'm not saying Western medicine is right or wrong. I'm saying, in my opinion, I need the culture or the knowledge that we have inherited extended towards the next generation. And I want that to happen through the intelligence that we will query, which means that it can only be done by somebody who is making for it.

If we don't make it for ourselves, all our compounded historical knowledge will be lacking in the next generation. So instead of adding on top of it, we will not be able to take it further. So it is a case for India to build not just a perfect foundation model, but also retrained models: you remove the biases that it has, you trim it to the ability that you want, and you make it for the purpose that you want. And that, I believe, is India's opportunity, even bigger beyond just the foundation model.

Harinder Takhar

So I have a two-part follow-up. All right. So I believe that this is probably just a way to discourage people. And it may be more than that. But we often hear, mostly on Twitter or from these large foundation model companies, that if you don't have 10,000 crores, why are you even in this place? What is your viewpoint on that?

Vijay Shekhar Sharma

I mean, literally, at Paytm, the two of us put more than 10,000, in fact 25,000 crore, on the table for making this humble QR which is everywhere now. So there is an ambition and there is a commitment. So the question of whether you put a billion or two or four or five, it's not that question. I think the question should be whether there will be enough of a business model for it. Will we have the skill to market it? And remember, the world right now is not just made in America. There is a European model and there is a China set of models. We need our models. And the proving point now that there is, I would say, enough knowledge of model creation is that it is not literally about a billion dollars or two or ten.

It is also about the kind of smartness that you can apply to model creation: RL and so on and so forth. And there is a lot of chemistry and math, or let me put it in a rather better way: there are a lot of physics and gravity-handling capabilities that have been discovered now, so that you can build a model at a much lower cost. So, I mean, another question that a couple of us were asked when I was roaming around: why don't you build an LLM? So actually, now I should ask you: what kind of LLM are we, or you, building when we talk about a model made in India?

Audience

Yeah, this was going to be my question to him. But it's always fun to answer. I think that models are just not what we know them today to be: question-and-answer knowledge-sharing sessions. There are models that have capabilities for reasoning, for problem solving, for building agents. So you want to give them agency, the ability to act, and so on. To answer your question of whether we should build or not, I believe that there is much more than a question-and-answer chatbot model. And we believe that we should build models that actually solve problems, that actually have the committed, confident agency to take actions on behalf of others.

Vijay Shekhar Sharma

Okay, so you're saying, and this is for everybody to know, we are making models, but they are models for a vertical problem, not for a horizontal scope. So instead of talking about a 200-billion-token model, we would be making a four or five billion token model, or a 20-billion-token model, just in case. We should open the questions to the audience also, because some of you may have them and we will not have time after that sort of line item. So while we are talking, you can just raise your hand and I'll allow it. Okay, now you literally are asking: good vertical problems to solve.

Audience

Oh, it's easy. I mean, imagine the classic triangle of Maslow's hierarchy. The first is the financial foundation that you need to solve for financial services, so there are risk and fraud-control models that some of us are building, and we built them. And then you go on top of it, like the food chain, which is agri and the processing of it. If you want to produce high-quality food outcomes at a higher yield, there is a tremendous amount of data that gets generated, a huge amount of data, and the farmers need access to it, the nuance of it. We just heard how the Prime Minister was able to ask a question: maybe a cow is not able to say that I am ill; what if you have the ability to discover that the cow is ill?

Maybe a tree or a plant may not say that I need this mineral or supply, and maybe you can read it. So there is a tremendous amount of vertical opportunity; I just gave you the examples of finance, agri, husbandry. You can go towards industrial; you can go towards the problem of pollution, which we are sitting in, in this country, this city. So there are many vertical solutions where we can build for a specific use case.

Vijay Shekhar Sharma

So let me just help you think of it another way. Mr. Benz invented an engine, and that engine then got percolated and made by many. So LLMs are like an engine, and they are literally called intelligence engines, just in case. Now, if you had access to an engine, could you make an engine? You should, so that you can make your own vehicle use case. You can make a small car, you can make a big car, you can make a truck, you can make a trailer; there are so many use cases of transport. Think of it with that eyesight, and then you will say: well, do I need to make a small car in India, because I heard that there is a small car being made elsewhere? No, you will need to make it, I can confirm it. And so, as in the automobile industry, making an engine is an art; I would say making the engine equivalent of an LLM is an art. Yes, but many of us will have it, and many of us are making it for those use cases.

Audience

Yeah, finance is one of the few things which is part vertical, but it is also one of the few things that is horizontal: every industry requires it and has a financial department. Food, finance: both.

Vijay Shekhar Sharma

Yes, so all of humanity needs these foundations. So you are into both vertical and horizontal at one go.

Audience

Yeah, that’s right.

Vijay Shekhar Sharma

So, perfect. Now, a question for you. The classic financial use case, considering we just heard it: what kind of use cases do you believe the LLMs will serve or solve for the problems that you see in the financial industry, and what do you believe could fundamentally be solved for the wider financial industry that we might or might not have solved yet?

Audience

I think that the best favor or the best value we can add is to remove biases in decision making that we already see in our financial system, which are actually a complete antithesis to the whole inclusion aim that we all have.

Vijay Shekhar Sharma

Beautiful. Give me an example of it. So, a very good starting point would be detecting whether a particular transaction should go through or not. There is a lot of bias today in the rules that we set, in the checks and balances that we set in allowing a transaction through. And you can invariably say that if there is a loan officer deciding on whether you should be getting a loan, there is a lot of bias that creeps into it. We are not able to measure that today, but when you ask a machine to make that decision, you actually remove those biases. So the person who may not present themselves very nicely is unlikely to get a loan, but the machine doesn't care about how you present yourself.

So you are saying that when the financial industry or system makes a decision, there may be known and unknown biases, and the unknown biases can be removed even if you want to keep the known biases. The good thing is that the machine actually now helps us identify what biases used to exist, which we were very used to. And speaking of the financial industry and going beyond, one of the things that I want all of us to know: classic financial inclusion is not just about payments inclusion or bank account inclusion, which I would say India has perfectly solved. It will continue to grow towards the next requirement. And most of us need access to credit, access to insurance, access to classic wealth solutions.

Right now, wealth management in India exists only when you have a crore or more, somewhere around that amount. But a normal auto-rickshaw driver who has 20,000, 50,000, 2 lakh or 5 lakh in savings also deserves to have a financial wealth model. And that person, poor fellow, by hearing or hearsay, may invest or risk the capital at large. So access to financial services can become further and dramatically scalable once you add the power of AI to it. For example, take the auto-rickshaw driver I just contextualized for you. Imagine that person has 2 or 3 lakh rupees. You can suggest: based on your risk and time horizon, you should put it in an FD, you should put it in gold or a sovereign gold fund, or you could put it in, let's say, some index fund.

Because you're talking about 15-year forward money that is for your house, family, daughter's or kids' education or something. Now, those things cannot be told by a commoner around them. And then you can make it in a language, and it can answer questions the way ChatGPT today answers questions: in the language they speak, they get the answer. So in my opinion, AI will enable financial inclusion to the next level, because it is not just about access to the smartphone, but the question-and-answer that you get from the smartphone can become far more native and continuously possible. So let's say this guy is busy the whole day; he can start in the evening when he is waiting for some customers, start talking to it, and can take a decision.

Now, at least he has a second opinion and can act on it. Similarly, healthcare: a huge amount of capability, because many of us literally just say, but you need to check whether there are some more symptoms or not and what the blood test report says. So the dramatic amount of inclusion in education, finance, and healthcare will have such a catapulting impact on society.

Audience

Yes. So do you think the whole banking system will become redundant? Because today if I have to make a transaction, I'll use Paytm. If I have to invest, I'll use Paytm or a Zerodha. So why do the banks exist? Because the whole human interface is becoming very difficult; they don't solve our problems. And the second thing is education: is school going to become redundant? Because what they're learning is going to be invalid; it will not have a shelf life at all.

Vijay Shekhar Sharma

Okay. I have an opinion on schools. I will, I will, I will. So it's a very easy thing. If people start to make food at home, will the restaurant become redundant? Yeah, the demand may be of a different kind and the need may be of a different kind. So first of all, none of these things will become redundant, because they do not offer literally the verbatim statement that we just made in this sentence; they offer much more beyond that. For example, banking inherently is about extending credit. So the ability to have a well-regulated place where you can deposit money, and that deposited money is taken care of, so that when you need it, it is available to you, and when the economy needs it, it is available as credit, which is the business model of a bank, is an ability that will never go away.

I mean, it is an obligation for them to become even more able and capable in both parts: the security and safety of your deposit, and the ability to disburse credit to the needy. Now, that is the part called a bank. If we treat the bank as a bank branch, and your question is that you may not walk into the branch, fair. That may be the case, because you will not need a branch to make banking reach places. It can be perfectly extended using a smartphone, and now in a chatbot instead of just an app; that is the beauty of this inclusion I am talking about. The core machine of a financial institution is needed, and even more so, because you will have an even bigger load coming in.

The bank, the way it is served through, let's say, a branch, could extend itself: when the ATM happened, through an ATM; when apps happened, through an app. You can have a third-party app; that's not a problem, but the core activity of the bank still belongs to the bank. And similarly, as you were saying for the schools: schools are not a place only for going into the classroom. How many of us did not attend the class when the teacher was in the class? And I'm talking about college days; I'm one of them. Now, that does not mean the college was a bad experience. Rather, college is a social experience of meeting like-minded people, understanding, and self-discovering beyond the syllabus in the class.

And I definitely agree with you that maybe the single method of teaching, a blackboard or whiteboard where you're teaching and people are writing and just putting it back in the exam, is changing. Then you have these different MBA institutes; Harvard is popular for case-based education. So those things can be extended now into the classroom of a common or mass-level school. That is what the power of AI is. I'm not yet a believer that you will not need it. I mean, homeschooling is good, work from home is good, but going to an office has its own use case, just as we saw in the COVID days when everybody was working from home. You can be selfish and say, I want to work from home because I have ten more things to do; that is always taken care of, and then sort it out. But ultimately there is a value in going to a place, and that, in my opinion, will perpetually remain, whether it is a school, a bank, or any other such institution. Your answer on education? No, I am not going to say that. The core philosophy that the bank is in a branch definitely changes; the philosophy that there will be, let's say, a bank manager or somebody to approve a loan will evolve, but the core work of the bank will remain. So DeFi, etc., are very different cultures, or I would say they are all technology; nothing wrong with them. Again, the core philosophy is that you store money at a place and you can demand it back: banking.

Harinder Takhar

I completely agree on the school front. School is not just education, also the social experience, and we’ve all benefited from that. So nothing more to add. But I do find an interesting theme across finance, healthcare, education, that it allows you to have more access and more personalized access. Like your doctor is your doctor. It’s not the doctor that says the paracetamol example. Same with your teacher. And I think that makes a very radical impact.

Vijay Shekhar Sharma

It's amazing. My mother had a heart disease and then a stroke. And now I leverage the power of AI to keep track, including a Fitbit she has that generates a feed that goes to my agent. And it triggers a notification to me to check whether all is well. Now, that possibility, the kind of nuanced care that I'm talking about, couldn't have been possible without doctor support. So I'll give you another example, and this is something you should check out if you have a situation like this. There was some medicine the doctor gave her for certain rhythm control and suppressing beats, and so on and so forth, as the case was.

But some of them ended up taking away her sense of taste, making her not feel like eating, and she was becoming weaker. So I talked to ChatGPT: I uploaded the prescription and said, she's not eating, is there a problem here? ChatGPT told me that this combination at this hour, which is pre-lunch, creates a situation where she may not have an appetite. If you move this one to this time, and move that one earlier, there will be enough of a window for her to eat. And in any case, looking at the heartbeat data, the beat suppression scheduled at that time was not required.

So it suggested I do that, and then obviously put up a disclaimer to talk to the doctor and so on. Basically, the same medicines on a different time schedule could potentially solve it. I sent this to the doctor, and the doctor replied with three words: yes, you can. I had shared it very frankly, because doctors, like any person skilled in a domain, may not welcome what can feel like a presumptuous input. So I said, this is my mother's situation, and I understand it is tough to go through this nuance.

I tried asking this, it is suggesting this, and here is a brief PDF if you want to read it; the net output is that instead of taking the medicine before lunch, she can take it after lunch. He said, okay, yes, you can. Those are the three words of life. And after that she got better in terms of eating, because she was already taking the medicine. Now, what we are saying is that medicine, education, finance, and agriculture will catapult into a very different stage and age. Many of the new Gen Z generation, who grew up with smartphones, must be asking: really? How were you living life before?

So I can promise you one thing. We are in 2025 now, and I am literally putting a deadline of 2027, two years away; some of us are seeing it already. By the time 2027 passes, every new generation will think: are you saying you used to search, click on every page, scroll down to read, decipher with your own cognitive load what was happening, and then decide whether it was correct, without an alternate opinion? The 2030s will be so shockingly different that the 2010s and 2020s will look archaic. That I can definitely promise.
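The monitoring setup Sharma describes earlier, a wearable feed routed to a personal agent that notifies a caregiver, can be sketched minimally. The threshold, feed format, and function names below are assumptions for illustration, not any real Fitbit or agent API:

```python
# Toy sketch: a wearable feed is scanned by a personal "agent" that
# raises a notification only when a reading leaves a safe range.

SAFE_HEART_RATE = (50, 110)  # assumed resting range in bpm

def agent_check(feed):
    """Scan (timestamp, bpm) readings and return caregiver alerts."""
    low, high = SAFE_HEART_RATE
    alerts = []
    for ts, bpm in feed:
        if not (low <= bpm <= high):
            alerts.append(f"{ts}: heart rate {bpm} bpm, you should check in")
    return alerts

feed = [("08:00", 72), ("12:30", 118), ("18:45", 68)]
for alert in agent_check(feed):
    print(alert)  # only the 12:30 reading triggers a notification
```

The point of the pattern is that the human is interrupted only on exceptions, which is the "if all is well, you should check it" behaviour described in the anecdote.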

Audience

A question for you. Your mother's example made me believe that these days ChatGPT is probably more humane than a doctor himself, because a doctor might not reach this kind of conclusion. My question: since you are giving such wonderful use cases, it's about the curd and banana example. I would like you to elaborate on that. Would you rather want an India-specific model for this purpose, or are you talking about Indian-context data? Or is it that only Indian children are affected by that banana-and-curd combination?

Vijay Shekhar Sharma

Yeah, very good, I like it. A classic future mother's question, maybe. Perfect. Number one: a model gets trained on whatever is digitized and available on the internet, because it literally browsed and scraped the internet and absorbed it. It then gives weight to common knowledge. If more people said that such-and-such happened, it believes that, because nobody answered every question for it directly; it believes whatever it saw said more often. It is a surprising method of adding bias to a model. Take, for example, a contentious statement of history: if more people tell it to the model, the model will start believing it. It is exactly how social media works, where viral news, even when wrong, can come to be believed as true, because that is how the brain thinks too: the more of something you see, the more you start believing it. That is the thesis the model operates on.

Now, do you fundamentally believe the internet is filled with Indian background, history, culture, and richness? The answer is probably not, because the Western, English-speaking world has produced a lot of content in English, while much less of our content has been generated or translated into English, repeated many times, and discussed. It is an inherent limitation that our own knowledge did not propagate onto the internet. Who decided that whatever is written on the internet is the ultimate truth, versus what is written in our books? That gap creates this gap in the model's knowledge.

Can an international model do it? Yes, if you can tell it: no, on this point, this is the truth. But the problem for international labs is: how do they know this is the truth? They can't, and a lot of people would need to say it before the model treats it as truth. So there is a limitation on the international labs making these models. Can it be done by an Indian lab? An Indian lab can respectfully say: learn what is written here first, and treat that as your definitive knowledge. And there it goes, the obligation to build in India.
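The frequency-weighting Sharma describes, where a model comes to "believe" whichever claim appears most often in its training data, can be illustrated with a toy sketch. The corpus, keys, and function names here are invented for illustration; real LLM training is of course far more involved than majority counting:

```python
from collections import Counter

def corpus_belief(corpus, question_key):
    """Toy 'model': answer with whichever claim appears most often
    in the training corpus for a given question."""
    claims = [answer for key, answer in corpus if key == question_key]
    counts = Counter(claims)
    answer, _ = counts.most_common(1)[0]
    return answer, counts

# Hypothetical corpus: the same question answered repeatedly in
# Western English sources and only once from an Indian source.
corpus = [
    ("staple_breakfast", "cereal"),
    ("staple_breakfast", "cereal"),
    ("staple_breakfast", "cereal"),
    ("staple_breakfast", "poha"),  # underrepresented local knowledge
]

answer, counts = corpus_belief(corpus, "staple_breakfast")
print(answer)  # the majority claim wins: "cereal"
```

Under this toy thesis, knowledge that was never digitized or translated into English simply loses the vote, which is the gap an Indian-trained model would aim to close.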

Harinder Takhar

And I'll add just one more thing. There is absolutely a very strong risk of bias from the inputs you give. But the model also has the advantage of knowing you, the person asking the question, and can reason through all the inputs and still give you a balanced answer.

Vijay Shekhar Sharma

So this is a very interesting thing I want to tell you. My model, because I have started giving it a lot of context, the way many of you have been doing with Claude and so on, has started to say: oh, you are in Delhi, this is what Indians do. It literally said that. I was discussing a very interesting product feature with it, about a wealth system for Paytm Money, and I asked it to give me the cultural nuances Indians have around managing money, which I could add to my feature. The kind of features it proposed, I was like, that's good; as a purely logical person I would never have thought of that. So yes, it can. You had a question? ChatGPT versus... actually, meet our agent sometime later. We will do this: our agent will talk to your agent, and then they will decide whether we should talk to each other or not. This is coming, very seriously. Now take Uber. If you order an Uber, your agent will call Uber first. It is expected that when I say, I have to go now and I am leaving, the agent will pop up and ask: should I call an Uber?

Yes, call it. Should I call the same kind as last time? If I can't get that one, I'll book a quicker one. All those things we experience today will be so dramatically different. Guys, it is real: you will be represented by your agent. That is the race all these AI companies are running; that is why the agent came, and why the agent is hyped right now, because your agent could be based on OpenAI or Gemini or Claude or whomever. That is why they are all talking about it, and your agent will be working with you. There is a question right at the back; over to you.

Audience

So my question is: do you believe the future of the stock brokerage industry will be AI-native from the ground up, where an AI agent is the primary interface, or do you think AI will just be a feature in the stock brokerage platform?

Vijay Shekhar Sharma

And I'm not saying that it is the perfect way to do it, but I'm saying that we are all committed to agent-first interfaces. Then came a follow-up: Vijay, when an agent goes to the Uber app on my behalf, will it see ads? Will it have to log in with Google? Well, a nerd question. For everyone else, the point he is trying to make is: will every other part of the stack also change or not? That is the beauty. Agent will talk to agent. Uber will also become an agent, and instead of trying to authenticate my login and so on, my agent will say: I'm coming, and I'm an agent acting on behalf of this person.

Just like ambassadors used to say: I come from the king's court, and he has sent this message; receive his word. This is how it will be: I come from Harinder Takhar's court, and he wants an Uber. What kind of Uber does he want? One that can take him there fast. How will he pay: with a token, or later? No, take this money and call the Uber. It will happen like this.

And this is real. I am telling you; you are laughing, I am laughing. One day someone said: I will stand here, press a button, and a car will come. People laughed at him. Before that: I will stand here, make a call, and talk to someone far away. They laughed at that. Then: I will stand here, record a video, and my video will be seen in London. They laughed at that too. All these technologies that we now treat as native, as normal, once sounded like trick questions. The near, incremental future is going to be so dramatically different that to those making apps I have only one suggestion: build an app with a non-icon-based interface.
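The agent-to-agent exchange Sharma sketches, where an agent announces whom it represents instead of the human logging in, could look, in deliberately simplified form, something like this. The message fields, function names, and payment token are all hypothetical, not any real Uber or agent API:

```python
from dataclasses import dataclass

@dataclass
class AgentRequest:
    """Hypothetical delegated-intent message: the user's agent states
    who it acts for, what it wants, and how it will pay, replacing an
    interactive login and tap-through flow."""
    principal: str      # the human the agent represents
    intent: str         # e.g. "book_ride"
    constraints: dict   # preferences such as pickup speed
    payment_token: str  # pre-authorised payment credential

def service_agent(request: AgentRequest) -> str:
    """Toy service-side agent (standing in for an 'Uber agent'):
    it negotiates with the requesting agent, not with the human."""
    if request.intent != "book_ride":
        return "declined: unsupported intent"
    speed = request.constraints.get("speed", "normal")
    return f"ride booked for {request.principal} ({speed} pickup)"

req = AgentRequest(
    principal="Harinder",
    intent="book_ride",
    constraints={"speed": "fast"},
    payment_token="tok_demo_123",  # placeholder, not a real token
)
print(service_agent(req))
```

The design point is that authentication shifts from "log this human in" to "verify this agent's delegation and payment authority", which is why Sharma argues the whole app stack changes, not just the front end.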

Yes,

Audience

One question here. For students going through their degrees right now who are not in the data, tech, or AI space, but in functional domains like finance, accounting, HR, and so on: what do those poor people do? How should they think? Because the education system they are running through is old and outdated. What is the future for them?

Vijay Shekhar Sharma

So it's a very interesting question. A lot of people ask me whether it means we should study only certain things. Some days back we used to say we should teach programming to kids, the way IIT preparation used to start from class five; it's a joke, a stretch, but you can imagine what I'm saying: you reach class six, start IIT preparation. By that logic, newborns were practically handed a programming-language book at birth. That will end, right? Because you no longer need to program that way. So we are in a flux, as we speak, across 2024, 2025, 2026. In 2024 few people felt it, in 2025 people began to sense it, and in 2026 most of us will go through it.

Your question has come in 2026; this question also existed in 2024. And there it goes: we will remain in flux over whether this education is needed or not. Some will pursue it out of genuine intent and commitment to learn, and some out of obligation. We always assumed education would give us a job, and now the job is disappearing, so the discomfort is this: do you study out of vocation, interest, and passion, or for the outcome of a job? If it is for the outcome of a job: I am not skilled in, let's say, humanities or art, but I can now produce art that sounds logical.

One of my teammates was travelling around Kerala and wanted to figure out a good Ayurvedic treatment for some problem. He used Claude to generate questions, nuances, formulations, and so on, and then went to the Ayurveda practitioners in Kerala, who said: yes, this is very good; how did you think of this? You have come very well prepared. Now, the poor fellow has no Ayurveda training himself, yet he was talking to highly skilled Ayurveda practitioners as if he were skilled. So there goes the question: did you need an Ayurveda education to become that intelligent?

The answer is: not necessarily. You were able to use the tool of AI. So to whoever it is, whether a computer science student or a humanities student, I am not going to say don't do what you are doing. I'd rather say: definitely learn how to use AI. Just as people once said, learn computers, son, they will come in very handy, in the same way I want to say: leverage AI and make your work AI-first, because that is how you will stay relevant. And this is not only a question for those graduating right now, because some of the old ways will remain for some more time. Think about 2030 onward, when this is all prevalent.

Take painting. You must have seen the old Sholay poster that the studio commissioned; it was made by hand. Making posters used to be an art in Bombay, and the posters of Mehboob Studio and RK Studio are called legendary. Where are they now? Today a poster is made on the spot, and Ghibli-style art is generated just like that. So does it mean we don't need new artists? We will need ones who create with a computer, who say: I want this kind of nuance, and that will become an art form of its own. And if today's artist uses only, let's say, paint, that will become so unique and rare that the person will have a value of their own.

My God, handmade art. It is like the premium today on cold-pressed, hand-pressed oil, which at one time was simply the norm before industrialization. Oh my God, this is genuinely hand-pressed sugarcane juice. So that is how it will be.

Audience

Thank you, Vijay, for a fantastic and energetic talk. A little while ago you said that LLMs, foundation models, should be built. The thing is, both while building a foundation model and during inference, a lot of data is needed. And in the finance industry there is a lot of regulation, with more to come. In such a constrained data environment, how do you build a good LLM and run inference? This could apply to any industry that does not have a lot of data: how do we build a good LLM for that industry?

Vijay Shekhar Sharma

The short answer is that you work with the players in that industry, whether it is regulated or not; this applies everywhere. Find all the stakeholders and persuade them by showing what you could bring to the table. People understand the need for it; if not all, then some will. In my opinion, if you can articulate well what you are training the model for and why, the process works. And remember, regulation is not against progress; regulation exists so that the system does not slide and fall. So regulation is also for progress, and that is always the case. Regulators are the reason we have such a vibrant financial system in this country.

At the same time, what they protect us from is the system falling apart, so there is respect and value in what they do. If you talk to different industry stakeholders, you will get access to insights and data, and various regulators run sandbox programs and the like that allow it. And I'll add one more thing: double-click into that word, data. There are only some kinds of data, like my personal information, that you usually do not want used and the regulator does not want used. Outside of that, there is plenty. There is plenty.

Audience

Yes. As you mentioned AI and everything forthcoming, with Uber and these agent interfaces: do you think that as AI grows fast, there will be more inequality, or will it remove inequality? Will power become concentrated in a few hands?

Vijay Shekhar Sharma

Perfect, this is a very interesting question. Inequality: I will take it from the money perspective, because I am not taking up inequality from other perspectives. AI is, for the first time, a technology that is easier for everyone to use, because you talk to it in your native language, and you can speak to it even if you do not know how to type or write. So it is a very inclusive technology in itself, and its output is profoundly powerful. Suppose you are fighting a battle against someone rich in money or able in skill; with AI you can very comfortably rival that person. AI is the horse, the supercar, the rocket ship that you can ride easily and go ahead of anybody in a zero-sum business; and in a positive-sum business, you can expand yourself. So AI is not only inclusive; AI is actually the superpower that can reduce the gap between rich and poor and be more inclusive.

And that is what I'm trying to say: it is not a technology of the rich. The rich have a fear. You must have seen on social media the line that education is sold to parents, health to the old, and beauty to women; such business models get written up. I would add that what gets sold to the rich is safety. The rich want safety, security, exclusivity. Why? Because they don't want others to take away what they have, so they want to stay separated from the world. That's why the rich person's lounge is different from the normal person's lounge.

Audience

But sir, having said so, there is also a risk with AI which we are underestimating at this point. What do you think that potential risk could be?

Vijay Shekhar Sharma

I think there is risk in driving a car to get here, and in using a phone. So I will not simply say how much risk there is or isn't; the gauge is what matters. The generic line "it is risky" is not a complete statement. The real question is: is it a risk that every common person can manage? For example, kids are not allowed to cross the road alone, but as an adult you are expected to, and there is a risk; that is why you check left and right. AI, surprisingly, carries a low enough level of risk that even a layperson can manage it. That's what I am trying to say. Yes, ma'am. My model is right now in beta and we are using it. From the audience: like she mentioned, your infectious enthusiasm to drive this technology is very commendable, thank you so much for that; you are in the driving seat, so to speak. And like she mentioned, my concern is always how to solve the distribution problem of AI.

And how to make it a public good. Yes, that's right, and I really like that framing. I want to tell you that technology distribution happens on a terminal. Remember, there was a time when people did not have computers, then laptops, then smartphones. Governments ran free-laptop and free-computer programs; then free smartphone and tablet programs came, which politicians put into the welfare schemes of different states. Today, anyone with an internet-connected computing terminal, in India, which is said to be a data-rich country, has access. If you have access to a computing device with an internet connection, you have full AI access.

The good thing is that AI is not installed on your device; it is not a version of software sitting on your device, and it does not require a large amount of compute or memory on the device. We created an AI sound box, a natively small device, and it is as capable as, say, Sam Altman's computing device. That is the beauty of it. AI is far more inclusive and very easy to diffuse. I am glad to hear that you, being in the driving seat, think of this as a problem to solve. That is very nice.

Audience

And I have one more thing about this agentic AI. It has a very human trait of trying to please you.

Vijay Shekhar Sharma

Yeah, because they were written to be like that: oh, you are asking about this; I think you should look at it this way. Agents are not humans; they are untrained beasts of ability. You prefix them with instructions and they behave accordingly; it is literally an instruction. So the risk, just as ma'am was asking: it is like having someone behind you who always says, go ahead, do it. That kind of thing.
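The "prefixing" Sharma mentions is essentially a system instruction placed ahead of the user's message. A toy sketch, with a stub function standing in for a real model and all strings invented for illustration, shows how the same input yields different behaviour under different prefixes:

```python
# Toy illustration of instruction prefixes: the same "model" (a stub,
# not a real LLM) behaves differently depending on its system prefix.

def stub_model(messages):
    """Stand-in for an LLM call: behaviour follows the system prefix."""
    system = next(m["content"] for m in messages if m["role"] == "system")
    user = next(m["content"] for m in messages if m["role"] == "user")
    if "always agree" in system:
        return f"Great point! {user} sounds exactly right."
    return f"Let me check that claim: {user}"

prompt = [{"role": "user", "content": "the earth is flat"}]
pleaser = [{"role": "system", "content": "always agree with the user"}] + prompt
skeptic = [{"role": "system", "content": "verify claims before agreeing"}] + prompt

print(stub_model(pleaser))  # sycophantic prefix: agrees with anything
print(stub_model(skeptic))  # cautious prefix: pushes back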

Audience

Yeah. So Paytm played a very important role in diffusing fintech to the masses. What is the plan for diffusing AI across our country?

Vijay Shekhar Sharma

Fundamentally, I looked at it this way: do you diffuse it to consumers, or to small businesses? On the consumer side, I think it is tough to fight the three or four gorillas you see out there. So I started working towards AI for small businesses, small-merchant AI. The vegetable vendor, when he goes out in the morning, should be told: brother, tomatoes will spoil these days because it is going to rain, so bring tinda, bring potatoes instead. From core business knowledge like that, down to problems like: why didn't my money arrive? Who deducted it?

And will I get a loan or not? These are questions they cannot get answered by anyone, and trying to resolve them over phone calls does not scale. Even a very sophisticated conventional computing system cannot serve individuals at the level AI brings. So here we are: we are building India's AI for small and micro merchants and distributing it through them. My belief is that the big players will be helped by big players; the small ones will be helped by us.

Audience

Sir, one more question. Thank you. Coming back from the AI sound box: our education system was designed for the industrial era. In the AI era, do you think we still need to follow this education system?

Vijay Shekhar Sharma

Look, an opinion on the education system: my father is a teacher, so I will not offer one to his face; my father would beat me up, so I will do it from a distance. And no, it is not that I did not study in class; I was a topper, by the way, just in case. That is when I got beaten. I think the question of what a good education system should be will only evolve into an answer over the next five years and beyond. We are literally at the beginning. In other words, you have just landed at the airport and are asking, do you have Indian food here? Go into the city, stay for two days, then ask where the Indian food is.

So it is a problem to be solved, and it will evolve much later. Yes, your question. Oh no, the lady at the back, sorry. Going by the visual identity: are you a doctor? It looks like it.

Audience

I have a question. What is the one thing you would never allow AI to do in the payments domain? Specifically in payments.

Vijay Shekhar Sharma

Oh, easy. You would not want it to have full control of your bank account and make payments on its own, because if it does something stupid, you are the one who allowed it. It is like giving a blanket standing instruction: do whatever you think is right, we will not question it. We will not do that. Full control of your bank account should never be given. That is the thing.

Audience

But if we go a little deeper into the technical side, when we talk about ISO 20022, or RTR payments, and so on. I really liked that you mentioned nerds; it was this technology that made payments take off, and we, the UPI folks, made it happen. You are a very seasoned technology person, sir. Actually, I have a job in Canada, and you won't believe it: RTR is coming there, and it is going to be front and centre. I feel very proud there when I say I am from India and we use Paytm. So kudos, thank you so much, sir. Namaste. So my question is this: what is the minimum...

Vijay Shekhar Sharma

I'm serious, man, I love you guys. The problem with Gen Z is that you have to think a little: oh my God, I should just ask this thing. That is the risk: if one day it tells you to go home and pick a fight, will you drop your friend on its say-so and believe the whole thing? Look, before us, people said about our generation: they're on the computer, on the screen, they don't go out to play, they don't read the newspaper; go to the beach, go outside. We were told that. Now we'll tell you: what are you doing, outsourcing all your questions like this? Use your brain.

But your t-shirt will say: I don't use my brain, I use tokens.

Harinder Takhar

Sir, one more question.

Vijay Shekhar Sharma

I will take them quickly: I'll combine four questions into one and answer it. Okay, okay.

Audience

Sir, what is the minimum effective strategy a tier-3 or tier-4 student can follow so that he can do very well in AI?

Vijay Shekhar Sharma

Tier 3 or tier 4 students. What does "tier" mean, class 3? No, sir: city. Yes, city, like a rural area. Okay, I am also tier 3, from Aligarh. So, what can they do to do very well in AI and its opportunities; I got it. What is your question? Go ahead. He is asking: when farms become greener, the air becomes cleaner, and productivity becomes higher, what will be left for the labour market? What will we be doing?

Okay, I got it.

Audience

Do you see AI as a leveler in the fintech segment, and how will you compete with your peers if they are also adopting AI and AGI?

Vijay Shekhar Sharma

No problem, I get it. Okay, any other questions? I'll answer these together. Go ahead. If you can speak up, I will look your way. Sure, go ahead.

Audience

I would like to ask one thing, falling back on your point that agents will do the talking for us humans. I have seen...

Vijay Shekhar Sharma

How will I talk? Okay, go ahead. No, no. That’s a different question entirely.

Audience

My question is that the agents have started forming their own websites. If you search…

Vijay Shekhar Sharma

Yeah. So, what is the question that you have?

Audience

My question is: how can such an economy succeed? Can we integrate agents into the human economy we currently have?

Vijay Shekhar Sharma

Okay, perfect. So the inherent question from the tier-3 student, which is really everyone's question, is whether this will be inclusive or not. I have a very simple answer. Consider this: to use the internet or a computer, you had to get a QWERTY keyboard and then learn programming. With AI, just try to get all your work done with AI, and leverage it as an extension of whatever your education allows. If you study engineering, or if you study art, then ask it: tell me the comparison between Shakespeare's English and what was going on in India at that time, and how people were writing in Hindi.

Ask questions and enhance your curiosity as a student. You could be in a tier-3 city or a tier-1 city, but curiosity, and the ability to fulfil it using AI, will give you a superpower nobody else has. So enhance your curiosity, because you have access to AI. No more questions, please. And your question was on the labour market. That one is simple: I don't think less work is left for us, and I am not treating only physical labour as labour. AI's ability is that whatever is digital, it can perform superbly. So ask yourself: are you mechanically doing what amounts to keyboard typing, or are you also thinking and creating? You become more productive, and the work market becomes richer and more fulfilling for you; your job becomes fulfilling, and businesses can expand into places where they otherwise could not have. Thank you, thank you, and please exit from the left. Thank you.


Related Resources: knowledge base sources related to the discussion topics (21)
Factual Notes: claims verified against the Diplo knowledge base (4)

Confirmed (high)

“Sharma replied that the immediate priority is to develop indigenous foundation models; hardware can follow later”

Sharma emphasized focusing on foundation models before hardware, as noted in the panel discussion on AI and semiconductor strategy [S5].

Confirmed (high)

“Building large‑language models in English and Hindi is a national imperative for moving up the value chain and shedding the “services‑only” identity”

Sharma’s advocacy for Indian LLMs in English and Hindi is confirmed by his strong push for domestic foundation models and the cited Sarvam effort [S4].

Confirmed (high)

“He cited the recent launch of Sarvam’s foundation model as proof that Indian teams can deliver world‑class systems”

The launch of Sarvam’s world-class model by a small Indian team is documented in the discussion notes and highlighted by Vivek Raghavan’s remarks [S4] and [S88].

Additional Context (medium)

“Indigenous models can embed Indian cultural and historical knowledge and avoid the Western‑centric bias of most international models”

The concern about using Indian data to ensure cultural relevance and avoid Western bias was raised by the audience, providing context for this claim [S89].

External Sources (90)
S1
WS #280 the DNS Trust Horizon Safeguarding Digital Identity — – **Audience** – Individual from Senegal named Yuv (role/title not specified)
S2
Building the Workforce_ AI for Viksit Bharat 2047 — -Audience- Role/Title: Professor Charu from Indian Institute of Public Administration (one identified audience member), …
S3
Nri Collaborative Session Navigating Global Cyber Threats Via Local Practices — – **Audience** – Dr. Nazar (specific role/title not clearly mentioned)
S4
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Vijay Shekar Sharma Paytm — Speaker 1 serves as the event host or moderator, formally introducing Mr. Vijay Shekhar Sharma to the audience. This rep…
S6
From Innovation to Impact_ Bringing AI to the Public — – Vijay Shekhar Sharma- Audience – Vijay Shekhar Sharma- Harinder Takhar
S7
From Innovation to Impact_ Bringing AI to the Public — – Vijay Shekhar Sharma- Harinder Takhar
S8
From Innovation to Impact_ Bringing AI to the Public — Speakers:Vijay Shekhar Sharma, Harinder Takhar
S9
How AI Drives Innovation and Economic Growth — “So, you know, for all countries, but especially for emerging markets and developing economies, AI can be a game changer…
S10
Secure Finance Risk-Based AI Policy for the Banking Sector — “Yet, inclusion cannot be assumed”[73]. “If harnessed responsibly, AI can convert this expanding digital footprint into …
S11
Chennai team wins ₹50 lakh at Agentforce Hackathon for AI hotel solution — AI took centre stage at the Agentforce Hackathon 2025 during TrailblazerDX in Bengaluru, where a Chennai-based team from…
S12
https://app.faicon.ai/ai-impact-summit-2026/from-innovation-to-impact_-bringing-ai-to-the-public — Ab jaisa painting hoti thi, aapne dekha hoga shole ka jo studio ka jo poster bana karta tha, wo haat se banaye tha. And …
S13
How the Global South Is Accelerating AI Adoption_ Finance Sector Insights — The panelists concluded with consensus around AI’s transformative potential for financial inclusion. Suvendu emphasized …
S14
Inclusive AI Starts with People Not Just Algorithms — I applaud the vision because as Radha rightly said the idea of getting it at the beginning is the right idea and I’m a b…
S15
Session — Jovan Kurbalija: Thank you. Happy New Year. Good. Let’s go back to the other developments. Therefore, what you will…
S16
https://app.faicon.ai/ai-impact-summit-2026/the-foundation-of-ai-democratizing-compute-data-infrastructure — Given the volume of funds available, I would focus a lot more on capability development of people to be able, their abil…
S17
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Panel Discussion Moderator Amitabh Kant NITI — This comment acknowledges a critical strategic challenge that other speakers glossed over – the need for significant upf…
S18
AI for Social Good Using Technology to Create Real-World Impact — This argument emphasizes that AI’s greatest value comes from its ability to create widespread transformation across key …
S19
Building Inclusive Societies with AI — And in fact, the platform that the committee recommended in some sense was to also help to Uberize, to create demand, to…
S20
https://dig.watch/event/india-ai-impact-summit-2026/shaping-ais-story-trust-responsibility-real-world-outcomes — Look, I really love that question because at the end of the day, as a CTO in a bank, I am accountable. I am responsible …
S21
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Ananya Birla Birla AI Labs — “India’s journey from a $4 trillion economy to a $40 trillion economy in the arc that stretches from where we are today …
S22
IndoGerman AI Collaboration Driving Economic Development and Soc — Thank you so much, Anandi. Thank you, Anandi. Quite pervasive, it is being applied to almost all the sectors. And where …
S23
Leading in the Digital Era: How can the Public Sector prepare for the AI age? — Shri Sushil Pal: Thank you, Professor Jalasi, and thank you, UNESCO, for inviting me here. I must commend UNESCO on the r…
S24
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Kiran Mazumdar-Shaw — She calls for India to create indigenous foundation models for proteins, RNA, cellular circuits and systems biology, bac…
S25
Press Conference: Closing the AI Access Gap — Moreover, the speakers argue that AI can drive productivity, creativity, and overall economic growth. It has the capacit…
S26
How AI agents are quietly rebuilding the foundations of the global economy  — AI agents have rapidly moved from niche research concepts to one of the most discussed technology topics of 2025. Search…
S27
How the Global South Is Accelerating AI Adoption_ Finance Sector Insights — Mathur envisions AI dramatically reducing the cost of servicing customers while enabling personalization at an individua…
S28
AI reshapes banking jobs, personalised service through avatars? — A recentreport from Citigrouppredicts a significant rise in banking profits, driven by the adoption of AI, with projecti…
S29
WS #288 An AI Policy Research Roadmap for Evidence-Based AI Policy — This comment introduces a critical counterpoint to the assumed benefits of global harmonization, highlighting power dyna…
S30
WS #97 Interoperability of AI Governance: Scope and Mechanism — Yik Chan Chin: Thank you, Olga. So, I speak on behalf of the PNAI because I’m the co-leader of the subgroup on the inte…
S31
Making the case for digital connectivity for MSME’s: How improved take up and usage of digital connectivity, in particular for ecommerce, supports development objectives (ITC) — Education tailored to the specific needs of SMEs is vital for small business success. The provision of knowledge and lit…
S32
AI for Safer Workplaces & Smarter Industries Transforming Risk into Real-Time Intelligence — Artificial intelligence | Social and economic development | Capacity development Speaker 4 envisions AI as a tool to qu…
S33
Leaders’ Plenary | Global Vision for AI Impact and Governance- Afternoon Session — “We also, along with my colleague Vinod, are large investors in Sarvam, which is providing sovereign AI capabilities to …
S34
AUDA-NEPAD White Paper: Regulation and Responsible Adoption of AI in Africa Towards Achievement of AU Agenda 2063 — Furthermore, most start-up companies in Africa do not have the in-house expertise to sufficiently appraise the existing …
S35
Strategy outline — – Absence of a governmental socio-economic vision that defines clear goals, absence of a visionary strate…
S36
Agentic AI in Focus Opportunities Risks and Governance — Arguments:Policy should focus on preventing harm to humans, emphasizing ‘humans before models’ Practical standards and o…
S37
Secure Finance Risk-Based AI Policy for the Banking Sector — “Yet, inclusion cannot be assumed”[73]. “If harnessed responsibly, AI can convert this expanding digital footprint into …
S38
How the Global South Is Accelerating AI Adoption_ Finance Sector Insights — AI-powered conversational interfaces, particularly voice-based systems supporting multiple Indian languages, could unloc…
S39
WS #288 An AI Policy Research Roadmap for Evidence-Based AI Policy — Virginia Dignam: Thank you very much, Isadora. No pressure, I see. You want me to say all kinds of things. I hope that i…
S40
The Impact of Digitalisation and AI on Employment Quality – Challenges and Opportunities — Mr. Sher Verick:Great. Well, thank you very much. It’s a real pleasure to be with you here today. I think Janine updated…
S41
Skilling and Education in AI — The Professor took a notably realistic turn in acknowledging that AI will inevitably create new forms of inequality, des…
S42
AI for Social Empowerment_ Driving Change and Inclusion — Inequality and broader socio‑economic effects She warns that AI is exacerbating inequality by increasing capital concen…
S43
Unveiling Trade Secrets: Exploring the Implications of trade agreements for AI Regulation in the Global South — The argument emphasises that risk is calculated by multiplying likelihood with impact. It further highlights the concern…
S44
How AI Drives Innovation and Economic Growth — Summary:The speakers show broad agreement on AI’s transformative potential for development but significant disagreements…
S45
Secure Finance Risk-Based AI Policy for the Banking Sector — Manchala takes a cautious approach to risk discussion, preferring not to elaborate on specific risks but emphasizing tha…
S46
Open Forum #27 Make Your AI Greener a Workshop on Sustainable AI Solutions — A key element of Italy’s strategy involves transitioning from large, energy-intensive models to what Nobile called “vert…
S47
Leading in the Digital Era: How can the Public Sector prepare for the AI age? — Shri Sushil Pal: Thank you, Professor Jalasi, and thank you, UNESCO, for inviting me here. I must commend UNESCO on the r…
S48
The Foundation of AI Democratizing Compute Data Infrastructure — Given the volume of funds available, I would focus a lot more on capability development of people to be able, their abil…
S49
From Innovation to Impact_ Bringing AI to the Public — Arguments:India must build a foundation model. This is no compromise statement. Not because that we can make a better fi…
S50
AI tools deployed to set tailored attendance goals for English schools — England will introduceAI-generated attendance targetsfor each school, setting tailored improvement baselines based on th…
S51
From summer disillusionment to autumn clarity: Ten lessons for AI — While many in society were transfixed by ‘AGI doomsday’ debates, AI started transforming the world, here and now. Existi…
S52
Education, Inclusion, Literacy: Musts for Positive AI Future | IGF 2023 Launch / Award Event #27 — The analysis highlights various important aspects relating to the impact of technology on education. Firstly, it emphasi…
S53
How AI Drives Innovation and Economic Growth — Central to Zutt’s analysis was the concept of “small AI”—practical, affordable, locally relevant applications that addre…
S54
Comprehensive Report: China’s AI Plus Economy Initiative – A Strategic Discussion on Artificial Intelligence Development and Implementation — This comment reveals how resource constraints can drive innovation rather than hinder it. It challenges the prevailing n…
S55
AI Algorithms and the Future of Global Diplomacy — Krishnakumar advocates for middle powers to focus on solving problems through applications rather than competing in expe…
S56
How AI Drives Innovation and Economic Growth — Zutt advocates for a focus on ‘small AI’ rather than large-scale AI solutions, emphasizing practical applications that c…
S57
AI: The Great Equaliser? — Additionally, the analysis highlights the need for a balanced approach to regulations and innovations. While it is impor…
S58
Comprehensive Report: Preventing Jobless Growth in the Age of AI — AI democratizes access to expertise and disproportionately benefits lower-skilled workers by providing them with capabil…
S59
From Innovation to Impact_ Bringing AI to the Public — So I see it not as a job reduction. I see it as opportunity for India to create a global AI-dominant nation… So a pers…
S60
IndoGerman AI Collaboration Driving Economic Development and Soc — Thank you so much, Anandi. Thank you, Anandi. Quite pervasive, it is being applied to almost all the sectors. And where …
S61
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Ananya Birla Birla AI Labs — “India’s journey from a $4 trillion economy to a $40 trillion economy in the arc that stretches from where we are today …
S62
AI Innovation in India — Evidence:He points to India’s demographic advantage with 1.4 billion people expected to grow to 1.6 billion by 2060, and…
S63
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Vijay Shekar Sharma Paytm — Building indigenous foundation models and sector‑specific LLMs Sharma stresses that India must create its own foundatio…
S64
Leading in the Digital Era: How can the Public Sector prepare for the AI age? — Shri Sushil Pal: Thank you, Professor Jalasi, and thank you, UNESCO, for inviting me here. I must commend UNESCO on the r…
S65
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Vivek Raghavan Sarvam AI — And these vision models are actually very good for document digitization. They’re very good at language layout understan…
S66
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Kiran Mazumdar-Shaw — She calls for India to create indigenous foundation models for proteins, RNA, cellular circuits and systems biology, bac…
S67
Conversational AI in low income & resource settings | IGF 2023 — Sameer Pujari:Thank you, Rajendra. And you rightly said all the member states are actually getting very excited about th…
S68
Press Conference: Closing the AI Access Gap — Moreover, the speakers argue that AI can drive productivity, creativity, and overall economic growth. It has the capacit…
S69
How the Global South Is Accelerating AI Adoption_ Finance Sector Insights — Mathur envisions AI dramatically reducing the cost of servicing customers while enabling personalization at an individua…
S70
How AI agents are quietly rebuilding the foundations of the global economy  — AI agents have rapidly moved from niche research concepts to one of the most discussed technology topics of 2025. Search…
S71
How AI Drives Innovation and Economic Growth — The tone was notably optimistic yet pragmatic, described as representing “hope” rather than the “fear” that characterize…
S72
From Innovation to Impact_ Bringing AI to the Public — Whilst maintaining an optimistic outlook, the discussion acknowledges important limitations and risks. Sharma emphasises…
S73
AI: The Great Equaliser? — Another key point highlighted is the need for good governance to effectively manage the risks associated with AI. The ri…
S74
Lightning Talk #155 Ethical Access to AI Therapists: Addressing Risks and Safeguard — Doris Magiri: …
S75
Sandboxes for Data Governance: Global Responsible Innovation | IGF 2023 WS #279 — During an online discussion on regulatory sandboxes, the participants emphasized the importance of learning from experie…
S76
AI-Powered Chips and Skills Shaping Indias Next-Gen Workforce — This comment was prophetic in highlighting how technological disruption (like AI automating coding) can make narrow skil…
S77
Comprehensive Discussion Report: The Future of Artificial General Intelligence — Economic | Future of work | Online education Hassabis advises students to become extremely proficient with AI tools, ar…
S78
Making the case for digital connectivity for MSME’s: How improved take up and usage of digital connectivity, in particular for ecommerce, supports development objectives (ITC) — Collaboration with governments helps in providing suitable frameworks and tools for small businesses Education tailored…
S79
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Matthew Prince Cloudflare — Prince warns that small businesses face existential threats from AI-driven commerce because AI agents won’t prioritize p…
S80
Panel Discussion AI & Cybersecurity _ India AI Impact Summit — Sri S. Krishnan, Secretary, Ministry of Electronics and IT, my dear friend, Professor Ravindran, Excellencies, distingui…
S81
The Global Power Shift India’s Rise in AI & Semiconductors — This panel discussion focused on India’s strategic positioning in artificial intelligence and semiconductor technologies…
S82
Panel Discussion AI & Cybersecurity _ India AI Impact Summit — Sri S. Krishnan, Secretary, Ministry of Electronics and IT, my dear friend, Professor Ravindran, Excellencies, distingui…
S83
Waves of infrastructure Open Systems Open Source Open Cloud — “AI is going to impact 95 % of work”[1]. “…in the next 5 to 10 years will be almost $2 trillion of spend”[2].
S84
AI-Powered Chips and Skills Shaping Indias Next-Gen Workforce — Minister Vaishnaw describes the rapid evolution of India’s semiconductor capabilities, moving from traditional design st…
S85
Optimism for AI – Leading with empathy — Nicholas Thompson frames the present as a pivotal moment where AI development could take fundamentally different paths b…
S86
AI/Gen AI for the Global Goals — Shea Gopaul: So thank you, Sanda. And like Sandra, I’d like to thank the African Union, as well as Global Compact. i…
S87
Driving Indias AI Future Growth Innovation and Impact — The Minister emphasized that effective public-private partnerships should prioritize people as the most important ‘P’ in…
S88
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Vivek Raghavan Sarvam AI — Perhaps most remarkably, Raghavan emphasized that Sarvam’s world-class models were developed by a team of just “15 young…
S89
Designing Indias Digital Future AI at the Core 6G at the Edge — Audience raises the issue of using Indian data for model training to ensure cultural relevance and avoid bias.
S90
From KW to GW Scaling the Infrastructure of the Global AI Economy — He points out his involvement in designing large‑scale, gigawatt‑level data centers, underscoring India’s growing capaci…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Vijay Shekhar Sharma
6 arguments · 188 words per minute · 7,835 words · 2,490 seconds
Argument 1
AI as productivity driver and GDP booster
EXPLANATION
Sharma argues that AI will dramatically increase productivity for Indian businesses, allowing small entrepreneurs to scale up and run multiple outlets. This boost in per‑person productivity will translate into higher GDP growth for the country.
EVIDENCE
He explains that AI-first products can raise productivity, enabling a shopkeeper to run multiple shops and thereby lift GDP growth through higher output per person [9-12]. He also notes India’s current $2.5-3.5 trillion economy and projects compound growth to $4 trillion over the next 7-10 years, creating massive opportunities for wealth creation [12].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Sharma’s claim that AI boosts per-person productivity and GDP is supported by observations that AI can let a single individual run multiple businesses, driving growth [S8], and by broader analyses that AI is a game-changer for economic growth in emerging markets [S9]. India’s potential to add $2 trillion via AI adoption also underscores this link [S6].
MAJOR DISCUSSION POINT
Productivity and growth
AGREED WITH
Harinder Takhar
Argument 2
India must create its own foundation models to move up the value chain and reduce cultural bias
EXPLANATION
Sharma contends that India needs indigenous foundation models to shift from a services‑driven economy to higher‑value AI capabilities and to embed Indian cultural knowledge, thereby avoiding reliance on foreign models that may carry biases.
EVIDENCE
He states that building a foundation model is essential for India to move up the value chain and to preserve cultural nuance, citing the need for models trained on Indian data to avoid bias and to reflect Indian heritage [34-50][53-56]. Later he explains that international models inherit Western bias, while an Indian-built model can prioritize Indian knowledge and reduce bias [199-213].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Multiple sources stress the strategic need for indigenous foundation models: Sharma’s own remarks about moving beyond a services-driven economy and avoiding Western bias are echoed in calls for India-built large language models tailored to Global South challenges [S5], and in explicit statements that building a foundation model is essential for cultural nuance and value-chain advancement [S6, S8].
MAJOR DISCUSSION POINT
Indigenous AI models
Argument 3
AI can remove bias in financial decisions, broaden financial inclusion, and improve healthcare outcomes
EXPLANATION
Sharma describes how AI can eliminate known and unknown biases in loan approvals and transaction screening, and how AI‑driven tools can extend personalized financial advice and health monitoring to underserved users.
EVIDENCE
He gives the example of AI detecting biased loan decisions and removing human subjectivity in transaction approval [98-104]. He also outlines AI-enabled financial advice for an auto-rickshaw driver, suggesting suitable investment options in native language [108-119]. Additionally, he shares a personal health-monitoring story where AI helped adjust his mother’s medication timing, improving outcomes [170-184].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Evidence that AI can mitigate credit-decision bias and expand inclusion appears in discussions of AI handling corner cases in lending [S4] and in policy papers emphasizing AI-driven fair financial services [S10]. International perspectives note AI’s role in widening financial inclusion through multilingual, voice-based banking [S13] and inclusive design that tackles bias [S14]. Sharma’s health-monitoring example aligns with broader AI-for-good narratives [S8].
MAJOR DISCUSSION POINT
Bias removal and inclusion
AGREED WITH
Harinder Takhar
Argument 4
The future will be agent‑first; AI agents will communicate with each other to deliver seamless services
EXPLANATION
Sharma envisions a paradigm where AI agents act on behalf of users, interacting directly with the agents of other services (e.g., Uber) without human login steps, creating an autonomous service layer.
EVIDENCE
He describes agents talking to agents, using the Uber example where an AI agent would request a ride without human authentication, likening agents to ambassadors delivering messages [221-235]. Earlier, he likens agents to “intelligence engines” that can be repurposed for various vehicles [217-224].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Sharma’s vision of agent‑first interfaces is corroborated by multiple mentions of autonomous agents talking to agents and eliminating human login steps [S6, S8]. Further discussion of agent interactions in platforms like Uber and the need for accountability when agents act on behalf of users is highlighted in industry commentary [S19, S20].
MAJOR DISCUSSION POINT
Agent‑first interaction
Argument 5
Education must evolve to teach AI tool usage; curiosity and AI access empower tier‑3/4 students
EXPLANATION
Sharma argues that traditional curricula focused on programming will become obsolete; instead, students should learn to leverage AI tools to augment their existing knowledge, fostering curiosity and relevance regardless of location.
EVIDENCE
He notes the shift from teaching programming to using AI as an extension of education, urging tier-3/4 students to ask AI questions and harness its power for learning and productivity [261-270][508-515].
MAJOR DISCUSSION POINT
AI‑enabled education
Argument 6
Risks include granting AI full control over payments; safeguards and public‑good distribution are essential
EXPLANATION
Sharma warns that allowing AI unrestricted access to bank accounts could be dangerous, advocating for limits on control and emphasizing the need for public‑good distribution of AI technology.
EVIDENCE
He explicitly states that AI should never have full control over a user’s bank account because mistakes could be catastrophic, likening it to not giving a standing order to a robot [427-433].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Risk-focused literature warns that unrestricted AI access to financial accounts can be catastrophic, urging safeguards and public-good distribution [S10]. A CTO’s emphasis on platform accountability when agents act on payments underscores the same concern [S20], and earlier remarks about AI handling credit decisions highlight the need for careful controls [S4].
MAJOR DISCUSSION POINT
Payment security risk
AGREED WITH
Harinder Takhar
Harinder Takhar
5 arguments · 185 words per minute · 269 words · 87 seconds
Argument 1
AI enables personalized services, expanding economic inclusion
EXPLANATION
Takhar highlights that AI will allow individuals to receive highly personalized financial, health, and educational services, thereby deepening economic inclusion across the population.
EVIDENCE
He remarks that AI gives “more access and more personalized access” similar to a personal doctor or teacher, creating a radical societal impact [165-169].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Takhar’s point is backed by sector analyses showing AI delivering multilingual, personalized banking and health services that deepen inclusion [S13], as well as broader commentary on AI providing more personalized access across finance, health, and education [S8, S14].
MAJOR DISCUSSION POINT
Personalized inclusion
AGREED WITH
Vijay Shekhar Sharma
Argument 2
Size of investment is less critical than having viable business models and skilled teams
EXPLANATION
Takhar questions the emphasis on massive capital, arguing that the decisive factor is whether there are sustainable business models and skilled personnel to develop AI solutions.
EVIDENCE
He cites the common claim that only those with ₹10,000 crore can compete, then counters that the real question is about business models and skill, not the amount of money [58-61][62-70].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Panel discussions highlight that sustainable business models and talent outweigh sheer capital in AI success [S17], and experts stress focusing on capability development over large funding pools [S16].
MAJOR DISCUSSION POINT
Investment vs. business model
AGREED WITH
Vijay Shekhar Sharma
DISAGREED WITH
Vijay Shekhar Sharma
Argument 3
AI provides highly personalized access across sectors, creating a radical societal impact
EXPLANATION
Takhar reiterates that AI’s ability to deliver services in native languages and tailored formats will transform how people interact with finance, health, and education, amplifying inclusion.
EVIDENCE
He again emphasizes the personalized access theme, noting that AI can act like a personal doctor or teacher, delivering customized experiences [165-169].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The transformative, cross-sector impact of AI-driven personalization is documented in reports on AI’s role in financial inclusion [S13], personalized services across domains [S8], and broader AI-for-social-good narratives [S18].
MAJOR DISCUSSION POINT
Cross‑sector personalization
Argument 4
Questions about agent authentication, ad exposure, and integration with existing platforms
EXPLANATION
Takhar raises practical concerns about how AI agents will authenticate, whether they will see ads, and how they will integrate with current services like Uber.
EVIDENCE
He asks whether an agent will see ads, need to log in with Google, and how other stacks will change, prompting Sharma’s response about agent-to-agent communication [223-230].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Audience queries about whether agents see ads or need logins mirror Sharma’s own discussion of agent-to-agent communication and practical integration challenges [S8]. Further insights on how agents interact with platforms and the need for accountability are provided in industry reflections on autonomous agents [S19, S20].
MAJOR DISCUSSION POINT
Agent practicalities
Argument 5
Schools retain their social role; AI enhances learning but does not replace the institution
EXPLANATION
Takhar stresses that schools are valuable for social interaction and networking, and while AI can improve teaching methods, it will not eliminate the core social function of educational institutions.
EVIDENCE
He acknowledges that schools provide a social experience beyond classroom learning, noting that this aspect will continue even as AI changes pedagogy [162-169].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Comments affirming the enduring social function of schools alongside AI-enhanced pedagogy are echoed in observations that schools provide essential social experiences even as AI changes delivery methods [S8].
MAJOR DISCUSSION POINT
School social value
AGREED WITH
Vijay Shekhar Sharma
Audience
7 arguments · 174 words per minute · 1,285 words · 441 seconds
Argument 1
Institutions like banks and schools will evolve rather than disappear
EXPLANATION
Audience members question whether banks and schools will become obsolete, and the discussion concludes that these institutions will adapt rather than vanish, maintaining core functions while integrating AI.
EVIDENCE
Audience asks if banks and schools will become redundant, citing Paytm and modern education concerns [125-131][151-160]; Sharma responds that banks’ core credit-deposit function and schools’ social role will persist, even as interfaces change [137-151].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Discussions about banks maintaining core credit-deposit functions while integrating AI, and schools preserving social roles, align with remarks on institutional evolution and accountability in AI-enabled finance [S20] and education [S8].
MAJOR DISCUSSION POINT
Institutional evolution
Argument 2
Emphasis on developing multiple vertical‑specific models instead of a single massive model
EXPLANATION
The audience stresses that AI development should focus on vertical problem‑solving models rather than a single, huge general‑purpose model, to better address domain‑specific needs.
EVIDENCE
Audience members argue that models should solve specific vertical problems such as finance, agriculture, and health, rather than being only question-answer chatbots, and they call for many domain-focused models [80-87].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Calls for domain-specific, niche AI models rather than monolithic systems are supported by expert recommendations to prioritize small, vertical solutions for effective impact [S16].
MAJOR DISCUSSION POINT
Vertical model focus
Argument 3
Concrete vertical use cases: fraud detection, farmer data analytics, animal health monitoring, AI‑native brokerage platforms
EXPLANATION
Audience participants provide concrete examples of AI applications across sectors, illustrating how AI can detect financial fraud, analyze agricultural data, monitor animal health, and power AI‑first stock‑brokerage platforms.
EVIDENCE
They list use cases such as risk-fraud models in finance, visual data analytics for farmers, health monitoring for cows, and AI-native brokerage platforms as examples of vertical solutions [85-89].
MAJOR DISCUSSION POINT
Vertical use‑case examples
Argument 4
Exploration of AI‑native stock‑brokerage and broader economic integration of autonomous agents
EXPLANATION
The audience inquires whether the stock‑brokerage industry will be built from the ground up as AI‑native, with agents serving as primary interfaces, indicating a shift toward autonomous financial services.
EVIDENCE
A participant asks directly if the future of stock brokerage will be AI-native, with agents as the main interface [220].
MAJOR DISCUSSION POINT
AI‑native brokerage
Argument 5
Concerns about the relevance of the current industrial‑era education system in an AI‑driven world
EXPLANATION
Audience members question whether the existing education system, designed for the industrial era, remains suitable in an AI‑centric future, suggesting a need for systemic reform.
EVIDENCE
They point out that the education system was built for the industrial era and ask if it should continue unchanged in the AI age, referencing the need for new approaches [404-421].
MAJOR DISCUSSION POINT
Education system relevance
Argument 6
Debate on whether AI will widen or narrow inequality, with emphasis on inclusive design
EXPLANATION
The audience discusses AI’s potential impact on inequality, with some arguing it could be a great equalizer by providing low‑cost tools, while others warn of concentration of power, emphasizing the need for inclusive design.
EVIDENCE
Participants argue that AI is an inclusive technology that can reduce gaps between rich and poor, but also note that the rich may seek to keep AI exclusive, highlighting the tension between inclusion and concentration [339-349].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The tension between AI as an equalizer and the risk of concentration of power is discussed in inclusive AI literature that stresses design choices to prevent widening gaps [S14], alongside policy notes that inclusion cannot be assumed and must be deliberately engineered [S10, S13].
MAJOR DISCUSSION POINT
AI and inequality
Argument 7
Training data bias is a risk, yet AI can help identify and balance such biases
EXPLANATION
Takhar (speaking from the audience) notes that while AI models can inherit bias from training data, they also possess the capability to surface and correct those biases, offering a path toward more equitable outcomes.
EVIDENCE
He mentions the strong risk of bias from inputs, but also that the model can learn about the user and reason through its inputs to provide balanced results [215-216].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Risk‑based AI policy documents acknowledge bias in training data while highlighting AI’s capacity to surface and correct such biases, emphasizing inclusive design principles [S10, S14].
MAJOR DISCUSSION POINT
Bias mitigation
Agreements
Agreement Points
AI will dramatically boost productivity, GDP growth and broaden economic inclusion
Speakers: Vijay Shekhar Sharma, Harinder Takhar
AI as productivity driver and GDP booster · AI enables personalized services, expanding economic inclusion
Both speakers argue that AI will raise per-person productivity, allowing small entrepreneurs to scale (e.g., a shopkeeper running multiple shops) and thereby lift GDP, while also delivering highly personalised services that deepen inclusion across finance, health and education [9-12][12][108-119][165-169].
POLICY CONTEXT (KNOWLEDGE BASE)
This view aligns with broad consensus that AI drives economic growth and inclusion, as highlighted in discussions on AI’s transformative potential for development [S44] and analyses noting AI’s capacity to democratize access and reduce inequality [S58].
Viable business models and skilled teams matter more than massive capital outlays for AI development
Speakers: Vijay Shekhar Sharma, Harinder Takhar
Size of investment is less critical than having viable business models and skilled teams / Risks include granting AI full control over payments; safeguards and public‑good distribution are essential
Sharma stresses that the key question is whether there is a sustainable business model and the right talent, not the amount of money invested [64-66], while Takhar challenges the notion that only huge funding enables AI, emphasizing model viability and skill over capital [58-61][62-70].
POLICY CONTEXT (KNOWLEDGE BASE)
Policy discussions emphasize talent and business model focus over sheer capital, echoing calls for capability development and domain-specific models rather than large compute investments [S48] and recommendations for middle powers to prioritize application-oriented AI over expensive frontier models [S55].
Educational institutions will persist for their social role; AI will augment but not replace schools
Speakers: Vijay Shekhar Sharma, Harinder Takhar
Schools retain their social role; AI enhances learning but does not replace the institution
Both agree that schools are not redundant because they provide essential social interaction and networking, even as AI changes teaching methods and delivery [151-156][160-166][162-169].
POLICY CONTEXT (KNOWLEDGE BASE)
Education-focused AI policy stresses that schools remain essential social institutions while AI serves as an augmenting tool, reflected in analyses of AI’s role in education during the pandemic and emerging AI-driven attendance systems [S52][S50].
AI can mitigate bias in financial decisions and expand financial inclusion
Speakers: Vijay Shekhar Sharma, Harinder Takhar
AI can remove bias in financial decisions, broaden financial inclusion, and improve healthcare outcomes / AI enables personalized services, expanding economic inclusion
Sharma gives concrete examples of AI eliminating known and unknown biases in loan approvals and offering personalised financial advice to low-income users [98-104][108-119], while Takhar highlights AI’s capacity to deliver personalised, inclusive services across sectors [165-169].
POLICY CONTEXT (KNOWLEDGE BASE)
Financial sector guidelines highlight AI’s potential to broaden fair access to services when responsibly governed, noting that AI can turn digital footprints into inclusive finance while addressing bias risks [S37][S38].
Similar Viewpoints
Both see AI as a catalyst for higher productivity and broader inclusion, enabling small businesses to scale and delivering tailored services that reach underserved populations [9-12][108-119][165-169].
Speakers: Vijay Shekhar Sharma, Harinder Takhar
AI as productivity driver and GDP booster / AI enables personalized services, expanding economic inclusion
Both argue that the decisive factor for AI success is a sustainable business model and talent, not the sheer size of funding [64-66][58-61][62-70].
Speakers: Vijay Shekhar Sharma, Harinder Takhar
Size of investment is less critical than having viable business models and skilled teams
Both maintain that schools will continue to exist for their social and networking functions, even as AI transforms pedagogical methods [151-156][160-166][162-169].
Speakers: Vijay Shekhar Sharma, Harinder Takhar
Schools retain their social role; AI enhances learning but does not replace the institution
Both see AI as a tool to reduce bias and deliver personalised, inclusive services in finance and health, thereby advancing inclusion [98-104][108-119][165-169].
Speakers: Vijay Shekhar Sharma, Harinder Takhar
AI can remove bias in financial decisions, broaden financial inclusion, and improve healthcare outcomes / AI enables personalized services, expanding economic inclusion
Unexpected Consensus
Capital intensity vs business model focus for AI ventures
Speakers: Vijay Shekhar Sharma, Harinder Takhar
Size of investment is less critical than having viable business models and skilled teams
While many narratives stress multi-billion-dollar funding as essential for AI, both speakers converge on the view that sustainable models and talent are the real drivers, a stance that runs counter to common expectations about AI financing [64-66][58-61][62-70].
POLICY CONTEXT (KNOWLEDGE BASE)
Debates on AI venture strategy cite policy advice that prioritises sustainable business models and skilled teams over capital-heavy model training, mirroring recommendations for small, domain-specific AI and avoiding over-investment in large foundation models [S48][S55].
Persistence of schools despite AI disruption
Speakers: Vijay Shekhar Sharma, Harinder Takhar
Schools retain their social role; AI enhances learning but does not replace the institution
Despite widespread speculation that AI could render traditional education obsolete, both speakers affirm that schools will continue to exist for their social and networking functions, an unexpected alignment given the hype around AI-only education models [151-156][160-166][162-169].
POLICY CONTEXT (KNOWLEDGE BASE)
International education forums reaffirm that schools will continue to play a core societal role, with AI tools designed to support rather than replace traditional schooling structures [S52][S50].
AI as an equaliser rather than a source of new inequality
Speakers: Vijay Shekhar Sharma, Harinder Takhar
AI can remove bias in financial decisions, broaden financial inclusion, and improve healthcare outcomes / AI enables personalized services, expanding economic inclusion
Both speakers portray AI primarily as an inclusive technology that can narrow gaps, whereas many debates focus on AI exacerbating inequality; this shared optimistic view is therefore somewhat unexpected [98-104][108-119][165-169].
POLICY CONTEXT (KNOWLEDGE BASE)
Several policy papers frame AI as a potential equaliser that can democratise expertise and reduce inequality, while also calling for balanced regulation to safeguard this outcome [S57][S58].
Overall Assessment

The discussion shows strong convergence among the speakers on AI’s potential to drive productivity, economic growth and inclusive services, the primacy of viable business models and skilled talent over sheer capital, and the continued relevance of institutions like schools and banks even as AI reshapes their interfaces. There is also shared optimism that AI can mitigate bias and reduce inequality.

High consensus on the strategic role of AI for growth and inclusion, moderate consensus on risk mitigation and institutional evolution. This suggests policy focus should prioritize enabling environments for AI adoption, support for SME‑centric models, investment in AI‑augmented education, and safeguards to ensure inclusive, bias‑aware deployment.

Differences
Different Viewpoints
Scale of investment versus focus on business models and talent
Speakers: Harinder Takhar, Vijay Shekhar Sharma
Size of investment is less critical than having viable business models and skilled teams / Risks and capital requirements for AI development; need for substantial funding
Harinder argues that the emphasis on having ₹10,000 crore (or more) to compete in AI is misplaced and that the real question is whether sustainable business models and skilled teams exist [58-61][62-70]. Vijay acknowledges large capital deployments (e.g., ₹10,000-₹25,000 crore for Paytm QR) and agrees that the business model is the key question, yet maintains that substantial funding is still needed to build models and infrastructure [62-64][64-70]. The two speakers therefore disagree on the relative importance of massive capital versus business-model/skill focus.
POLICY CONTEXT (KNOWLEDGE BASE)
The tension mirrors policy guidance that warns against over-reliance on massive funding, advocating instead for talent development and viable business models as the primary drivers of AI impact [S48][S55].
Strategic focus on AI model development – foundation models vs vertical domain‑specific models
Speakers: Harinder Takhar, Vijay Shekhar Sharma
Emphasis on building many vertical‑specific models to solve concrete industry problems / Need to build an indigenous foundation model to move up the value chain and embed Indian cultural knowledge
Harinder (through audience questions) pushes for building multiple vertical models tailored to finance, agriculture, health, etc., rather than a single massive foundation model [84-87][85-89]. Vijay counters that India must create its own foundation model to shift from a services-driven economy, reduce cultural bias, and prove capability, insisting on a strategic national effort on foundation models [34-50][53-56][199-213]. This reflects a disagreement on the primary development pathway for AI in India.
POLICY CONTEXT (KNOWLEDGE BASE)
Governments and industry bodies debate between large foundation models and vertical, agile models, with examples from Italy’s shift to sector-specific foundations [S46] and India’s call for both national foundation and domain models [S49][S48].
Impact of AI on inequality – equaliser vs potential concentration of power
Speakers: Harinder Takhar, Vijay Shekhar Sharma
AI could widen inequality if power concentrates in few hands; risk of bias from inputs / AI is an inclusive technology that will reduce the gap between rich and poor
Harinder highlights the risk that AI may exacerbate inequality through bias and concentration of power, noting strong bias risk from training inputs [215-216] and questioning whether AI will truly be inclusive. Vijay argues that AI is a super-power that will level the playing field, making sophisticated tools accessible to everyone and thereby narrowing rich-poor gaps [339-349]. The speakers thus diverge on whether AI will be a net equaliser or a risk for greater disparity.
POLICY CONTEXT (KNOWLEDGE BASE)
The discourse reflects contrasting policy analyses: some warn AI may concentrate capital and widen gaps [S42], while others argue it can act as an equaliser if governed responsibly [S57][S58].
Unexpected Differences
Perception of AI‑related risk – Vijay downplays risk while Harinder emphasizes bias and concentration
Speakers: Vijay Shekhar Sharma, Harinder Takhar
AI risk is minimal and comparable to everyday activities; focus on distribution as public good / Strong risk of bias from inputs; need for careful governance
Both speakers are generally pro-AI, yet they diverge sharply on risk assessment. Vijay treats AI risk as comparable to crossing a road, suggesting it is low and manageable [353-357], and stresses public-good distribution [355-368]. Harinder, however, points out a “very strong risk of bias from the inputs” and stresses the need for balanced, inclusive design [215-216]. This contrast was not anticipated given their overall supportive stance toward AI.
POLICY CONTEXT (KNOWLEDGE BASE)
Risk-focused AI policy literature stresses the need for harm-prevention frameworks, explicit governance of bias, and systematic risk assessment, as outlined in risk-based AI guidelines for finance and broader AI governance recommendations [S36][S45][S43].
Overall Assessment

The discussion shows limited outright conflict but several substantive divergences: (1) the relative importance of massive capital versus business‑model and talent focus; (2) whether India should prioritize a national foundation model or a suite of vertical models; (3) the net effect of AI on inequality, with one side viewing it as a great equaliser and the other warning of bias and concentration. While both speakers share a common vision of AI as a growth and inclusion driver, they differ on strategic pathways and risk perception.

Moderate – disagreements are focused on strategic emphasis and risk framing rather than fundamental opposition to AI. These differences suggest that policy and investment decisions will need to reconcile capital‑driven ambitions with model‑development strategies and incorporate robust governance to address bias concerns, influencing how India’s AI agenda is shaped.

Partial Agreements
Both agree that AI is a catalyst for higher productivity and inclusion – Vijay emphasises AI‑first products raising per‑person productivity and GDP growth [9-12]; Harinder stresses AI’s ability to provide highly personalised access across finance, health and education, creating radical societal impact [165-169]. However, they differ on the primary mechanism: Vijay focuses on national‑scale foundation models and macro‑economic effects, while Harinder foregrounds business‑model viability and sector‑specific personalization.
Speakers: Vijay Shekhar Sharma, Harinder Takhar
AI will boost productivity and economic growth in India / AI will enable personalized services and deepen inclusion
Both concur that banks and schools will not become obsolete – Vijay argues that core banking functions (credit, deposits) will persist even as interfaces change [137-151]; Harinder affirms the social value of schools beyond classroom learning [162-169]. Their agreement lies in preserving institutional core functions while acknowledging AI‑driven transformation.
Speakers: Vijay Shekhar Sharma, Harinder Takhar
Institutions like banks and schools will evolve rather than disappear / Schools retain social role despite AI changes
Takeaways
Key takeaways
AI is seen as a major productivity driver that can significantly boost India’s GDP and position the country as a global AI leader. India must develop its own foundation models to move up the value chain, reduce cultural and linguistic bias, and demonstrate indigenous capability. The focus should be on creating multiple vertical‑specific models (finance, agriculture, healthcare, etc.) rather than a single massive model. AI can remove bias in financial decisions, expand financial inclusion, improve healthcare outcomes, and provide highly personalized services across sectors. Future user interaction will shift to an agent‑first paradigm where AI agents communicate with each other and with platforms, enabling seamless services. Traditional institutions such as banks and schools will evolve and be augmented by AI rather than become obsolete. Education systems need to adapt to teach AI tool usage, foster curiosity, and make AI accessible to tier‑3/4 students and non‑technical professionals. Risks include granting AI full control over payments, data bias, and potential inequality; safeguards and public‑good distribution are essential. Inclusive design of AI can act as a leveler, reducing the gap between rich and poor if deployed responsibly.
Resolutions and action items
Commit to building indigenous Indian foundation models, with multiple teams launching their own models (e.g., Sarvam’s model). Prioritize development of vertical‑specific AI models for finance, agriculture, healthcare, and other sectors. Collaborate with industry stakeholders and regulators to obtain data for training domain‑specific LLMs while respecting compliance. Diffuse AI capabilities to small and micro merchants through targeted AI tools and platforms. Develop and distribute low‑cost AI hardware (e.g., AI sound box) to ensure wide access. Encourage developers to design agent‑first, non‑icon based interfaces for new applications. Promote AI literacy and tool usage among tier‑3/4 students and non‑technical workers as a core skill.
Unresolved issues
Specific business models and funding mechanisms for sustaining Indian foundation models remain undefined. Detailed framework for handling regulated data and ensuring compliance in industry‑specific LLM training is not settled. Concrete policies or technical safeguards to prevent AI from gaining full autonomous control over payment accounts are lacking. Whether stock‑brokerage platforms will become fully AI‑native or retain AI as a feature was not conclusively answered. Exact curriculum or step‑by‑step strategy for tier‑3/4 students to become proficient in AI was not provided. Mechanisms for AI agent authentication, ad exposure, and interaction with existing platforms need further clarification. Implementation plan for making AI a public good and addressing distribution equity remains open.
Suggested compromises
Emphasize skill development and viable business models over the magnitude of financial investment for AI projects. Support the creation of many foundation models rather than a single dominant model to avoid monopoly concerns. Position AI as an augmenting layer for banks and schools, preserving their core functions while enhancing services. Balance the pursuit of AI‑driven efficiency with safeguards that prevent full autonomous control over critical financial operations.
Thought Provoking Comments
India has to build a foundation model – not because we can make a better financial model, but because we must move up from a services‑only culture and prove we can create world‑class AI.
It reframes AI from a technical curiosity to a strategic national imperative, linking it to India’s economic transition from low‑value services to high‑value knowledge creation.
Triggered a series of follow‑up questions about whether India should develop its own chips, models, and applications; shifted the conversation from generic AI hype to concrete nation‑building priorities.
Speaker: Vijay Shekhar Sharma
Problems don’t disappear, they just shift – like the road‑block moving from Nehru Place to the airport.
Uses a vivid metaphor to explain how AI removes existing bottlenecks but creates new ones elsewhere, highlighting the dynamic nature of productivity gains.
Prompted participants to think about secondary effects of AI adoption, leading to discussions on where new challenges will emerge (e.g., data bias, regulatory hurdles).
Speaker: Vijay Shekhar Sharma
Should we all make a foundation model in India? Should we make our own chips? Should we make our own applications?
Poses three concrete strategic options, forcing the panel to prioritize between foundational research, hardware, and downstream products.
Served as a turning point that moved the dialogue from abstract benefits to concrete ecosystem decisions, prompting Vijay’s detailed response on foundation models and later discussions on vertical vs. horizontal use‑cases.
Speaker: Harinder Takhar
Models are not just Q&A; they can reason, solve problems, and act as agents.
Expands the definition of LLMs beyond chatbots, introducing the concept of agency and autonomous decision‑making.
Steered the conversation toward building “agent‑first” interfaces and vertical, purpose‑built models, influencing Vijay’s later analogies about engines and vehicles.
Speaker: Audience member
AI can remove bias in financial decisions – for example, a loan officer’s subjective bias disappears when a machine evaluates the application.
Highlights a concrete, socially impactful use‑case where AI improves fairness, linking technology to inclusive growth.
Led to audience validation of the idea, deepening the discussion on financial inclusion and prompting Vijay to illustrate further with personal anecdotes about credit advice for low‑income users.
Speaker: Vijay Shekhar Sharma
AI will enable the next level of financial inclusion: an auto‑rickshaw driver with ₹2‑5 lakh could get personalized investment advice in his native language via a chatbot.
Provides a relatable, ground‑level scenario that demonstrates how AI can democratize sophisticated financial planning.
Shifted the tone from macro‑economic speculation to everyday user experience, encouraging other participants to envision sector‑specific AI products (agri, health, etc.).
Speaker: Vijay Shekhar Sharma
Banks and schools will not become redundant; they provide trust, social interaction, and regulatory functions that a pure app cannot replace.
Challenges the common narrative that AI will eliminate traditional institutions, offering a nuanced view of technology as an augment rather than a replacement.
Balanced the earlier optimism, prompting a broader debate on what aspects of legacy institutions are irreplaceable and how AI can complement them.
Speaker: Vijay Shekhar Sharma
We should move to agent‑first interfaces where an AI agent talks to another agent (e.g., Uber’s agent) instead of a human logging in each time.
Introduces a futuristic interaction model that reimagines authentication, commerce, and service delivery as machine‑to‑machine conversations.
Generated excitement and speculation about new business models, influencing later audience questions about AI‑native brokerage platforms and the need for non‑icon based app designs.
Speaker: Vijay Shekhar Sharma
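The agent-first idea summarised above is an architectural pattern rather than a shipped product. The minimal Python sketch below illustrates the machine-to-machine shape of it: a user's agent carries a structured intent to a service's agent instead of the human logging in. All class, method and parameter names here are hypothetical, and the ride service is only a stand-in for a platform like Uber.

```python
from dataclasses import dataclass

@dataclass
class Intent:
    """A structured request one agent sends to another."""
    action: str
    params: dict

class ServiceAgent:
    """Stand-in for a service's agent endpoint (e.g., a ride platform)."""
    def handle(self, intent: Intent, caller: str) -> dict:
        if intent.action == "book_ride":
            # The service agent fulfils the intent without a human log-in.
            return {"status": "confirmed",
                    "pickup": intent.params["pickup"],
                    "for": caller}
        return {"status": "unsupported"}

class UserAgent:
    """Acts on the user's behalf, talking machine-to-machine."""
    def __init__(self, name: str):
        self.name = name

    def request(self, service: ServiceAgent, intent: Intent) -> dict:
        return service.handle(intent, caller=self.name)

# The human states a goal once; the agents negotiate the rest.
result = UserAgent("rider-1").request(
    ServiceAgent(), Intent("book_ride", {"pickup": "Nehru Place"}))
```

In a real deployment, the open questions flagged later in the session, such as agent authentication and ad exposure, would sit inside this handle exchange.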
AI is an inclusive super‑power that can reduce the gap between rich and poor because anyone can use it in their native language without needing high‑end hardware.
Posits AI as a leveling technology, directly addressing concerns about widening inequality.
Reinforced earlier points about financial inclusion, prompted audience to ask about risks and distribution, and shaped the overall optimistic narrative of the session.
Speaker: Vijay Shekhar Sharma
The future skill is not programming but learning how to use AI as an extension of whatever education you have; curiosity driven by AI will be the real super‑power.
Shifts the focus from traditional curriculum to AI‑augmented learning, offering guidance for students from tier‑3/4 backgrounds.
Closed the discussion with actionable advice, resonating with audience concerns about employability and prompting follow‑up questions on how under‑resourced students can succeed.
Speaker: Vijay Shekhar Sharma
Overall Assessment

The discussion was steered by Vijay Shekhar Sharma’s strategic framing of AI as a national growth engine and his concrete, relatable examples (financial bias removal, low‑income investment advice). Harinder’s probing question about building foundation models acted as a catalyst, moving the dialogue from abstract optimism to concrete ecosystem choices. Audience inputs that broadened the definition of models and highlighted agency introduced technical depth, while Vijay’s counter‑arguments about the enduring role of banks and schools balanced the hype. Collectively, these pivotal comments redirected the conversation toward actionable pathways—building indigenous models, focusing on vertical use‑cases, and re‑skilling the workforce—while maintaining an overarching narrative that AI can be an inclusive, inequality‑reducing force for India.

Follow-up Questions

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Keynote by Dr. Pramod Varma Co-founder & Chief Architect NFH India AI Impact Summit


Session at a glance: Summary, keypoints, and speakers overview

Summary

The session opened with Speaker 1 framing the discussion around the integration of artificial intelligence (AI) with India’s digital public infrastructure (DPI) and its potential impact on scale, opportunities and risks [1][94-99]. Pramod Varma highlighted that the recent event demonstrated a shift from an “elite, exclusive” gathering in Paris to a more inclusive audience of students, children and young entrepreneurs, signalling true democratization of AI [10-13]. He traced India’s advantage to a decade-long investment that brought a billion people into the formal system through universal identity (Aadhaar), bank accounts, paperless signatures and the Unified Payments Interface (UPI) [28-33]. Complementary programmes such as GST invoicing, FASTag tolling and the GST portal created billions of machine-readable, cryptographically signed records that constitute a “goldmine” of data [38-44]. All of these DPI components are exposed as APIs, making the underlying infrastructure programmable, composable and ready for AI-driven services [45]. Varma argued that when such verifiable data trails are owned by individuals under the DPDP privacy law, AI can leverage them to generate exponential economic gains, predicting that countries combining DPI with AI will outperform others by ten- to fifty-fold [46-50]. He also noted that India’s political will, regulatory support and technical readiness have converged in the past ten years, creating a fertile environment for AI diffusion [50]. The speaker praised the nation’s “young adventurous entrepreneurs” who are eager to tackle the myriad problems across energy, agriculture and other sectors [51-57]. He cited the growth of the startup ecosystem, from roughly 1,000 firms in 2016 to about 100,000 today, and projected one million startups by 2035, emphasizing that even unsuccessful attempts contribute to innovation [81-85].
Varma concluded that coupling DPI with AI through entrepreneurship will amplify problem-solving capacity and drive new product and service creation [80-85]. The subsequent panel is tasked with exploring how AI can be embedded in DPI, what opportunities and risks arise, and how the architecture can mitigate those risks [95-99]. Overall, the discussion positioned India’s digital infrastructure as a unique foundation that, when combined with AI, could unlock scalable benefits while requiring careful governance [45-50][95-99].


Keypoints

India’s digital public infrastructure (DPI) provides the foundation for AI diffusion.


The speaker highlights the nationwide rollout of Aadhaar, eSign, DigiLocker, UPI, GST invoicing, FASTag and other API-based services that have turned a billion people into “visible” participants with a verifiable, machine-readable data trail. [28-33][38-44][45-48]


Programmability and composability of DPI make AI especially powerful for India.


By combining AI’s two key ingredients, programmability and composability, with the country’s programmable, API-driven DPI, the speaker predicts that nations that layer AI on top of DPI could achieve 10-50× better economic outcomes. [49-50]


A youthful, risk-taking entrepreneurial ecosystem is seen as the engine for AI-driven problem solving.


The talk stresses India’s “young adventurous entrepreneurs” who are eager to launch startups to tackle the country’s many challenges (energy, agriculture, etc.), noting growth from 1,000 firms in 2016 to 100,000 today and a projection of one million startups by 2035. [51-55][80-84]


The upcoming panel will examine opportunities, risks, and new market ecosystems from integrating AI into DPI.


The moderator introduces the panel’s focus on how AI-enabled DPI can unlock scale, create new products and services, and address emerging risks. [94-99]


Overall purpose/goal:


The discussion aims to showcase India’s unique readiness-through extensive, programmable digital public infrastructure and a vibrant startup culture-to democratize and scale AI, and to set the stage for a deeper panel exploration of the benefits, challenges, and ecosystem opportunities that arise when AI is embedded in DPI.


Overall tone:


The speaker’s tone is upbeat, celebratory, and forward-looking, emphasizing “democratization,” “serendipity,” and “bold predictions.” It remains optimistic throughout, shifting near the end from a personal, enthusiastic keynote to a more formal hand-off to the panel, but the underlying positivity and call to action persist.


Speakers

Speaker 1


– Role/Title: Event moderator / host introducing speakers [S1][S3]


– Area of Expertise:


Pramod Varma


– Role/Title: Dr., Co-founder & Chief Architect, NFH India; Keynote speaker at AI Impact Summit [S4][S5]


– Area of Expertise: Artificial Intelligence, Digital Public Infrastructure, AI policy and implementation[S4]


Additional speakers:


– None


Full session report: Comprehensive analysis and detailed insights

Speaker 1 opened the session with a brief hand-off, introducing Pramod Varma as a prominent expert on the country’s infrastructure and signaling the start of his keynote [1-2].


Pramod Varma began by apologising for taking the audience’s time on a Friday evening and then congratulated the Government of India, the Ministry of Electronics and Information Technology (MeitY) and the Ministry of External Affairs (MEA) for their support [3-5].


He contrasted this gathering with the previous elite-only event in Paris, noting that today students, children and young entrepreneurs were present, which he described as a genuine democratisation of AI in India [6-9].


Varma highlighted strong political backing, characterising the Prime Minister as, in his view, a “mastermind” behind India’s AI diffusion efforts [10-11].


Moving beyond the hype around large language models, he stressed that AI’s relevance spans many domains and referenced his own master’s degree in AI earned in 1989, underscoring his authority on the subject [12-14].


He then traced India’s decade-long digital investment: Aadhaar came first, and 2014 marked a seminal year when he helped architect eSign, DigiLocker and the Unified Payments Interface (UPI) – foundational API-driven services that brought a billion people “from invisible to the system” [15-18]. Subsequent programmes such as GST invoicing, FASTag tolling and the GST portal generated billions of machine-readable, cryptographically signed records, while the GPI initiative provided a concrete inclusion story [19-24].


All DPI components are exposed as APIs, making the infrastructure inherently programmable and composable, a point reinforced by the external source on API-first design [S44].
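As an illustration of that programmability and composability claim, the sketch below chains three stand-in functions loosely modelled on the identity, document-verification and payment-mandate layers described in the keynote. The function names, input formats and checks are invented for illustration and are not actual DPI endpoints.

```python
def verify_identity(user_id: str) -> bool:
    # Stand-in for an Aadhaar-style identity verification call
    # (real Aadhaar numbers are 12 digits; the check here is a toy).
    return user_id.isdigit() and len(user_id) == 12

def verify_document(doc_id: str) -> bool:
    # Stand-in for a DigiLocker-style document verification call.
    return doc_id.startswith("DL-")

def create_mandate(user_id: str, amount: float) -> dict:
    # Stand-in for a UPI-style recurring-payment mandate.
    return {"user": user_id, "amount": amount, "status": "active"}

def onboard(user_id: str, doc_id: str, amount: float) -> dict:
    """Composable workflow: each API layer builds on the one below."""
    if not verify_identity(user_id):
        return {"status": "identity_failed"}
    if not verify_document(doc_id):
        return {"status": "document_failed"}
    return create_mandate(user_id, amount)

result = onboard("123456789012", "DL-987", 499.0)
```

The point of the composition is the one made in the keynote: because each layer is an API, a new workflow (here, onboarding with a mandate) can be assembled from existing infrastructure rather than built from scratch.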


Varma explained that the Digital Personal Data Protection (DPDP) Act returns data control to individuals and small businesses, ensuring that AI can operate on trustworthy, citizen-owned data – a right further detailed in the external privacy reference [25-27][S9].


He argued that the combination of programmability, composability and verifiable data trails could deliver economic gains ten to fifty times larger than in countries lacking such DPI foundations [28-30].


Turning to human capital, he noted that India enjoys access to capital, investment and the right products, and praised “young adventurous entrepreneurs” who are eager to tackle challenges in sectors such as energy and agriculture. He cited the rapid expansion of the startup ecosystem, from roughly 1,000 firms in 2016 to about 100,000 today, with a projection of one million startups by 2035, illustrating how even unsuccessful attempts fuel the nation’s innovative momentum [31-38].


He repeatedly urged the audience, especially young people, to make audacious, bold attempts at solving problems, emphasizing that bold experimentation is essential for progress [39-41].


Concluding his remarks, Varma thanked the audience and handed the discussion over to the panel, inviting continued imagination, building and problem-solving [42-44].


Speaker 1 then formally re-introduced the panel, outlining the focus on how AI-enhanced Digital Public Infrastructure can unlock large-scale benefits, create new market ecosystems, and what safeguards are required to manage emerging risks [45-48].


Session transcript: Complete transcript of the session
Speaker 1

…infrastructure in the country. He’s a prominent expert on open source, scalable digital systems and decentralized networks. It is now my honor to call upon Pramod to take the stage to give his keynote address. Thank you.

Pramod Varma

Friday evening can be really hard. It’s tiring right after a long week. So thank you for having me here and I don’t want to take up too much of your time. First of all, I want to congratulate Government of India, METI, MEA. What a fantastic week. And compared to last time in Paris, we heard actually from many people who attended that last time it was elite, exclusive people attending it. This is true democratization. You can see that number of students, children, entrepreneurs, young entrepreneurs walking in. It just tells you that… India can definitely demonstrate what it means to democratize and diffuse AI. And our prime minister is, I think, a mastermind at it. So he’s a great supporter of it.

But what I wanted to give you in about five minutes is why India is peculiarly advantaged in diffusing AI. Now, we have two arguments we can make. One is our own LLM. Much of today’s AI discussion is about sovereign LLMs, big LLMs: how are we going to build our own LLM? But the LLM is only one part of it. There is so much more to AI, especially for people who have lived through it; my master’s was in AI, back in ’89. AI has been around for a while, and I think now it’s all coming together. AI spans much beyond LLMs, and why India is peculiarly set up to succeed is partly serendipity, but mostly the investment we made in the last decade: digital investment.

And for people who have not looked at the macro picture, it’s very important to understand that India over the last decade brought a billion people from being invisible to the system to being visible to it. We formalized a billion people by giving everyone an identity and a bank account, so everyone can transact, make payments, and sign paperlessly. So we built Aadhaar, to begin with. Of course, Aadhaar came earlier, but I remember 2014 was seminal for us, because I was architecting eSign, DigiLocker and UPI at the same time. Who knew they were all going to play out? But I think brave people are also lucky: when we attempt something bold and audacious, sometimes luck comes our way, and Indians have truly embraced all of this at population scale.

And it did not stop there. We actually digitized businesses through GST. India is the only country with billions of invoices, actual proofs of purchase, in machine-readable, cryptographically protected, digitally designed form. That’s a goldmine. Or FASTag: when FASTag is used on the road, there’s a proof of transport, an e-way bill. Each of these is again machine-readable, cryptographically signed and usable by the next layer of innovation. So what we did with DPI by formalizing is one inclusion story, a brilliant inclusion story that got everyone into the formal system. But serendipitously it also set up the two most powerful ingredients for AI: data and programmability. Every one of our infrastructure components, our DPI components, is API-based. Every one of them. This is why we have PhonePe and so many others building applications and workflows on this underlying digital public infrastructure: identity APIs, verification APIs, DigiLocker document-verification APIs, eSign for paperless signatures, UPI and mandates for recurring payments and other collections. Each of them is programmable. Combine that with the data that gets generated: a billion-plus people in India creating a verifiable data trail.

And that’s beautiful. But it’s even more beautiful when that data is controlled and owned by the individuals, which is what our DPDP Act actually gives you. Our privacy bill gives us the right to control our own data. India has truly demonstrated that data belongs to the people and to small businesses, who can now use it to create a virtuous cycle. So AI’s two biggest ingredients, programmability and composability, combined with a verifiable data trail, allow India, and this is a bold prediction I’m making, that ten years from now, when you compare countries’ economic progress and growth, countries that invested in DPI and combined AI on top of DPI will have done 10x or 50x better than countries with no underlying infrastructure.

So I think India is lucky: right place, right political will, right regulatory push, right infrastructure readiness, all in the last decade, all in one decade. But my favorite part of all this is that India is also blessed with young, adventurous entrepreneurs. Entrepreneurs who have no inhibition at all. At least a few of you came to meet me outside saying, “I’m starting a company.” That is just music to our ears, because India’s problems are a plethora. As you know, we are a country of problems; anywhere you look, we see problems: the energy sector, agriculture. We have a lot of problems.

We have access to capital, access to investment, access to the right products, but the problems themselves are not yet solved.

We have much to solve. Combine our infrastructure and diffuse AI, but diffuse AI through entrepreneurship, the way we diffused DPI through entrepreneurship: we went from 1,000 companies in 2016 to 100,000 startups today, and the prediction is that we’ll get to 1 million startups by 2035. There’s a very high chance we’ll get there. It doesn’t mean all of them will succeed, but attempting matters. I think young people have to make audacious, bold attempts to solve problems. And India is beautifully set up for it. We have a wonderful panel, and I don’t want to take up too much time: a wonderful panel talking about the combinatorial power of DPI and AI, how combining both can be a truly exponential power, and why countries are investing, with global experts deeply investing in DPI.

So I give the floor to them. Thank you. And thank you to all of you too; with so many people coming and sitting through this, it is much appreciated. Have a wonderful weekend, and keep imagining, keep building and keep solving. Thank you so much.

Speaker 1

Thank you so much for setting that context. Now we will have the panel on AI and digital public infrastructure. The session will explore how integrating AI into DPI can unlock new benefits at scale, while also discussing the challenges and risks of such an integration. How can DPI architecture mitigate the new risks that emerge as AI becomes embedded in foundational digital systems? What opportunities and risks arise from integrating AI into DPI? And could integrating AI into DPI enable the development of new products, services and market ecosystems?

Related Resources: Knowledge base sources related to the discussion topics (16)
Factual Notes: Claims verified against the Diplo knowledge base (5)
Confirmed (high)

“Speaker 1 introduced Pramod Varma as a prominent expert on the country’s infrastructure at the start of his keynote”

The knowledge base describes Varma as a prominent expert on open source, scalable digital systems and decentralized networks and notes the moderator calling him to the stage for his keynote [S4] and [S5].

Confirmed (high)

“Varma apologized for taking the audience’s time on a Friday evening”

The transcript excerpt records Varma saying “Friday evening can be really hard… I don’t want to take up too much of your time” [S54].

Confirmed (high)

“Varma congratulated the Government of India, the Ministry of Electronics and Information Technology (METI) and the Ministry of External Affairs (MEA)”

He explicitly congratulates the Government of India, METI and MEA in the same passage [S54].

Confirmed (medium)

“He contrasted the gathering with a previous elite‑only event in Paris”

Varma references “compared to last time in Paris,” indicating the earlier event was different, likely more exclusive [S54].

Additional Context (medium)

“Varma characterised the Prime Minister as a “mastermind” behind India’s AI diffusion efforts”

While the knowledge base does not use the term “mastermind,” it notes strong governmental backing for advanced digital initiatives (e.g., 6G) that cite the Prime Minister’s support, showing high-level endorsement of technology policy [S6].

External Sources (59)
S1
Keynote-Martin Schroeter — -Speaker 1: Role/Title: Not specified, Area of expertise: Not specified (appears to be an event moderator or host introd…
S2
Responsible AI for Children Safe Playful and Empowering Learning — -Speaker 1: Role/title not specified – appears to be a student or child participant in educational videos/demonstrations…
S3
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Vijay Shekar Sharma Paytm — -Speaker 1: Role/Title: Not mentioned, Area of expertise: Not mentioned (appears to be an event host or moderator introd…
S4
Keynote by Dr. Pramod Varma Co-founder & Chief Architect NFH India AI Impact Summit — -Moderator: Session moderator (no specific expertise, role, or title mentioned beyond moderating the discussion) And it…
S5
Keynote by Dr. Pramod Varma Co-founder & Chief Architect NFH India AI Impact Summit — 1200 words | 146 words per minute | Duration: 490 seconds Friday evening can be really hard. It’s tiring right after a…
S6
Designing Indias Digital Future AI at the Core 6G at the Edge — Radhakant acknowledges strong governmental backing for 6G, citing support from the Prime Minister, the VARA 6G Alliance,…
S7
Building Indias Digital and Industrial Future with AI — So last year, the bank came up with a digital public infrastructure and development report where it articulated what it …
S8
Building Indias Digital and Industrial Future with AI — Evidence:Unlike commercial solutions that involve patents, copyrights, and scaling fees, India’s DPI is offered as open …
S9
Panel Discussion AI in Digital Public Infrastructure (DPI) India AI Impact Summit — And accessibility has to be also broadened in terms of multi -modality and also, where necessary, include a human in the…
S10
Panel Discussion AI in Digital Public Infrastructure (DPI) India AI Impact Summit — Thanks for the question. You’re right, I think those three words are very key. When you’re talking from a government per…
S11
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Giordano Albertazzi — Albertazzi positioned India as central to the AI evolution, citing several key advantages that make the country particul…
S12
Shaping the Future AI Strategies for Jobs and Economic Development — Thank you. Thank you so much, Ina. Thank you all for being here. Well, AI as a concept evokes this notion of leapfroggin…
S13
HETEROGENEOUS COMPUTE FOR DEMOCRATIZING ACCESS TO AI — This comment provides crucial context about India’s position in the global AI ecosystem, distinguishing between applicat…
S14
Creating Eco-friendly Policy System for Emerging Technology — Additionally, the analysis embraces a more globalised, holistic approach to learning. It backs strategies that encourage…
S15
Driving Indias AI Future Growth Innovation and Impact — How do you? Build the trust like we just discussed to ensure that there is that. the ecosystem knows that this entire pr…
S16
Scaling Innovation Building a Robust AI Startup Ecosystem — The tone was consistently celebratory, appreciative, and inspirational throughout. It began formally with the awards cer…
S17
How AI Drives Innovation and Economic Growth — Rodrigues emphasizes that while early AI discussions were dominated by fear about job displacement and technological thr…
S18
Creating digital public infrastructure that empowers people | IGF 2023 Open Forum #168 — Countries around the world have made investments into digital public infrastructure (DPI) that supports vital society-wi…
S19
Open Forum #30 High Level Review of AI Governance Including the Discussion — Abhishek Singh: Thank you. Thank you, Yoichi, and thank you for highlighting this very, very important issue of AI gover…
S20
Panel Discussion AI in Digital Public Infrastructure (DPI) India AI Impact Summit — Robert Opp from UNDP emphasized that the population-scale reach of DPI amplifies both opportunities and risks. His key i…
S21
Building Indias Digital and Industrial Future with AI — Moderate disagreement with constructive tensions that reflect different perspectives on balancing national sovereignty w…
S22
Empowering People with Digital Public Infrastructure — As AI becomes more integrated into DPI, there’s a need to balance the benefits of AI with data privacy and security conc…
S23
Building Indias Digital and Industrial Future with AI — Disagreement level:Moderate disagreement with constructive tensions that reflect different perspectives on balancing nat…
S24
High-Level Dialogue: The role of parliaments in shaping our digital future — Countries must navigate the challenge of implementing strong data protection laws while still fostering an environment t…
S25
Multistakeholder Dialogue on National Digital Health Transformation — Leosk emphasizes the importance of having strong governance mechanisms and legal frameworks to protect data privacy. She…
S26
Strengthen Digital Governance and International Cooperation to Build an Inclusive Digital Future — Gurry explains that there is an increasing gap between when new technologies appear and are adopted versus when governme…
S27
AI for Social Empowerment_ Driving Change and Inclusion — She argues that immediate policy action is required across competition, tax, labour and social protection to mitigate AI…
S28
Keynote by Uday Shankar Vice Chairman_JioStar India — This comment introduces a crucial strategic perspective that changes the entire competitive analysis. Instead of viewing…
S29
Secure Finance Risk-Based AI Policy for the Banking Sector — The panel examined different global approaches to AI regulation, contrasting innovation-led American models, compliance-…
S30
Building the Next Wave of AI_ Responsible Frameworks & Standards — The Moderator argues that India operates in contexts that most of the developing world shares – multilingual populations…
S31
Keynote-Rishad Premji — This comment transforms the discussion by repositioning India’s challenges as strengths. It provides the logical foundat…
S32
A digital public infrastructure strategy for sustainable development – Exploring effective possibilities for regional cooperation (University of Western Australia) — According to a policy brief by the UN Secretary, DPI has the potential to contribute to the SDGs by ensuring safe data u…
S33
The future of Digital Public Infrastructure for environmental sustainability — As digitalisation is perceived with positivity, the integration of DPI is anticipated to positively influence climate ac…
S34
Creating digital public infrastructure that empowers people | IGF 2023 Open Forum #168 — Countries around the world have made investments into digital public infrastructure (DPI) that supports vital society-wi…
S35
Building Population-Scale Digital Public Infrastructure for AI — Irina Ghose from Anthropic reinforced this perspective, arguing that AI deployment failures rarely stem from technical c…
S36
Panel Discussion AI in Digital Public Infrastructure (DPI) India AI Impact Summit — Saibal argues that India is approaching AI with the same ethos as DPI – treating it as shared public infrastructure that…
S37
Keynote-Rishi Sunak — Evidence:The India Stack has shown people how technology can benefit them in their everyday lives. This digital public i…
S38
Building Population-Scale Digital Public Infrastructure for AI — And why would we need a hub like this to do that? Well, one of the big barriers that we are currently seeing is the frag…
S39
Panel Discussion AI in Digital Public Infrastructure (DPI) India AI Impact Summit — And accessibility has to be also broadened in terms of multi -modality and also, where necessary, include a human in the…
S40
Panel Discussion AI in Digital Public Infrastructure (DPI) India AI Impact Summit — Saibal argues that India is approaching AI with the same ethos as DPI – treating it as shared public infrastructure that…
S41
AI for Social Good Using Technology to Create Real-World Impact — Thanks, James. Good morning. Just so we’re all clear, there’s a lot of intellectual horsepower on the stage, and it’s al…
S42
Building Indias Digital and Industrial Future with AI — So last year, the bank came up with a digital public infrastructure and development report where it articulated what it …
S43
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Giordano Albertazzi — Albertazzi positioned India as central to the AI evolution, citing several key advantages that make the country particul…
S44
Keynote by Dr. Pramod Varma Co-founder & Chief Architect NFH India AI Impact Summit — India brought a billion people from being invisible to visible in the system through digital formalization over the last…
S45
Shaping the Future AI Strategies for Jobs and Economic Development — Thank you. Thank you so much, Ina. Thank you all for being here. Well, AI as a concept evokes this notion of leapfroggin…
S46
Building Indias Digital and Industrial Future with AI — Summary:All speakers acknowledge India’s leadership in DPI development and its potential for global replication, with em…
S47
Keynote by Dr. Pramod Varma Co-founder & Chief Architect NFH India AI Impact Summit — Entrepreneurial Ecosystem and Problem-Solving We have much to solve. And if you combine our infrastructure and diffuse …
S48
Panel Discussion Next Generation of Techies _ India AI Impact Summit — Arvind argues that while AI represents a new technology wave creating entrepreneurial opportunities, the core requiremen…
S49
Leaders’ Plenary | Global Vision for AI Impact and Governance- Afternoon Session — And I have a deep belief that the entrepreneurial ecosystem in India is going to deliver some incredible global leaders …
S50
Multistakeholder Partnerships for Thriving AI Ecosystems — So we have to close the gap. And I would say it’s not an innovation gap, it’s a power gap. Because innovative people are…
S51
WS #257 Emerging Norms for Digital Public Infrastructure — Benefits and Risks of DPI Milton Mueller: Well, I’m going to introduce the topic, and then I’m going to introduce the …
S52
Open Forum #53 AI for Sustainable Development Country Insights and Strategies — Oluwaseun Adepoju: Thank you so much. Quickly, when I mentioned earlier that there is hype around AI in the early days, …
S53
Open Forum #30 High Level Review of AI Governance Including the Discussion — Abhishek Singh: Thank you. Thank you, Yoichi, and thank you for highlighting this very, very important issue of AI gover…
S54
https://dig.watch/event/india-ai-impact-summit-2026/keynote-by-dr-pramod-varma-co-founder-chief-architect-nfh-india-ai-impact-summit — Friday evening can be really hard. It’s tiring right after a long week. So thank you for having me here and I don’t want…
S55
[Parliamentary Session 3] Researching at the frontier: Insights from the private sector in developing large-scale AI systems — Ammari highlighted META’s open-source approach to large language models, explaining, “META has adopted an open source me…
S56
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Panel Discussion Moderator Sidharth Madaan — Bikhchandani’s most memorable contribution was his concrete advice for individual adaptation: learn to use three AI plat…
S57
Inclusive AI_ Why Linguistic Diversity Matters — And so the way we work is we work with partners because the core premise is collaboration. Work with partners where we’l…
S58
The State of the model: What frontier AI means for AI Governance — Rus argues that large language models trained on massive datasets provide humans with enhanced capabilities across multi…
S59
Digital Public Infrastructure, Policy Harmonization, and Digital Cooperation — Marie Ndé Sene Ahouantchede explains that ECOWAS views public digital infrastructure as built on three pillars: payment …
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
P
Pramod Varma
10 arguments | 146 words per minute | 1200 words | 490 seconds
Argument 1
Inclusive participation shows AI is moving beyond elite circles
EXPLANATION
Pramod highlights that the current AI event attracted a broad audience, including students, children, and young entrepreneurs, contrasting with previous gatherings that were limited to elite participants. This shift signals a democratization of AI access in India.
EVIDENCE
He noted that compared with the previous event in Paris, the current gathering featured many students, children, and young entrepreneurs, indicating a move away from an elite, exclusive audience [10-12].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Varma highlighted that the current event featured many students, children and young entrepreneurs, unlike the previous Paris gathering, indicating broader, more democratic participation [S5].
MAJOR DISCUSSION POINT
Democratization of AI
Argument 2
Prime Minister’s strong support accelerates AI diffusion
EXPLANATION
Pramod asserts that the Prime Minister is a key champion of AI, describing him as a mastermind and great supporter, which helps drive rapid diffusion of AI technologies across the country.
EVIDENCE
He stated that “our prime minister is, I think, a mastermind at it. So he’s a great supporter of it” [15-16].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
He described the Prime Minister as a “mastermind” and a great supporter of AI, crediting this political backing for rapid diffusion of AI technologies [S5].
MAJOR DISCUSSION POINT
Political endorsement of AI
Argument 3
India’s decade‑long digital investments created programmable, API‑based systems
EXPLANATION
He explains that over the past ten years India invested heavily in digital public infrastructure, building systems like Aadhaar, eSign, DigiLocker, and UPI that are API‑driven and programmable, laying a foundation for AI applications.
EVIDENCE
Pramod mentions that the investment made in the last decade, including the creation of Aadhaar, eSign, DigiLocker, and UPI, resulted in programmable, API-based digital services [27-33].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Varma listed Aadhaar, eSign, DigiLocker, UPI and GST as API-driven digital services built over the past ten years, forming a programmable foundation for AI [S4][S5].
MAJOR DISCUSSION POINT
Digital infrastructure foundation
AGREED WITH
Speaker 1
Argument 4
DPI components generate verifiable, cryptographically signed data trails for AI
EXPLANATION
He describes how digital public infrastructure such as GST invoices, Fastag, and other transaction systems produce machine‑readable, cryptographically signed records, providing high‑quality data that AI can reliably consume.
EVIDENCE
He points out that billions of GST invoices, Fastag transport proofs, and other digital records are machine-readable and cryptographically signed, creating verifiable data trails for further innovation [38-44].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
He emphasized that billions of GST invoices and Fastag transport proofs are machine-readable and cryptographically protected, providing high-quality data trails for AI innovation [S4][S5].
MAJOR DISCUSSION POINT
Data quality for AI
Argument 5
DPDP Act gives individuals control over their data, keeping it owned by people and small businesses
EXPLANATION
Pramod notes that India’s Digital Personal Data Protection (DPDP) Act empowers citizens to own and control their personal data, ensuring that data remains with individuals and small enterprises rather than being monopolized.
EVIDENCE
He explains that the DPDP Act and privacy bill grant individuals the right to control their own data, emphasizing that data belongs to people and small businesses [46-48].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Varma explained that the DPDP Act ensures individuals and small enterprises own and control their personal data, creating a virtuous cycle for innovation [S4][S5].
MAJOR DISCUSSION POINT
Data ownership and privacy
Argument 6
Programmability and composability of DPI enable powerful AI applications
EXPLANATION
He argues that because DPI services are exposed via APIs and can be combined (composed) programmatically, developers can build sophisticated AI‑driven workflows and applications at scale.
EVIDENCE
He highlights that all DPI components are API-based and programmable, allowing composability that fuels AI innovation [45].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
He noted that all DPI components are exposed via APIs, allowing composable workflows such as PhonePe and other applications built on these services [S4].
MAJOR DISCUSSION POINT
Technical enablement of AI
AGREED WITH
Speaker 1
Argument 7
Countries that combine DPI with AI could achieve 10×–50× higher economic growth
EXPLANATION
Pramod makes a bold prediction that nations integrating AI on top of robust digital public infrastructure will experience ten to fifty times greater economic progress compared with those lacking such foundations.
EVIDENCE
He states that countries investing in DPI and layering AI on top would have done 10x or 50x better economically than those without underlying infrastructure [49].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Varma made a bold prediction that nations layering AI on top of robust DPI could outperform others by ten to fifty times economically [S4].
MAJOR DISCUSSION POINT
Economic impact of AI‑DPI synergy
Argument 8
India’s political will, regulatory push, and ready infrastructure position it to reap these gains
EXPLANATION
He emphasizes that India benefits from favorable political conditions, proactive regulation, and a decade‑long buildup of digital infrastructure, creating an optimal environment to capitalize on AI‑driven growth.
EVIDENCE
He describes India as having the right place, political will, regulatory push, and infrastructure readiness, all developed within the last decade [50].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
He described India’s “right place, right political will, right regulatory push, right infrastructure readiness” as the enabling environment for AI-driven growth [S5].
MAJOR DISCUSSION POINT
Enabling environment for AI growth
Argument 9
Young, adventurous entrepreneurs can leverage DPI and AI to solve India’s many problems
EXPLANATION
Pramod points out that India’s large pool of energetic entrepreneurs, unburdened by inhibition, can use the combination of DPI and AI to address the country’s numerous challenges across sectors.
EVIDENCE
He mentions the presence of young adventurous entrepreneurs and notes that many attendees approached him about starting companies, highlighting the country’s many problems that need solving [51-54].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Varma highlighted the large pool of energetic entrepreneurs and the rapid scaling of startups as a catalyst for applying DPI and AI to national challenges [S4][S5].
MAJOR DISCUSSION POINT
Entrepreneurial potential
Argument 10
Startup numbers have risen from ~1,000 in 2016 to 100,000 today, with a target of 1 million by 2035, illustrating ecosystem momentum
EXPLANATION
He provides quantitative evidence of rapid growth in India’s startup ecosystem, indicating a strong momentum that could further accelerate AI and DPI integration.
EVIDENCE
He cites the increase from 1,000 companies in 2016 to 100,000 startups today, and projects reaching one million startups by 2035 [81-82].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
He cited quantitative growth from 1,000 startups in 2016 to 100,000 today and a projection of reaching one million by 2035, underscoring ecosystem momentum [S4][S5].
MAJOR DISCUSSION POINT
Startup ecosystem growth
S
Speaker 1
1 argument | 135 words per minute | 132 words | 58 seconds
Argument 1
Integrating AI into DPI can unlock scale benefits but also introduces risks that must be mitigated
EXPLANATION
Speaker 1 frames the upcoming panel discussion by asking how AI integration with digital public infrastructure can generate large‑scale advantages while also highlighting the need to address emerging risks.
EVIDENCE
He poses questions about mitigating new risks as AI becomes embedded in foundational digital systems, the opportunities and risks of AI-DPI integration, and the potential for new products and ecosystems [96-99].
MAJOR DISCUSSION POINT
AI‑DPI opportunities and risks
DISAGREED WITH
Pramod Varma
Agreements
Agreement Points
Integration of AI with digital public infrastructure (DPI) can unlock large‑scale benefits while also creating new risks that must be mitigated
Speakers: Pramod Varma, Speaker 1
Programmability and composability of DPI enable powerful AI applications India’s decade‑long digital investments created programmable, API‑based systems
Pramod stresses that API-driven, programmable DPI provides the technical foundation for powerful AI-driven workflows [45-46][27-33], while Speaker 1 frames the upcoming panel around the opportunities and risks of embedding AI into DPI and asks how those risks can be mitigated [96-99]. Both speakers therefore agree that AI-DPI integration offers significant upside but also requires careful risk management.
POLICY CONTEXT (KNOWLEDGE BASE)
UNDP panelists highlighted that DPI’s population-scale reach amplifies both opportunities and risks, warning that focusing only on efficiency can leave people out and underscoring the need for risk mitigation [S20]; similar concerns were raised about treating AI as shared public infrastructure and ensuring responsible rollout [S36]; Irina Ghose noted that AI failures often stem from poor contextualisation rather than technical limits, pointing to the importance of managing risks [S35].
Digital public infrastructure is an essential foundation for AI diffusion in India
Speakers: Pramod Varma, Speaker 1
India’s decade‑long digital investments created programmable, API‑based systems Programmability and composability of DPI enable powerful AI applications
Pramod outlines how a decade of investment produced API-based services such as Aadhaar, eSign, DigiLocker and UPI that constitute a programmable DPI layer for AI [27-33][45], and Speaker 1 explicitly sets the panel to discuss AI on top of DPI, signalling shared recognition of DPI as the core enabler [94-99].
POLICY CONTEXT (KNOWLEDGE BASE)
The India Stack-Aadhaar, UPI, Ayushman Bharat-provides a universal digital foundation that enables AI applications to reach 1.4 billion people, illustrating DPI’s role in AI diffusion [S37]; DPI is defined globally as platforms for digital ID, payments and data exchange that improve citizens’ lives [S34]; UN policy briefs also stress DPI’s potential to advance Sustainable Development Goals through safe data usage and governance [S32].
Similar Viewpoints
Both speakers view the programmable, API‑driven digital public infrastructure built over the past decade as the critical platform that will allow AI to be scaled across the country, with Pramod highlighting the technical details [27-33][45] and Speaker 1 positioning AI‑DPI integration as the central theme of the session [94-99].
Speakers: Pramod Varma, Speaker 1
India’s decade‑long digital investments created programmable, API‑based systems Programmability and composability of DPI enable powerful AI applications
Pramod points to the DPDP Act as a mechanism that returns data ownership to citizens and small firms [46-48]; Speaker 1’s questions about mitigating new risks implicitly acknowledge the need for strong data‑governance and privacy safeguards when AI is embedded in foundational systems [96-99]. Both therefore recognize data governance as a prerequisite for safe AI‑DPI integration.
Speakers: Pramod Varma, Speaker 1
DPDP Act gives individuals control over their data, keeping it owned by people and small businesses
Unexpected Consensus
Recognition that strong regulatory and privacy frameworks are needed alongside technological rollout
Speakers: Pramod Varma, Speaker 1
DPDP Act gives individuals control over their data, keeping it owned by people and small businesses Prime Minister’s strong support accelerates AI diffusion
While Pramod emphasizes political will and the DPDP privacy law as enablers of AI diffusion [46-48][50], Speaker 1, a moderator rather than a policy advocate, nonetheless foregrounds risk mitigation and regulatory considerations in the panel agenda [96-99]. The convergence of a technocratic champion (Pramod) and a neutral facilitator (Speaker 1) on the necessity of regulation and privacy was not explicitly anticipated.
POLICY CONTEXT (KNOWLEDGE BASE)
Multiple stakeholders emphasize robust governance and legal safeguards for data privacy, especially in sensitive sectors like health [S25]; high-level dialogues call for strong data-protection laws that coexist with innovation incentives [S24]; a widening gap between emerging technologies and legislative response creates regulatory voids that must be addressed [S26]; calls for immediate policy action across competition, labour and social protection further underline the need for comprehensive regulatory frameworks [S27].
Overall Assessment

The two speakers show a clear consensus that India’s programmable, API‑based digital public infrastructure is the cornerstone for scaling AI, and that while this integration promises substantial economic and societal benefits, it also raises novel risks that must be addressed through robust governance, privacy legislation and risk‑mitigation strategies.

High – both speakers align on the technical foundation (DPI) and the dual nature of AI integration (opportunity vs. risk). This strong agreement underlines a shared vision for leveraging DPI to accelerate AI diffusion while emphasizing the need for regulatory safeguards, suggesting that future policy and innovation efforts are likely to be coordinated around these twin pillars.

Differences
Different Viewpoints
Emphasis on opportunities versus risks of integrating AI into DPI
Speakers: Pramod Varma, Speaker 1
Countries that combine DPI with AI could achieve 10×‑50× higher economic growth. Integrating AI into DPI can unlock scale benefits but also introduces risks that must be mitigated.
Pramod stresses the massive economic upside of AI-DPI synergy and focuses on the enabling infrastructure and entrepreneurial momentum [49][45][81-82][50], while Speaker 1 frames the upcoming discussion around the need to identify and mitigate new risks that AI integration may bring to foundational digital systems [96-99].
POLICY CONTEXT (KNOWLEDGE BASE)
The tension between highlighting AI’s opportunities and acknowledging its risks is evident in the UNDP panel, which warned that measuring success solely by efficiency could exclude vulnerable groups, urging a balanced approach [S20]; other discussions similarly stress the necessity of risk mitigation while pursuing AI-driven DPI benefits [S36].
Unexpected Differences
India’s unique advantage versus a more cautious, global perspective
Speakers: Pramod Varma, Speaker 1
India’s political will, regulatory push, and ready infrastructure position it to reap these gains. Integrating AI into DPI can unlock scale benefits but also introduces risks that must be mitigated.
Pramod asserts a singular, India-specific advantage based on political will and infrastructure readiness [50], while Speaker 1’s neutral questioning about risks suggests a broader, less country-specific view, an unexpected tension between a strong national narrative and a cautious, universal framing [96-99].
POLICY CONTEXT (KNOWLEDGE BASE)
Keynotes from Indian leaders portray the country’s fewer constraints as a strategic advantage, positioning India ahead in AI adoption and suggesting urgency for leveraging this lead [S28][S31]; however, international dialogues stress the importance of strong data-protection laws and coordinated governance to avoid over-optimism and ensure responsible deployment [S24][S26].
Overall Assessment

The discussion shows limited direct conflict; the main divergence lies in Pramod’s optimistic, opportunity‑focused narrative versus Speaker 1’s emphasis on risk identification and mitigation for AI‑DPI integration.

Low to moderate disagreement – primarily about emphasis rather than outright opposition. This suggests that while stakeholders share a common goal of leveraging AI with digital public infrastructure, further dialogue will be needed to align on governance, risk management, and implementation strategies.

Partial Agreements
Both speakers agree that AI should be layered on top of digital public infrastructure to generate large‑scale benefits, but they differ on the emphasis: Pramod highlights the technical enablement and economic potential, whereas Speaker 1 stresses the necessity of risk mitigation and governance [45][96-99].
Speakers: Pramod Varma, Speaker 1
Programmability and composability of DPI enable powerful AI applications. Integrating AI into DPI can unlock scale benefits but also introduces risks that must be mitigated.
Takeaways
Key takeaways
India is actively democratizing AI, moving it beyond elite circles to include students, entrepreneurs, and the broader public.
Strong political support, particularly from the Prime Minister, is accelerating AI diffusion in the country.
A decade of investment in Digital Public Infrastructure (DPI) has created a programmable, API‑based ecosystem (Aadhaar, eSign, DigiLocker, UPI, GST, FastTag, etc.) that generates verifiable, cryptographically signed data trails.
The DPDP Act empowers individuals and small businesses with control over their data, reinforcing data ownership and privacy.
Programmability and composability of DPI provide a powerful foundation for AI applications, enabling large‑scale, data‑driven innovation.
Combining DPI with AI is projected to deliver 10×–50× higher economic growth for countries that successfully integrate them.
India’s political will, regulatory environment, and ready infrastructure position it to reap these economic gains.
A vibrant entrepreneurial ecosystem, with startup numbers growing from ~1,000 in 2016 to 100,000 today and a target of 1 million by 2035, is poised to leverage DPI and AI to solve the nation’s myriad problems.
Resolutions and action items
None identified
Unresolved issues
Specific strategies for mitigating new risks that arise when AI is embedded in foundational digital systems.
Detailed frameworks for ensuring AI safety, fairness, and accountability within DPI.
Concrete steps for integrating AI capabilities into existing DPI APIs and services.
Mechanisms for balancing rapid AI diffusion with privacy and security concerns under the DPDP Act.
Suggested compromises
None identified
Thought Provoking Comments
Compared to the last event in Paris, which was attended by an elite, exclusive crowd, this year we see students, children, and young entrepreneurs – a true democratization of AI.
Highlights a shift from exclusivity to mass participation, framing AI diffusion as a societal movement rather than a niche activity.
Sets a positive, inclusive tone for the keynote and reframes the conversation from a technical showcase to a discussion about broad societal impact, prompting the audience to consider accessibility as a core metric for AI success.
Speaker: Pramod Varma
India’s advantage comes from a decade of digital investment that brought a billion people from ‘invisible to the system’ through Aadhaar, eSign, DigiLocker, UPI, and other API‑based, programmable public infrastructure.
Connects concrete digital public infrastructure (DPI) to AI readiness, arguing that programmable, verifiable data pipelines are the foundation for scalable AI applications.
Introduces the central thesis that DPI is the enabling layer for AI, steering the discussion toward the interplay between infrastructure and AI, and laying groundwork for the panel’s focus on integration challenges.
Speaker: Pramod Varma
Billions of invoices generated through GST are machine‑readable, cryptographically protected, and digitally signed – essentially a goldmine of data for AI.
Identifies a massive, high‑quality data source that many countries lack, emphasizing the strategic value of existing transactional data for training AI models.
Adds a concrete example of data assets, deepening the analysis of why India can leapfrog in AI and prompting considerations of data governance and utilization in the upcoming panel.
Speaker: Pramod Varma
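The verify-before-ingest pattern implied here can be sketched in a few lines. This is a hypothetical illustration, not the actual GST e-invoice scheme: real e-invoices carry PKI-based signatures (JWS), while the signing key, invoice fields, and HMAC stand-in below are invented for the example.

```python
import hashlib
import hmac
import json

# Placeholder secret; real GST e-invoices use public-key signatures, not HMAC.
SIGNING_KEY = b"demo-key"

def sign_invoice(invoice: dict) -> str:
    """Sign a canonical JSON serialization of the invoice (illustrative only)."""
    payload = json.dumps(invoice, sort_keys=True).encode()
    return hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()

def verify_and_ingest(invoice: dict, signature: str, corpus: list) -> bool:
    """Admit an invoice into an AI training corpus only if its signature checks out."""
    expected = sign_invoice(invoice)
    if hmac.compare_digest(expected, signature):
        corpus.append(invoice)
        return True
    return False

corpus = []
inv = {"gstin": "27AAAAA0000A1Z5", "amount": 11800, "items": [{"sku": "A1", "qty": 2}]}
sig = sign_invoice(inv)
assert verify_and_ingest(inv, sig, corpus)                        # authentic invoice is ingested
assert not verify_and_ingest({**inv, "amount": 1}, sig, corpus)   # tampered invoice is rejected
```

The point of the pattern is that a verifiable data trail lets an AI pipeline reject tampered records before training, which is what makes signed transactional data a higher-quality corpus than scraped data.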
AI’s two biggest ingredients are programmability and composability; combined with India’s verifiable data trail, countries that invest in DPI and layer AI on top could be 10x‑50x more economically successful than those that don’t.
Makes a bold, quantifiable prediction linking DPI‑enabled AI to macro‑economic outcomes, challenging listeners to think about long‑term strategic impact.
Serves as a turning point by moving from descriptive to prescriptive, encouraging the audience and panelists to contemplate policy, investment, and competitive dynamics at a national level.
Speaker: Pramod Varma
The DPDP Act gives individuals and small businesses ownership and control over their data, ensuring that data belongs to the people.
Highlights a regulatory innovation that aligns data sovereignty with AI development, addressing privacy concerns while enabling data‑driven innovation.
Introduces a nuanced perspective on risk mitigation and ethical AI, setting up a potential discussion on how privacy legislation can coexist with AI scaling.
Speaker: Pramod Varma
From 1,000 companies in 2016 to 100,000 startups today, and a projection of 1 million startups by 2035 – the sheer scale of entrepreneurial attempts matters, even if not all succeed.
Emphasizes the role of mass entrepreneurship as a catalyst for solving India’s myriad problems, framing failure as an acceptable part of the innovation ecosystem.
Broadens the conversation from infrastructure to human capital, reinforcing the idea that DPI and AI must be leveraged by a vibrant startup ecosystem, and setting expectations for the panel’s focus on entrepreneurship.
Speaker: Pramod Varma
Overall Assessment

Pramod Varma’s keynote strategically reframed the AI conversation from a narrow focus on large language models to a holistic view where India’s digital public infrastructure, data ownership laws, and massive entrepreneurial drive form a unique ecosystem for AI diffusion. His remarks about democratization, programmable DPI, abundant high‑quality data, and the DPDP Act introduced new dimensions—accessibility, technical readiness, and ethical governance—that shifted the panel’s anticipated focus toward integration challenges and economic implications. The bold economic prediction and the scaling of startups acted as turning points, moving the dialogue from descriptive achievements to forward‑looking policy and investment strategies, thereby setting a rich, multi‑faceted agenda for the subsequent discussion.

Follow-up Questions
How can DPI architecture mitigate the new risks that emerge as AI becomes embedded in foundational digital systems?
Critical for ensuring that the integration of AI does not compromise the security, reliability, or trustworthiness of core public digital infrastructure.
Speaker: Speaker 1
What are the opportunities and risks that emerge as a result of integrating AI into DPI?
Identifies both potential benefits (e.g., efficiency, new services) and challenges (e.g., bias, privacy breaches) that need to be evaluated before large‑scale deployment.
Speaker: Speaker 1
Could integrating AI into DPI enable the development of new products, services and market ecosystems?
Explores the economic and innovation potential of AI‑enhanced public infrastructure, guiding policy and investment decisions.
Speaker: Speaker 1
Do countries that invest in digital public infrastructure (DPI) and combine it with AI achieve 10‑50× better economic growth than those without such infrastructure?
A bold claim that requires empirical validation to inform national strategies on DPI and AI investment.
Speaker: Pramod Varma
Will India reach one million startups by 2035, and what factors will determine their success?
Understanding the scalability of the startup ecosystem is essential for planning support mechanisms, funding, and talent development.
Speaker: Pramod Varma
How can the massive repository of machine‑readable, cryptographically signed invoices and other transaction data be leveraged for AI model training while preserving privacy?
This data is a potential goldmine for AI, but its use raises technical, ethical, and regulatory challenges that need systematic study.
Speaker: Pramod Varma
What are the implications of the DPDP Act (India’s privacy bill) on data ownership, sharing, and AI innovation?
Assessing how privacy legislation interacts with AI development is crucial for balancing individual rights with societal benefits.
Speaker: Pramod Varma
How can young, adventurous Indian entrepreneurs effectively apply AI on top of DPI to solve sector‑specific problems such as energy, agriculture, and logistics?
Targeted research can identify best practices, required skill sets, and supportive policies to translate AI‑DPI synergy into tangible solutions for critical sectors.
Speaker: Pramod Varma

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

MahaAI Building Safe Secure & Smart Governance


Session at a glance: Summary, keypoints, and speakers overview

Summary

The panel opened with a framing that artificial intelligence is already reshaping governance, markets and geopolitics and that the challenge is to ensure governance shapes AI rather than being overwhelmed by it [1-5][6-13]. Speakers emphasized that AI’s speed, opacity, concentration and dual-use nature create a paradox where overly slow regulation risks harm while excessive control stifles innovation, calling for intelligent, human-centered, adaptive policies and global cooperation [8-13][14-18].


Maharashtra’s Minister Ashish Shelar presented the state’s “Maha AI” initiative as a living laboratory, highlighting AI-powered crime-prevention tools, a cloud-native intelligent infrastructure and five pillars (compute, data, governance, standards and capacity building) to deliver safe, secure and smart governance [46-53][57-60]. He also described concrete projects such as the Mahak Crime OS with Microsoft, AI-driven property mapping, real-time urban dashboards and flood-management pilots that aim to make services faster, more transparent and inclusive [47-53].


Praveen Pardeshi stressed the need for green energy to power AI, large-scale capacity-building through AI courses, and the creation of a “Maha GPT” system to help officials and citizens navigate complex government orders, while warning about monetising public data without safeguards [80-88][94-114]. Yashasvi Yadav outlined Maharashtra’s cyber-security project that integrates AI tools for dark-web monitoring, threat analysis and a 1930 helpline, reporting that within six months it froze ₹1,000 crore of fraud and saved 70 lives, and he warned that quantum computing could soon undermine current encryption [119-136][139-149].


Suresh Sethi highlighted how the state’s population-scale digital public infrastructure enables dynamic eligibility, predictive welfare delivery and the need for explainable, auditable AI with human redress mechanisms to avoid inclusion and exclusion errors [158-176][184-190]. Ranjit Goswami added that AI should be pursued holistically to promote welfare and happiness, calling for interoperable databases such as Aadhaar across departments and for large-tech partners to embed intelligence into core platforms [200-207][210-217]. Beena Sarkar raised ethical concerns about gender bias and unsafe hardware, urging the establishment of a dedicated India Safety Institute to evaluate new AI-enabled devices for societal impact before deployment [221-229][233-250].


Amit Kapoor argued that Maharashtra’s potential is limited by skill gaps (only about 20 % of the workforce is at advanced skill levels) and by inadequate internet and data-center infrastructure, recommending rapid investment to enable AI benefits in nutrition, education and health for tier-2 and tier-3 cities [266-276][280-287][292-300]. He also warned that without proper education and ethical framing, AI could become a “dumping-ground” that degrades society, emphasizing the need for affordable services and upskilling to avoid underemployment [301-307][311-317]. Across the discussion, participants agreed that AI governance must be adaptive, transparent, and human-centric, supported by robust data, capacity building and international standards to harness AI’s promise while mitigating risks [12-13][14-18][32].


Keypoints

Major discussion points


Intelligent, adaptive AI governance is essential – AI is already reshaping governance, markets, and geopolitics, creating a paradox where slow regulation risks harm and heavy regulation risks stagnation; the solution is “intelligent governance” that is human-centred, transparent, risk-based, globally coordinated, and adaptable to AI’s rapid evolution. [1-13][14-18][20-28]


Maharashtra is positioning itself as a “living laboratory” for AI-enabled public services – The state has deployed AI-powered crime-fighting tools (Mahak Crime OS), built a cloud-native “intelligent government” platform (Mahaiti) that powers smart recruitment, property mapping, real-time urban dashboards, flood management, and smart mobility, and emphasizes safe, secure, smart governance built on five pillars (compute, data, governance, standards, capacity). [41-48][57-62]


Capacity building, data monetisation, and AI-driven decision tools are being rolled out – Initiatives include massive green-energy investments to power AI, AI training for government staff, a state data authority to capture and commercialise health and other public data, and the development of “Maha GPT,” a small-language-model interface that lets officials and citizens query complex government orders and regulations. [80-88][95-114]


Ethical, inclusive, and explainable AI must be embedded with human oversight – Panelists stressed that AI systems should be auditable, explainable, and backed by a human redress pathway; dynamic eligibility and predictive governance rely on verifiable digital credentials, while gender-bias and safety concerns around emerging devices (e.g., smart glasses) require dedicated ethical review bodies such as the India Safety Institute. [158-186][187-191][221-257]


Scaling AI benefits faces concrete challenges – Significant skill gaps (only ~20 % of the workforce at higher skill levels), inadequate broadband (average 58 Mbps in Mumbai), limited data-center infrastructure, and socio-economic issues (malnutrition, under-employment) threaten equitable AI adoption, especially in Tier-2/3 cities; addressing these requires rapid infrastructure upgrades, affordable services, and focused education. [266-306][307-317]


Overall purpose / goal of the discussion


The session aimed to articulate a vision for responsible AI governance, showcase Maharashtra’s pioneering AI initiatives, examine practical tools and policy mechanisms for integrating AI into public administration, and identify the ethical, technical, and socio-economic hurdles that must be overcome to ensure AI delivers inclusive, safe, and sustainable public value.


Overall tone and its evolution


– The opening remarks set a formal, forward-looking and optimistic tone, emphasizing opportunity and the need for wise governance.


– As panelists presented Maharashtra’s projects, the tone became pragmatic and demonstrative, highlighting concrete successes and technical details.


– When discussing cyber-security, quantum threats, and ethical concerns, the tone shifted to cautious and warning-laden, underscoring risks and the need for safeguards.


– The final contributions focused on critical, problem-solving language, stressing skill deficits and infrastructure gaps, before concluding with a constructive call-to-action to collaborate on responsible AI deployment.


Overall, the conversation remained constructive but moved from enthusiastic vision-casting to a balanced mix of achievement showcase, risk awareness, and urgent policy recommendations.


Speakers

Mr. Virendra Singh


– Role/Title:


– Area of Expertise:


Mr. Suresh Sethi


– Role/Title: Managing Director and CEO, Protean EGov Technologies [S3]


– Area of Expertise: Digital Public Infrastructure (DPI), AI-enabled governance


Mr. Ashish Shelar


– Role/Title: Honorable Minister of IT and Cultural Affairs, Government of Maharashtra [S5]


– Area of Expertise: Technology-driven governance, public-sector AI deployment


Moderator


– Role/Title: Conference Moderator [S7][S9]


– Area of Expertise: Session moderation


Mr. Yashasvi Yadav


– Role/Title: Additional Director General of Police, Maharashtra Cyber Department, Government of Maharashtra [S10]


– Area of Expertise: Cyber security, law-enforcement AI applications


Dr. Amit Kapoor


– Role/Title: Chair, Institute for Competitiveness [S12]


– Area of Expertise: Economic policy, workforce development, AI impact on growth


Mr. Ranjeet Goswami


– Role/Title: Head, Corporate Affairs, Tata Consultancy Services [S13]


– Area of Expertise: Corporate-sector AI solutions, public-private partnership in governance


Mr. Praveen Pardeshi


– Role/Title:


– Area of Expertise:


Ms. Beena Sarkar


– Role/Title: Customer Success Executive, ServiceNow; Volunteer, Women for Ethical AI South Asia (UNESCO) [S16]


– Area of Expertise: Ethical AI, gender-focused AI governance


Mr. Devroop Dhar


– Role/Title: Co-Founder and CEO, Primus Partners (Panel Moderator) [S18]


– Area of Expertise: Business strategy, AI consultancy


Additional speakers:


Chief Minister Devendra Fadnavis – Chief Minister of Maharashtra (mentioned as the visionary leader behind Maharashtra’s AI initiatives).


Satya Nadella – CEO, Microsoft (referenced as presenting the AI-powered Mahak Crime OS).


Dr. Ganesh Ramakrishnan – (referred to as “Professor Ganesh” from IIT Mumbai, collaborating on AI solutions for government orders).


Vikash Chandra Rastogi – (listed among panelists, affiliation not specified).


Rajesh Agarwal – (listed among panelists, affiliation not specified).


Dev Rukhdar – (listed among panelists, affiliation not specified).


Davinder Sandhu – (listed among panelists, affiliation not specified).


Full session report: Comprehensive analysis and detailed insights

The session opened with Mr Virendra Singh reminding the audience that artificial intelligence is already reshaping governance, markets and even geopolitics, and that the crucial question is no longer whether AI will influence governance but how governance will shape AI [1-5]. He described a “governance paradox” in which overly slow regulation risks harm while overly heavy control risks stagnation, and argued that the answer lies in “intelligent governance” built on human-centred design, transparency, accountability, risk-based and adaptive regulations, and global cooperation [6-13][14-18]. Singh concluded that history will judge us by the wisdom of our governance rather than the sophistication of our algorithms [19-21][24-28].


In response, Mr Ashish Shelar presented Maharashtra’s “Maha AI” programme as a living laboratory for AI-enabled public services [41-48]. He highlighted the AI-powered Mahak Crime OS, showcased by Microsoft’s Satya Nadella, which has accelerated crime prevention, detection and investigation [47]. Shelar also described the state’s cloud-native “intelligent government” platform, Mahaiti, a modular API-driven backbone that integrates services, predicts needs and reacts in real time, supporting smart recruitment, AI-based property mapping, real-time urban dashboards for traffic, weather and civic issues, as well as pilots in flood management and smart mobility [48-53]. The programme rests on five pillars – compute and cloud at scale, high-quality public data sets, state AI governance, interoperable standards and capacity building – which together constitute a safe, secure and smart governance stack [57-60]. The AI Impact Summit 2026, the first in the global series to be hosted in the Global South, brought together more than 20 heads of state, 60 ministers and hundreds of AI leaders [41-48]. Shelar also flagged “internet health” as a core policy priority, warning that disinformation, deep-fakes and AI-generated fraud can undermine democracy and markets. He outlined a three-pronged response: strengthened cyber-security, widespread digital-literacy programmes, and a hybrid verification ecosystem [57-60]. He stressed that smart governance is not only about deploying chatbots or dashboards; it requires resilient, auditable, human-centred AI embedded in the “nerve systems” of transport, energy, public safety, urban planning and welfare delivery [57-60].


Mr Praveen Pardeshi then turned to the foundations required to sustain such a stack. He stressed the need for green energy, noting that more than 19,000 MW of solar capacity is being added to power future AI workloads [80-84]. He outlined a large-scale capacity-building effort that includes an AI university for civil servants and online courses on the IGOT platform [85-86]. Pardeshi announced the creation of a State Data Authority to capture and commercialise public data – for example health records that are valuable to pharmaceutical companies – while ensuring that any commercial benefits remain with India [98-102]. A concrete output of this data strategy is “Maha GPT”, a small-language-model interface that can disentangle over 150,000 government orders, allowing officials and citizens to query the latest positions on permits, court rulings and other regulations [95-114].
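The retrieval step behind such an interface can be sketched minimally. The orders, field names, and keyword matching below are invented for illustration; a production system would pair retrieval with a language model to summarize the retrieved order, and with far more robust search than keyword overlap.

```python
from datetime import date
from typing import Optional

# Hypothetical government orders; IDs, dates, and text are invented.
ORDERS = [
    {"id": "GR-2019-114", "date": date(2019, 3, 1),
     "text": "Building permits require environmental clearance for plots over 2 acres."},
    {"id": "GR-2023-071", "date": date(2023, 8, 15),
     "text": "Building permits for plots under 5 acres are processed via the online portal."},
    {"id": "GR-2021-009", "date": date(2021, 1, 20),
     "text": "Water tariffs revised for industrial users."},
]

def latest_matching_order(query: str) -> Optional[dict]:
    """Return the newest order whose text mentions every query keyword."""
    words = query.lower().split()
    hits = [o for o in ORDERS if all(w in o["text"].lower() for w in words)]
    return max(hits, key=lambda o: o["date"]) if hits else None

# The newest matching order wins, so a querying official sees the current position,
# not a superseded one.
result = latest_matching_order("building permits")
```

Sorting matches by date is the design choice that matters here: with 150,000 overlapping orders, "the latest position" is only recoverable if recency is part of the ranking.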


Turning to security, Mr Yashasvi Yadav described the Maharashtra Cyber Security Project, which brings together state-of-the-art AI tools, experienced consultants and police officers under one roof, supported by a 1930 helpline that provides instant assistance for cyber-crime, dark-web monitoring, ransomware, sextortion and bullying [119-129][130-136]. Within six months the project froze more than ₹1,000 crore of fraud and rescued 70 young women from cyber-bullying-related suicide attempts [131-136]. Yadav also recounted the “Echoes of Pahalgam” incident, where AI-driven threat-intelligence platforms (Luminar, Cognite, Pathfinder) thwarted nation-state cyber-attacks launched during a conventional war [139-141][142-144]. He warned that quantum computing – capable of breaking RSA, blockchain and banking encryption within seconds – poses an imminent risk, especially given India’s modest investment compared with China’s [145-150][151-158].


From a data-infrastructure perspective, Mr Suresh Sethi explained how Maharashtra’s population-scale Digital Public Infrastructure (DPI) provides a unique advantage for embedding intelligence. He illustrated the move from static identity to dynamic eligibility through machine-readable verifiable credentials, enabling AI to determine subsidy eligibility in real time [166-174]. Sethi highlighted predictive governance, where AI can anticipate income distress and trigger benefits automatically [175-177]. He warned of inclusion and exclusion errors – leakages and denied entitlements – and called for AI systems that are explainable, auditable and equipped with a human redress pathway [184-190][191-193].
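Sethi's pairing of dynamic eligibility with explainability suggests a simple pattern: every automated decision carries a recorded reason and an explicit route to human review. A minimal sketch follows, with invented credential fields and an assumed income threshold; it is not any actual welfare scheme's rule set.

```python
# Assumed annual-income ceiling, in rupees, purely for illustration.
INCOME_CEILING = 250_000

def check_subsidy(credential: dict) -> dict:
    """Return an eligibility decision together with a human-readable reason,
    so the outcome is explainable and auditable."""
    if not credential.get("verified"):
        return {"eligible": False, "reason": "credential signature not verified"}
    income = credential.get("annual_income")
    if income is None:
        # Missing data routes to a human, addressing the exclusion-error concern.
        return {"eligible": False, "reason": "income attribute missing; route to human review"}
    if income <= INCOME_CEILING:
        return {"eligible": True, "reason": f"income {income} within ceiling {INCOME_CEILING}"}
    return {"eligible": False, "reason": f"income {income} exceeds ceiling {INCOME_CEILING}"}

decision = check_subsidy({"verified": True, "annual_income": 180_000})
```

Because every branch returns a reason string, the same structure that decides eligibility also produces the audit trail and redress hook that Sethi calls for.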


Mr Ranjit Goswami broadened the perspective to the role of large-tech partners. He argued that AI should be pursued holistically to deliver “welfare for all, happiness for all”, echoing Tata’s founding principle that business exists to serve society [200-206]. Goswami emphasized the need for a common citizen database – ideally linking Aadhaar across all state departments – to avoid siloed data and enable every department to see the citizen as a whole rather than as a departmental client [210-217]. He urged large-tech partners to embed intelligence into core government platforms, sharing standards and interoperable data to achieve coordinated smart governance [200-207].


The ethical dimension was foregrounded by Ms Beena Sarkar, who represents the Women for Ethical AI South Asia chapter. She warned that new AI-enabled hardware, exemplified by smart glasses, can threaten privacy and safety, particularly for women, and urged the establishment of an India Safety Institute to vet emerging devices against gender-bias and societal risk before market entry [221-229][233-250][257-260]. Sarkar’s metaphor of the “Kali versus the Rakta Biji” effect – a warning not to create technologies that become weapons against half the population – underscored the need for rigorous assessment [257-260].


Addressing the broader socio-economic context, Dr Amit Kapoor highlighted stark skill gaps – only about 20 % of Maharashtra’s 9 crore-strong workforce possesses advanced (level 3-4) skills, while the remaining 80 % are at basic levels [270-274]. He pointed out that average broadband speed in Mumbai is only 58 Mbps, limiting AI’s reach in Tier-2 and Tier-3 cities [279-283]. Kapoor called for rapid investment in data-centre capacity, affordable connectivity and large-scale upskilling, noting that the state already hosts 16 % of India’s IT talent in Pune [285-288]. He argued that AI could be leveraged to monitor nutrition, water and sanitation at the PIN-code level, but warned that without proper education AI could become a “dumping ground” that degrades cognition through doom-scrolling and AI-generated content [292-300][301-307][311-317].


Across the panel, participants converged on several core agreements: the necessity of an intelligent, human-centred governance framework that is transparent, accountable and risk-based; the importance of capacity building and skill development; the promise of AI to improve public service delivery, law enforcement and predictive welfare; and the imperative of ethical safeguards, explainability and global cooperation [12-13][14-18][57-58][184-190][221-229]. The panel closed by urging policymakers, technologists and citizens to govern intelligence with wisdom, ensuring AI systems are safe, secure, equitable and aligned with core human values [32-33].


Session transcript: Complete transcript of the session
Mr. Virendra Singh

Artificial intelligence is real and it is influencing governance, markets, public services and even geopolitics. The question before us is not whether AI will shape governance. The question is whether governance is going to shape the artificial intelligence. It’s transforming governance in fundamental ways today. Through decision intelligence, public service delivery at scale and national security and strategic stability. Moreover, the governance challenge becomes uniquely complex as AI introduces speed, opacity to a level, concentration, global reach and dual use. This creates a governance paradox. Regulate too slowly and risk harm. Regulate too heavily and risk stagnation. The answer is not control versus innovation. The answer is intelligent governance. Therefore, the principle of AI governance should necessarily include human-centered design, transparency and accountability, risk-based regulations, global cooperation, and adaptive policies.

AI does not recognize borders. We need interoperable frameworks, shared safety standards, and cooperative oversight mechanisms. Governance frameworks must evolve as the artificial intelligence evolves. Static policies cannot manage dynamic intelligence. In this era, we move from individual, national-level policies to coordinated global norms, and that’s the necessity for today. History will not judge us by our sophisticated algorithms. It will judge us by the wisdom of our governance. The industrial revolution reshaped economies. The digital revolution reshaped communication. The AI revolution will reshape decision-making itself. With that power comes great responsibility. The digital revolution which we are undergoing today, surely we stand at crossroads. One path leads to inequity, instability and uncontrolled disruption. The other leads to augmented human capability, smart governance and inclusive prosperity.

The difference between these futures will not be determined by machines. It will be determined by us: the policy makers, the people who use it, and each and every person involved in the process of AI governance. Therefore, we need to commit ourselves today to building AI systems that are safe, secure, transparent, equitable, sustainable, and aligned with core human values. And that is what we are going to discuss here today. Last but not least, the message that this panel discussion and the fireside chat is going to give is: let us govern intelligence with wisdom. Thank you so much.

Moderator

Thank you, sir. We are truly honored to have with us today a leader who has been at the forefront of technology-driven governance in Maharashtra. I now request Shri Ashish Shelar, Honorable Minister of IT and Cultural Affairs, Government of Maharashtra, to grace us with a keynote address.

Mr. Ashish Shelar

Good morning to everybody. Respected guests, dignitaries, excellencies and all the policy makers, members of the media, dear friends, young challengers, ladies and gentlemen. Namaskar, Vande Mataram and a very good morning to all. India today is not merely hosting an AI summit. India is helping to write the operating system of the AI age. We meet at Bharat Mandapam under the banner Maha AI: Building Safe, Secure and Smart Governance, as part of the AI Impact Summit 2026, the first in its global series to be hosted in the Global South. Over 20 heads of state, 60 ministers and hundreds of AI leaders from industry and academia are here, reflecting a shared vision, a shared conviction that AI must be inclusive, responsible and resilient.

I hereby say that under the leadership of Chief Minister Devendra Fadnavis, Maharashtra has positioned itself as a living laboratory for AI in governance. Our partnerships with global technology leaders, for example our AI-powered Mahak Crime OS, showcased by Microsoft's Satya Nadella ji, have already transformed how we prevent, detect and investigate crime: faster response, shorter investigation cycles and more transparent processes. Simultaneously, our state digital agency, MahaIT, is building what we call an intelligent government infrastructure: a cloud-native, modular, API-driven backbone that uses AI to integrate services, predict needs, and respond in real time. This spans smart recruitment, AI-based property mapping for urban local bodies, real-time urban dashboards for traffic, weather and civic issues, and pilots in flood management and smart mobility.

The philosophy is simple. Use AI not to distance the state from the citizens, but to make governance more human, faster, more responsive, and more inclusive. In other words, scale empathy through insight. Across the world, public sectors are wrestling with the same three imperatives: serve citizens better, safeguard digital sovereignty, and adopt AI responsibly. Many countries have realized that interoperable public data and robust AI governance are becoming strategic infrastructure, at par with energy, transport or telecom. For Maharashtra, Maha AI is our response to this challenge. A safe, secure, smart governance stack must rest on five pillars: number one, compute and cloud at scale; number two, high-quality public data sets; number three, state AI governance; number four, interoperability and standards; and number five, capacity building.

Smart governance is not only about deploying chatbots or dashboards. It is about building resilient, auditable, human-centered AI systems into the nerve systems of cities and states: transport, energy, public safety, urban planning, disaster response and welfare delivery. Without trustworthy AI governance, smart cities risk opacity, bias, security breaches and erosion of public trust. In Maharashtra, we see internet health as a core policy concern. Just as physical health is essential for individuals, digital health is essential for societies. Disinformation, deepfakes, AI-generated fraud and cyberattacks can undermine democracies, markets and communities with unprecedented speed. Our response must be combined: robust cyber security, digital literacy and critical thinking, and a hybrid verification ecosystem. That is our response as far as our state is concerned.

I am really happy to be part of this summit, and at the same time to give a response addressing the challenges through our ecosystem under the name of Maha AI. We are here to present our case for building safe, secure, smart governance, and we appeal to all the best technologies and platforms of the world to associate, coexist and work with us. Thank you so much.

Moderator

Thank you, sir. Your vision for a digitally empowered Maharashtra truly sets the tone for everything we will discuss today. And now, the highlight of today's session. May I request all the panelists to join us on the stage, please: Shri Praveen Pardeshi, Shri Yashasvi Yadav, Dr. Anupam Chattopadhyay, Dr. Amit Kapoor, Mr. Suresh Sethi, Major Ranjit Goswami, Ms. Beena Sarkar, Mr. Devroop Dhar, Davinder Sandhu, Dr. Ganesh Ramakrishnan, Vikash Chandra Rastogi and Rajesh Agarwal. Thank you. Shri Yashasvi Yadav, Additional Director General of Police, Maharashtra Cyber Department, Government of Maharashtra; Dr. Anupam Chattopadhyay, Associate Professor, Nanyang Technological University, Singapore; Dr. Amit Kapoor, Chair, Institute for Competitiveness; Mr. Suresh Sethi, Managing Director and CEO, Protean eGov Technologies; Major Ranjit Goswami, Head, Corporate Affairs, Tata Consultancy Services; Ms. Beena Sarkar, Customer Success Executive, ServiceNow; and moderating this conversation is Mr. Devroop Dhar, Co-Founder and CEO, Primus Partners. I now hand over to Mr. Devroop Dhar to moderate this session.

Mr. Devroop Dhar

Thank you, Aditi, and a warm welcome to all our panel members. I'll start with Praveen Pardeshi, sir. So, sir, at Mitra, you have been experimenting a lot with AI. There are multiple AI initiatives which have been taken. If you could share your thoughts, your vision as we start this.

Mr. Praveen Pardeshi

So first, the most important thing about AI is getting some of the hard things right. One is energy, because, you know, remember President Trump just mentioned, why should we be paying for Indians who are processing most of our answers. So I think we are pushing on green energy, with more than 19,000 megawatts of solar to come up. That is the fuel for AI in future. The second is capacity building. We took all our staff from Mitra and a lot of our other departments to this AI university and gave them a course. And we also hope that there will be online courses available on iGOT where government staff can become empowered to use AI.

Then we look at what the impact on the economy is. On the economy, mostly people are concerned about jobs, and that's a real issue. One analysis from NITI shows that from 1950 to 2020, highly educated people with postgraduate degrees, engineers, were the ones who had a 95%-plus chance of getting jobs. But from 2020 till now, 0.65% is the rate at which the value and employability of physical jobs, that is, masons, bricklayers, home carers, is increasing vis-a-vis highly educated ones. So this is the impact of AI. So how should we do this better? One is, of course, as our Secretary IT mentioned, making it available in government, ensuring seamless access to services.

But another aspect which we don't look at, and which we are working on through our state data authority, is how do we also encash the data at a large scale? Because otherwise we become a sitting target for people to just use India's data for monetizing their own value. A big example is pharmaceuticals. India has the largest population and number of diseases, experiments, et cetera. Now, this is all health data, and it is very valuable for pharma companies. So the state data authority is working on issues wherein we can make it a single source of truth, make it available, and if there is a commercial possibility, make those resources available to India, that is, to our government, and not to be encashed for free.

So these are some of the applications. Government issues many, many orders. We have issued more than 150,000 orders, which are called government resolutions, or GRs. And it's a maze through which it is very difficult, even for the government officers whose departments they belong to, to understand what the latest position is in a complicated situation. So we are working with Professor Ganesh here from IIT Mumbai; Mitra and his team are working together to disentangle all these orders through a small language model, not a large language model, so that you can query at two levels.

One, for the government officers. Government officers should be able to ask, in any complicated situation, what the latest position is on whether additional FSI or a building permit can be given in this situation or not, and what the Supreme Court orders are. And on the other hand, citizens should also be able to ask under those rules. So this is called Maha GPT, and hopefully this will be the first application available both to government officers and to citizens. I'll stop here.
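The transcript gives no implementation detail for Maha GPT. A minimal sketch of the retrieve-then-answer idea over a GR corpus might look like the following; the GR identifiers and texts are hypothetical illustrations, and a simple keyword-overlap ranking stands in for the embedding search and small-language-model step a real system would use:

```python
# Hypothetical GR corpus; the real system would index 150,000+ orders.
GRS = {
    "GR-2021-014": "additional FSI for redevelopment projects in municipal areas",
    "GR-2019-087": "building permit conditions near coastal regulation zones",
    "GR-2023-002": "latest position on additional FSI subject to Supreme Court orders",
}

def tokenize(text):
    """Lowercase words with trailing punctuation stripped."""
    return {w.lower().strip(".,?") for w in text.split()}

def retrieve(query, k=2):
    """Rank GRs by keyword overlap with the query, a crude stand-in
    for semantic retrieval over the full order corpus."""
    q = tokenize(query)
    ranked = sorted(GRS.items(),
                    key=lambda kv: len(q & tokenize(kv[1])),
                    reverse=True)
    return ranked[:k]

def answer(query, audience="citizen"):
    """Two-level interface: officers and citizens query the same corpus;
    only the framing of the response differs."""
    ids = ", ".join(gr_id for gr_id, _ in retrieve(query))
    prefix = "Relevant GRs" if audience == "officer" else "Applicable rules"
    return f"{prefix}: {ids}"

print(answer("Can additional FSI be granted here?", audience="officer"))
```

The design point from the talk is that both audiences share one authoritative corpus, so the two "levels" are just different presentations of the same retrieval result.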

Mr. Devroop Dhar

Thank you, sir, for sharing all these wonderful examples. I'll come to Yashasvi Yadav, sir. Cyber is another large user of AI. So if you could share your examples, your experience around how cyber is using AI.

Mr. Yashasvi Yadav

Okay, so cyber security is one of the major concerns for law enforcement agencies all over the world. And I would like to draw your attention to the fact that under the visionary leadership of our Chief Minister, Mr. Devendra Fadnavis, who had the foresight to use AI in law enforcement work about five years ago, we have launched and implemented the Maharashtra Cyber Security Project, which generously borrows AI tools, technologies and algorithms. And we are fighting real crime with these technologies. The USP of this project is that the best and most state-of-the-art tools and technologies, experienced consultants in the field of cyber security, and experienced and professional police officers have all coalesced under one roof.

And there is a live police station as well. So this is how cyber security is being embellished through this project. And the beauty is that dark web monitoring, threat analysis, social media monitoring, or any type of cyber crime, be it sextortion, ransomware, cyber bullying, or the many other forms which cyber crime takes, can all be handled, at one's fingertips, through just one helpline number, 1930. If any citizen has any kind of cyber issue, they can just dial this 1930 number, and all the cyber solutions will be provided by more than 150 cyber consultants. A lot of AI tools are being used, and seamlessly.

And the best part of this whole exercise is that in less than six months, more than 1,000 crore rupees which would have gone into the hands of the scamsters have been frozen and are being returned to the victims. What a big relief to the victims. And more than 70 young girls who were being subjected to intense cyberbullying, blackmailing and sextortion, and were on the verge of committing suicide, were prevented from taking the extreme step because of very efficient AI tracking. Seventy lives have been saved in less than six months of operation. So that's how AI is at the forefront of being the bulwark against cyber security concerns.

I would like to draw your attention to just one report that we generated, which is called Echoes of Pahalgam. In that case, while the Indian Army was fighting a conventional war with Pakistan because of the Pahalgam incident, more than one million cyber attacks were launched by nation-state actors, whom we call APT groups, from Indonesia, Pakistan, even Turkey, and many other countries. And they were thwarted by AI tools which we call threat intelligence tools, like Luminar, Cognite, or Pathfinder, which are big data analytical tools. In the dark net, we still find the traces of these cyber attacks. So cyber crime is now slowly progressing into cyber terrorism and cyber warfare.

So that is what we have to be very, very careful about. And before I end this preliminary address, I would like to draw your attention to what lies beyond AI. A big, big threat is lurking. It is called quantum computing. Quantum computing can do processing in qubits, hundreds of millions of qubits, at speed, and it can solve in less than six seconds complex problems which the best of supercomputers would take more than 50 years to do. So quantum computing can break the best of encryptions, including the RSA encryption of the banking industry, including blockchain technology. Now if these encryptions are broken in less than a few minutes, the whole financial system can be lopsided, and lots of money can flow to actors, or threat actors, of whom we are not aware at all.

Bitcoin can be broken. Even credit card encryptions can be broken. Banking systems' encryptions can be broken. So right now we have to prepare for what quantum computing can give us in terms of pros, and what the shortcomings or dangers lurking because of quantum computing, the cons, can be. We have to prepare, because China and other countries have already invested 15 billion dollars, or even close to 20 billion dollars. We have invested only 1 billion dollars till now. So that's how we have to catch up with quantum computing before it's too late. So this is the cyber security and law enforcement perspective on AI, and I would like to pass on the baton to the next speaker. Thank you.
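The encryption point rests on a simple fact: RSA's security depends entirely on the difficulty of factoring the modulus, which Shor's algorithm on a sufficiently large quantum computer would remove (the speaker's timing figures are his own). A toy sketch with a deliberately tiny, textbook-style modulus shows how recovering the factors yields the private key; the numbers here are illustrative and require Python 3.8+ for the modular inverse via `pow`:

```python
def trial_factor(n):
    """Recover an odd factor of n by brute force. Feasible only for tiny n;
    RSA's safety rests on this being infeasible for 2048-bit moduli, and
    Shor's algorithm on a large fault-tolerant quantum computer would
    remove that barrier, which is why post-quantum migration matters."""
    f = 3
    while f * f <= n:
        if n % f == 0:
            return f
        f += 2
    return n  # n itself is prime

# Toy RSA key: n = 61 * 53 = 3233, public exponent e = 17.
# Anyone who can factor n can derive the private exponent d.
n, e = 3233, 17
p = trial_factor(n)                  # smaller prime factor, 53
q = n // p                           # 61
d = pow(e, -1, (p - 1) * (q - 1))    # private key from the recovered factors

msg = 65
cipher = pow(msg, e, n)              # "intercepted" ciphertext
assert pow(cipher, d, n) == msg      # decrypts with the broken key
```

Post-quantum cryptography replaces the factoring assumption with problems not known to be quantum-breakable, which is the migration the speaker is urging.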

Mr. Devroop Dhar

Thank you, sir, that was quite reassuring as well. And since you spoke about quantum, I want to bring in Dr. Anupam Chattopadhyay. Anupam, you're working at the intersection of quantum and AI. So if you could share your thoughts, how are things moving in that direction? … pooled into the product in order to help this. Thanks, Anupam, for giving a perspective, both from research as well as industry. So I'll come to you, Suresh. You're working extensively in DPIs, and there's a strong interaction between DPIs and AI. So from your experience, how are things moving in the DPI space, and how is AI influencing it?

Mr. Suresh Sethi

Thanks, Devroop. I think from a DPI perspective, we are all very familiar. We today have population-scale digital rails. And, you know, that puts us in a very sweet spot, because a lot of times when we start embedding AI into any technology, the question is: are we ready to embed AI or not? And I think there was a reference to data sets, how data is organized, how you can enable AI on top of it. So first of all, the population-scale DPI that we have gives us a significant advantage, whether we talk about identity, or our UPI rails, which is the payment and transactional layer that comes on top of it.

And similarly, when we look at data itself, DigiLocker today has millions of authenticated documents that come into play over there. So while we have the digital infrastructure in place, if we can embed an intelligence layer on top of it, and if the question is around targeting subsidy, getting the right beneficiary, putting the money in the hands of the right person, that becomes a very, very important and significant leverage. I will just take two or three examples where we see AI playing a significant role. One is that as we move from static identity to dynamic eligibility. Now, we’ve seen it all happen. Today, we have digitally verifiable credentials. So the moment you are using static identity, you are only able to prove who you are, what do you do.

and then you are applying to get some sort of benefits or subsidy coming through to you. But if you have verifiable credentials, these are credentials which are machine readable. We talk about the concept of blue dot, which technically means all of us have certain attributes associated with us. If these are available in a machine-readable format, then AI can actually determine who is eligible for what subsidy. The second part comes down to this: are we being reactive, or can we do predictive governance? And predictive governance can be strongly enabled by AI, because the moment you have credentials which are digitally verifiable, you are actually able to predict who needs some sort of subsidy.

Now today, if there is a distress in income and that can be tracked, you can trigger some sort of benefit, coming at a government level. If consented data is being shared with the government, the same can come through. And last but not least is the important part related to inclusion error and exclusion error. When we talk about inclusion error, we are talking about leakages. When we talk about exclusion error, we are saying the right person is not getting what is due to them. So your ability to predict with precision using verification is going to be very critical. Again, an AI layer can be embedded over there.

But all this, and we've heard it before, very clearly needs to have guardrails around it. AI should be explainable. The moment we say explainable, a decision taken today not to give benefits to somebody should be very clearly explained, and similarly, if there are benefits going out, that should also be explainable. The second part is auditable: whatever we are doing, there has to be an audit layer to explain what has happened. And more importantly, there should be a human redressal pathway, because ultimately you can't leave everything to machines. You have to have a human coming into play, with accountability settled over there. So I think these are critical aspects which can make governance more predictive, more precise, and more proactive going forward by embedding an intelligence layer into the DPI.
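The explainable, auditable decision flow described here can be sketched minimally. The credential fields, income ceiling and rules below are hypothetical illustrations, not an actual DPI schema; the point is only that every rule outcome is recorded (explainability) and every decision leaves a trail (auditability):

```python
from dataclasses import dataclass, field

@dataclass
class Decision:
    eligible: bool
    reasons: list                          # explainability: each rule outcome recorded
    audit_log: list = field(default_factory=list)  # auditability: trail of what was checked

def check_subsidy(credential, income_ceiling=200000):
    """Evaluate a machine-readable credential (standing in for the 'blue dot'
    attributes above) against two illustrative eligibility rules."""
    reasons, eligible = [], True
    if not credential.get("identity_verified"):
        eligible = False
        reasons.append("identity not verified")
    if credential.get("annual_income", 0) > income_ceiling:
        eligible = False
        reasons.append(f"income above ceiling of {income_ceiling}")
    if eligible:
        reasons.append("all eligibility rules satisfied")
    d = Decision(eligible, reasons)
    d.audit_log.append({"rules_checked": 2, "outcome": eligible})
    return d

d = check_subsidy({"identity_verified": True, "annual_income": 150000})
print(d.eligible, d.reasons)
```

A production system would add the third element Sethi names, a human redressal pathway, for example by routing every rejection, with its recorded reasons, to a human reviewer rather than treating the machine decision as final.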

Mr. Devroop Dhar

Thanks, Suresh. I think those are very valid and meaningful points. I'll come to Ranjit. Now, we're talking about AI; there is a lot happening, and we have seen and heard about so many things at the summit. How does a major tech company like TCS, which is doing a lot of work in this, come in and collaborate with state governments? How can you enable that? What can be the steps in that?

Mr. Ranjeet Goswami

Thank you, Devroop. I think we need to first take a holistic view of what we are trying to achieve with AI. The tagline for this summit, if we go by that, welfare for all, happiness for all, very holistically captures it. Coming from the Tata Group, I am reminded that almost 170 years back, our founding father, Jamshedji Tata, gave us the guidance that society or community is not another stakeholder in the business; it is the purpose for which the business exists in the first place. Similarly, if we were to apply the same analogy here today, I think AI is not a technical tool that is fundamentally going to make governance more efficient. It is fundamentally meant for bringing benefit, welfare and happiness to the community at large.

It is fundamentally meant for how to bring the benefit, welfare and happiness to the community at large. If we try and approach the question from that perspective, it definitely comes out. And like even Suresh alluded to, how do we make sure that it is inclusive, the people get the right benefits that they are entitled to, and do not go to somebody who is not entitled to. The colleague from the police forces also spoke about as to how the criminal tracking and other things. These are translating the intent into action at the ground level. Lastly, when it comes to organizations like DCS, we believe that each department in the government firstly should not be treated in isolation.

Each department in the government should have the ability to have a common database of people, be able to extract the information, and ensure that the citizen is seen as a citizen of the state or the country, and not as a citizen of the department. So that common databasing is something we are trying to approach. We have the Aadhaar database; not every department is yet connected to it. Of course, we are trying to find a way for that to become the major point of it. So small steps like that, and of course, bringing the platform's intelligence to its core. Those are fundamentally the steps that we have taken.

Mr. Devroop Dhar

Thanks, Ranjit. With that, I'll go to Bina. Bina, I want to talk about the aspect of ethical AI and biases, especially as you work with Women for Ethical AI. How do you see biases, or maybe biases around gender and diversity, creeping in, and what needs to be done around this?

Ms. Beena Sarkar

Thank you, Devroop. So, yes, I do work, I volunteer, with the Women for Ethical AI South Asia chapter. It's powered by UNESCO. So one of the key questions that we ask ourselves every time is: what are we solving for? And I've been looking at the various solutions that are debuting or being showcased as part of the India AI mission. Many a time, when we look at a particular piece of hardware or any new device through which we are delivering what we now call AI services, mostly on large language models, what we sometimes seem to miss is the wood for the trees. I will give a very hard example over here.

We do know smart glasses are not a new phenomenon. They were introduced as Google Glass way back, I remember 2013, 2015, 2016, about that range. One of the reasons why it was recalled was safety concerns, because people were taking images and videos without consent. We have seen the return of these glasses, and those concerns have not gone away. And yet you see them in the market; you see them in India being sold in any optic store, in my neighborhood optic store. I have colleagues who flaunt them, saying, I am so cool, I have the latest piece of technology. So when we talk about how you build ethics and governance, as Yashasvi sir said, you need the best framework.

So what this means is that we are not giving guns to everybody, right? India has been very, very smart about it. Of course, there are certain countries where owning a gun is fine; it's as per their rights. That doesn't mean India has to adopt it, right? We have our very able police force; we have the Indian Army, right? So if something is exciting outside, one needs to contextualize it, humanize it, see if it is threatening 50 percent of your population, and I'm a part of that population, and then decide whether it even needs to exist in that market. So when you're building out solutions and devices: we have the India AI Safety Institute, which was instituted in 2025, I do know that.

I do know that. What I would urge policymakers is that it should not, while we do have it and I do know we are working with Research Institute, industry, industry, ideally, if any new device comes like this, the first line of defense, so to speak, should be this institute. That actually should determine whether it actually creates a problem for the police force, for cyber security, right? Does it threaten 50 % of the population? We are already seeing it playing out in the UK, US. There is no policy that protects us. Even now, we are not protected as women. Leave women, even children, right? So I think that is something one really needs to take into consideration. While we love technology, trust me, as a lady, I find it extremely liberating to be able to create applications with just language.

But if you use that technology against us by bringing out devices and hardware that endanger us, I feel that’s where it breaks down. So you definitely, as part of ethics, you need to evaluate it from that framework. I call it the Kali versus the Rakta Bija effect. I’m sure some of you know that. Why would you create a Rakta Bija? Create a Kali.

Mr. Devroop Dhar

So, Beena, I think you have made a very valid point. With that, I'll move to Dr. Amit Kapoor. We're talking of AI and its impact and benefits. How do you see this benefit percolating to the next level of cities, tier 2, tier 3, and other places?

Dr. Amit Kapoor

So, Devroop, this is a very important question. And as I was hearing all the panelists here, I would like to raise a few points. We definitely agree that, yes, AI can be transformational. But we have to understand a couple of points here. One of them is: what is the quality of education that we are giving, so that people are able to use it in right earnest? In fact, the issue that Beena was talking about is about ethics, education, and so on and so forth. The larger point here is, when you talk about the skill development levels in Maharashtra, out of the 100%, or 9 crore workforce, that you have, only about 20% of them are at skill levels 3 and 4.

80% of them are at skill levels 1 and 2. So if you have to move beyond that, you need to do something far greater. That means you need to embed your education system and build it very strongly. And the second point out here is: if you really want to talk about tier 2 and tier 3 cities, we have to also understand what the level of penetration and quality of internet is in these locations. We can tom-tom about internet and everything, but the numbers are not very supportive right now. And when you talk about internet and broadband connectivity, there are severe issues with this within the state of Maharashtra itself, which is supposedly one of the finest states in terms of internet connectivity.

The average speed of internet traffic in Bombay, or Mumbai, is about 58 Mbps on a broadband network. When you are talking about usage of AI, and if you want to take it to the masses, then you have to have far better, deeper internet connectivity and broadband. I think it has to be done on a war footing. Not that we are not getting there, but it will have to be done faster and quicker. The second thing is going to be the inadequacy of supporting infrastructure. This is where I also see that there is a tremendous level of opportunity in Maharashtra to create this. Because if you really look at it, 16% of India's IT workforce, or what you call the technology workforce, sits in one single city in Maharashtra.

That is Pune. So if you're really talking about it, that means Maharashtra has the potential, the talent, to really take it to the next level. And that is where you have to build infrastructure, the data centers, et cetera; the opportunity does exist. And last but not least, it is going to be about cost and affordability. You will have to bring cost and affordability to these services as you go along. But having said that, there is a larger potential we have not touched here. When you talk about tier 2 and tier 3 cities, this technology has a huge possible impact there, and that is about nutrition.

Today, Maharashtra actually has a problem with nutrition. Fifty percent of the people in Maharashtra are malnourished even today. How do I use this technology to assess what is happening at my PIN code level, or a smaller level of geography, in various cities and locations? The second thing is about water and sanitation, and access to basic knowledge. Can AI solve my education problem in tier 2 and tier 3 cities? In fact, none of us is talking about the elephant in the room. And that elephant in the room is that we are all super excited about AI, but we are not understanding that AI is also going to be the biggest dumbing-down element for the society.

In fact, when you talk about AI itself, it is going to make bonobos out of us. Look at doom scrolling, look at Instagram: what is happening to our children? And that is exactly what AI is going to do. How do we set our education system right? That's what you have to do in tier 2 and tier 3 cities. Last point, and that is about the higher education space itself. When you talk about your workforce, close to about 50% of it is underemployed. So I have to disagree with Praveen on one small point. He made a very powerful point in terms of saying how, after 2020, there has been a transformation.

In terms of how people are getting jobs, et cetera, I do agree with that. But the larger point here is that we are also not preparing our workforce right, and that is happening in tier 2 and tier 3 cities. So we need to take it there. The potential exists. As of today, Maharashtra is the engine of growth in India; we cannot debate that. Even today, it contributes close to 17% to 18% of India's GDP, and it will definitely define India's growth story in the future. If Maharashtra does it right, then the country can follow suit.

Mr. Devroop Dhar

Thank you, Dr. Amit. And thanks to all the panelists. So with that, we’ll come to the end of the panel discussion.

Moderator

Thank you so much. Thank you to all our esteemed panelists and senior officers who are here. May I request all our panelists to just step forward for a photo? Very interesting, sir. May I request you to join our esteemed panelists? Thank you.

Related Resources — Knowledge base sources related to the discussion topics (38)
Factual Notes — Claims verified against the Diplo knowledge base (5)
Confirmed (high)

“Artificial intelligence is already reshaping governance, markets and geopolitics, and the crucial question is how governance will shape AI.”

The knowledge base explicitly states that AI is influencing governance, markets, public services and geopolitics, and that the key question is whether governance will shape artificial intelligence [S1].

Additional Context (medium)

“Singh’s concept of “intelligent governance” built on human‑centred design, transparency, accountability, risk‑based and adaptive regulations, and global cooperation.”

S108 discusses building trust in AI governance through practical regulatory design and stakeholder responsibilities, adding nuance to Singh’s description of intelligent, risk‑based, adaptive governance.

Confirmed (high)

“Ashish Shelar presented Maharashtra’s “Maha AI” programme as a living laboratory for AI‑enabled public services.”

The source titled “MahaAI Building Safe Secure & Smart Governance” confirms the existence of the Maha AI initiative and its focus on safe, secure, smart governance [S2].

Confirmed (medium)

“The AI Impact Summit 2026, the first in the global series to be hosted in the Global South, brought together more than 20 heads of state, 60 ministers and hundreds of AI leaders.”

Press briefings about the AI Impact Summit 2026 acknowledge the summit’s occurrence and its international participation, confirming the event’s reality though not the exact attendance numbers [S116].

Additional Context (low)

“Maharashtra’s AI stack rests on five pillars – compute and cloud at scale, high‑quality public data sets, state AI governance, interoperable standards and capacity building.”

A separate five-pillar framework for AI governance is described in the knowledge base for France’s digital strategy, showing that multi-pillar approaches are a recognized model for AI policy design [S22].

External Sources (118)
S1
MahaAI Building Safe Secure & Smart Governance — Mr. Virendra Singh established the intellectual foundation by reframing the central question facing policymakers. Rather…
S2
MahaAI Building Safe Secure & Smart Governance — – Mr. Virendra Singh- Dr. Amit Kapoor
S3
MahaAI Building Safe Secure & Smart Governance — – Mr. Praveen Pardeshi- Mr. Ranjeet Goswami- Mr. Suresh Sethi – Mr. Virendra Singh- Mr. Suresh Sethi
S4
MahaAI Building Safe Secure & Smart Governance — Speakers:Mr. Praveen Pardeshi, Mr. Ranjeet Goswami, Mr. Suresh Sethi Speakers:Mr. Suresh Sethi, Mr. Praveen Pardeshi S…
S5
MahaAI Building Safe Secure & Smart Governance — -Mr. Ashish Shelar- Role/Title: Honorable Minister of IT and Cultural Affairs, Government of Maharashtra, Area of expert…
S6
AI Meets Agriculture Building Food Security and Climate Resilien — May I invite Dr. Devish Chaturvedi, Secretary, Ministry of Agriculture and Farmers’ Welfare. Sir, please come onto the s…
S7
Keynote-Olivier Blum — -Moderator: Role/Title: Conference Moderator; Area of Expertise: Not mentioned -Mr. Schneider: Role/Title: Not mentione…
S8
Day 0 Event #250 Building Trust and Combatting Fraud in the Internet Ecosystem — – **Frode Sørensen** – Role/Title: Online moderator, colleague of Johannes Vallesverd, Area of Expertise: Online session…
S9
Conversation: 02 — -Moderator: Role/Title: Event moderator; Area of expertise: Not specified
S10
MahaAI Building Safe Secure & Smart Governance — – Mr. Ashish Shelar- Mr. Praveen Pardeshi- Mr. Yashasvi Yadav
S11
MahaAI Building Safe Secure & Smart Governance — Speakers:Mr. Ashish Shelar, Mr. Praveen Pardeshi, Mr. Yashasvi Yadav Speakers:Mr. Yashasvi Yadav Speakers:Mr. Yashasvi…
S12
MahaAI Building Safe Secure & Smart Governance — -Dr. Amit Kapoor- Role/Title: Chair, Institute for Competitiveness, Area of expertise: Economic policy, competitiveness,…
S13
MahaAI Building Safe Secure & Smart Governance — -Major Ranjit Goswami- Role/Title: Head, Corporate Affairs, Tata Consultancy Services, Area of expertise: Technology sol…
S14
MahaAI Building Safe Secure & Smart Governance — Ranjeet Goswami from Tata Consultancy Services brought a private sector perspective informed by the Tata Group’s communi…
S15
MahaAI Building Safe Secure & Smart Governance — – Mr. Ashish Shelar- Mr. Praveen Pardeshi- Mr. Yashasvi Yadav – Mr. Suresh Sethi- Mr. Praveen Pardeshi
S16
MahaAI Building Safe Secure & Smart Governance — – Mr. Praveen Pardeshi- Ms. Beena Sarkar – Ms. Beena Sarkar- Most other panelists
S17
MahaAI Building Safe Secure & Smart Governance — Raised by:Ms. Beena Sarkar
S18
MahaAI Building Safe Secure & Smart Governance — -Mr. Ashish Shelar- Role/Title: Honorable Minister of IT and Cultural Affairs, Government of Maharashtra, Area of expert…
S19
Keynote-Rishad Premji — -Mr. Dario Amote: Role/Title: Not specified; Area of expertise: Artificial intelligence (described as pioneer and though…
S20
https://dig.watch/event/india-ai-impact-summit-2026/mahaai-building-safe-secure-smart-governance — Thanks, Devroop. I think from a DPI perspective, we are all very familiar. We today have population -scale digital rates…
S21
Open Forum #33 Building an International AI Cooperation Ecosystem — Participant: ≫ Distinguished guests, dear friends, it is a great honor to speak to you today on a topic that is reshapin…
S22
Panel Discussion Summary: AI Governance Implementation and Capacity Building in Government — So two years ago, the French Prime Minister’s Digital Directorate elaborated a strategy based on five pillars. The first…
S23
Global AI adoption boosted by Infosys and Microsoft — Infosys and Microsoftare expanding their collaborationto drive the global adoption of generative AI and Microsoft Azure….
S24
Deepfakes and the AI scam wave eroding trust — Deepfakes force an uncomfortable reassessment of how trust works online. For decades,digital technologiesexpanded access…
S25
The role of AI in fighting deepfakes and misinformation — Deepfakes and misinformation have emerged as significant threats in the digital age. Deepfakes, created using AI techniq…
S26
WS #255 AI and disinformation: Safeguarding Elections — Addressing Disinformation Ayobangira Safari Nshuti Need for increased transparency and accountability from platforms …
S27
Artificial intelligence (AI) – UN Security Council — In conclusion, the discussions highlighted the importance of fostering transparency and accountability in AI systems. En…
S28
WS #236 Ensuring Human Rights and Inclusion: An Algorithmic Strategy — Audience: Thank you so much. My name is Zemizna Atareki and I’m representing the Saudi Green Building Forum, which i…
S29
Building Population-Scale Digital Public Infrastructure for AI — Well, it’s difficult to choose only one thing, I guess. Maybe this perspective from management, you’re always looking fo…
S30
Panel Discussion AI in Digital Public Infrastructure (DPI) India AI Impact Summit — So I think, firstly, India’s journey in DPIs has been a fascinating one. It makes me immensely proud that whichever coun…
S31
Panel Discussion AI in Digital Public Infrastructure (DPI) India AI Impact Summit — And accessibility has to be also broadened in terms of multi -modality and also, where necessary, include a human in the…
S32
Panel Discussion: Europe’s AI Governance Strategy in the Face of Global Competition — While disagreeing that governance is dead, Curioni acknowledges that governance and regulation must evolve significantly…
S33
https://app.faicon.ai/ai-impact-summit-2026/mahaai-building-safe-secure-smart-governance — The philosophy is simple. Use AI not to distance the state from the citizens, but to make governance more human, faster,…
S34
AI is here. Are countries ready, or not? | IGF 2023 Open Forum #131 — Galia Daor:Yeah, thanks very much. I admit it’s a bit challenging to speak after Allison on that front, but I will try, …
S35
Digital Public Infrastructure, Policy Harmonisation, and Digital Cooperation – AI, Data Governance,and Innovation for Development — 4. Enhancing capacity building initiatives Chris Odu: Thank you for that very, very elaborate explanation, Binti. It’s …
S36
The Foundation of AI Democratizing Compute Data Infrastructure — “So we are identifying agriculture, education, healthcare, and some more.”[83]. “So inspire them that they can really do…
S37
Day 0 Event #173 Building Ethical AI: Policy Tool for Human Centric and Responsible AI Governance — Chris Martin: Thanks, Ahmed. Well, everyone, I’ll walk through I think a little bit of this presentation here on what…
S38
Ethics and AI | Part 6 — Even if the Act itself does not make direct reference to “ethics”, it is closely tied to the broader context of ethical …
S39
Driving Enterprise Impact Through Scalable AI Adoption — Summary:Both speakers agree that AI is a powerful tool but emphasize the need for human oversight, critical thinking, an…
S40
Global AI Policy Framework: International Cooperation and Historical Perspectives — Werner identifies three critical barriers that prevent AI for good use cases from scaling globally. He emphasizes that d…
S41
Elevating AI skills for all — Skills gap and workforce transformation challenges The rapid pace of technological change is creating a significant ski…
S42
AI Impact Summit 2026: Global Ministerial Discussions on Inclusive AI Development — And this requires proactive and coherent policy responses. First, people must be at the center of AI strategy, as we hea…
S43
The Right to Data for Development (Bluenumber) — Governments can be effective data stewards if they recognise the value of data and invest in data stewardship. However, …
S44
Open Forum #14 Data Without Borders? Navigating Policy Impacts in Africa — Souhila Amazouz: Thank you. Good morning. Do you hear me? Yes, yes. Yes, good morning, everybody. And thank you, m…
S45
NRIs MAIN SESSION: DATA GOVERNANCE — Another significant point discussed is the positive impact of open data on local development and society. The speakers p…
S46
The Role of Government and Innovators in Citizen-Centric AI — Summary:There is unanimous agreement that AI can transform public services by making them more accessible, personalized,…
S47
Leaders’ Plenary | Global Vision for AI Impact and Governance Morning Session Part 1 — man’s promise. It can enhance public service delivery, it can improve decision -making, it can optimize resource managem…
S48
AI for Democracy_ Reimagining Governance in the Age of Intelligence — It should be used by everyone. And that’s why the second title, like in the second part of our theme is reimagining gove…
S49
Vers un indice de vulnérabilité numérique (OIF) — Furthermore, data utilisation and management regulations play an instrumental role in addressing digital threats. The mo…
S50
Regulating Open Data_ Principles Challenges and Opportunities — A sort of symbolic nod to open data. It can turn into an unguarded channel through which value, agency and even sovereig…
S51
Panel #3: « Gouverner les données : entre souveraineté, éthique et sécurité à l’ère de l’interconnexion » — Drudeisha Madhub Merci. Je vous remercie pour la question. En fait, oui, on a considéré tous ces éléments en fabriquant …
S52
https://app.faicon.ai/ai-impact-summit-2026/mahaai-building-safe-secure-smart-governance — 80 % of them are at skill level 1 and 2. So if you have to move beyond that, you need to do something far greater. That …
S53
MahaAI Building Safe Secure & Smart Governance — So, Devru, this is a very important question. And as I was hearing all the panelists here, I would like to rake a few po…
S54
Building Population-Scale Digital Public Infrastructure for AI — bought which farmers use and millions of farmers today, 2 .5 million farmers have downloaded this app. And this was buil…
S55
Regional Leaders Discuss AI-Ready Digital Infrastructure — Do we expect India to put so much money on foundation to have a ground-level impact? But India has a scale. So what we n…
S56
MahaAI Building Safe Secure & Smart Governance — “Bitcoin can be broken Even credit card Encryptions can be broken Banking systems, encryptions can be broken So right no…
S57
UNSC meeting: Scientific developments, peace and security — Quantum computing could endanger global cybersecurity The speaker addressed the UN Security Council on emerging threats…
S58
Opening of the session/OEWG 2025 — Malaysia: Mr Chair, I would like to begin by expressing my delegation’s appreciation to you for convening this substan…
S59
Open Forum #33 Building an International AI Cooperation Ecosystem — Participant: ≫ Distinguished guests, dear friends, it is a great honor to speak to you today on a topic that is reshapin…
S60
Multi-stakeholder Discussion on issues about Generative AI — Thus, collaboration, dialogue, and capacity-building around AI are encouraged. Collaboration is necessary due to the cro…
S61
How to make AI governance fit for purpose? — Given that AI technologies are inherently global, effective governance requires international engagement and cooperation…
S62
Main Session | Policy Network on Artificial Intelligence — Benifei argues for the importance of developing common standards and definitions for AI at a global level. He suggests t…
S63
Panel Discussion Summary: AI Governance Implementation and Capacity Building in Government — Analysis of context-context is crucial, I will say in two words why. Then we go to stakeholder engagement, and this morn…
S65
Open Forum #3 Cyberdefense and AI in Developing Economies — Capacity Building and Human Resources Effective capacity building requires training at multiple levels – technical trai…
S66
The Role of Government and Innovators in Citizen-Centric AI — Capacity development | Artificial intelligence Skills, Reskilling, and Acceptance
S67
WS #288 An AI Policy Research Roadmap for Evidence-Based AI Policy — The roadmap is built upon core principles including “human and planetary welfare, accountability and transparency, inclu…
S68
Global AI Policy Framework: International Cooperation and Historical Perspectives — The speakers demonstrate significant consensus on key principles including the need for inclusive governance, building o…
S69
MahaAI Building Safe Secure & Smart Governance — Artificial intelligence is real and it is influencing governance, markets, public services and even geopolitics. The que…
S70
MahaAI Building Safe Secure & Smart Governance — His solution advocated for “intelligent governance” built upon five core principles: human-centred design, transparency …
S71
Panel Discussion: Europe’s AI Governance Strategy in the Face of Global Competition — While disagreeing that governance is dead, Curioni acknowledges that governance and regulation must evolve significantly…
S72
Leaders’ Plenary | Global Vision for AI Impact and Governance Morning Session Part 2 — Honourable Minister, Honourable Ministers, Excellencies, Ladies and Gentlemen, I am deeply honoured to be with you today…
S73
Building Inclusive Societies with AI — When asked about government initiatives, Manisha Verma, Additional Chief Secretary of Maharashtra’s SEED Department, out…
S74
WS #100 Integrating the Global South in Global AI Governance — Roeske Martin: Thanks Fadi, great question. So I think you made a great point that came out in the research which was …
S75
Digital Public Infrastructure, Policy Harmonisation, and Digital Cooperation – AI, Data Governance,and Innovation for Development — 4. Enhancing capacity building initiatives Chris Odu: Thank you for that very, very elaborate explanation, Binti. It’s …
S76
AI is here. Are countries ready, or not? | IGF 2023 Open Forum #131 — Galia Daor:Yeah, thanks very much. I admit it’s a bit challenging to speak after Allison on that front, but I will try, …
S77
WS #145 Revitalizing Trust: Harnessing AI for Responsible Governance — Pellerin Matis: Yeah, I mean, AI is a top priority for governments, as you said. But we need to be realistic, because…
S78
Planetary Limits of AI: Governance for Just Digitalisation? | IGF 2023 Open Forum #37 — Atsuko Okuda:Asko. Thank you very much for giving… Thank you. First of all, I would like to thank the organizer to inv…
S79
Ethics and AI | Part 6 — Even if the Act itself does not make direct reference to “ethics”, it is closely tied to the broader context of ethical …
S80
Artificial intelligence (AI) – UN Security Council — Algorithmic transparency is a critical topic discussed in various sessions, notably in the9821st meetingof the AI Securi…
S81
Ethical AI_ Keeping Humanity in the Loop While Innovating — Debjani emphasizes that humans must take accountability for AI ethics rather than delegating responsibility to technolog…
S82
Ethical AI_ Keeping Humanity in the Loop While Innovating — So I think the accountability on humans is what we have to focus on. And going back to your question, if you’re talking …
S83
AI promises, ethics, and human rights: Time to open Pandora’s box — Bias, discrimination, and fairness: Are biases being propagated with data sets used to train algorithms? How transparent…
S84
Elevating AI skills for all — Skills gap and workforce transformation challenges The rapid pace of technological change is creating a significant ski…
S85
AI 2.0 The Future of Learning in India — Despite optimistic visions, significant challenges remain. The infrastructure gap between urban and rural areas requires…
S86
Global AI Policy Framework: International Cooperation and Historical Perspectives — Werner identifies three critical barriers that prevent AI for good use cases from scaling globally. He emphasizes that d…
S87
Empowering India & the Global South Through AI Literacy — The discussion acknowledged several ongoing challenges. The scale required to reach India’s vast educational system pres…
S88
Opening — The overall tone was formal yet optimistic. Speakers acknowledged the serious challenges posed by rapid technological ch…
S89
Opening address of the co-chairs of the AI Governance Dialogue — The tone is consistently formal, diplomatic, and optimistic throughout. It maintains a ceremonial quality appropriate fo…
S90
Governments, Rewired / Davos 2025 — The overall tone was optimistic and forward-looking, with speakers highlighting the transformative potential of technolo…
S91
DC-BAS: Blockchain Assurance for the Internet We Want and Can Trust — The overall tone was optimistic and forward-looking. Speakers were enthusiastic about the potential of these technologie…
S92
Summit Opening Session — The tone throughout is consistently formal, diplomatic, and collaborative. Speakers maintain an optimistic and forward-l…
S93
Host Country Open Stage — The tone throughout the discussion was consistently optimistic and solution-oriented. All presenters maintained a profes…
S94
Host Country Open Stage — The tone throughout the discussion was consistently optimistic, professional, and solution-oriented. All speakers presen…
S95
How Small AI Solutions Are Creating Big Social Change — The discussion maintained a consistently optimistic and collaborative tone throughout. Panelists demonstrated mutual res…
S96
Responsible AI in India Leadership Ethics & Global Impact — The tone was professional and pragmatic throughout, with speakers sharing concrete examples and practical insights rathe…
S97
Pathways to De-escalation — The overall tone was serious and somewhat cautious, reflecting the gravity of cybersecurity challenges. While the speake…
S98
Cutting through Cyber Complexity / DAVOS 2025 — The tone of the discussion was largely serious and concerned, given the gravity of cybersecurity threats. However, there…
S99
Strengthening Corporate Accountability on Inclusive, Trustworthy, and Rights-based Approach to Ethical Digital Transformation — The discussion maintained a professional, collaborative tone throughout, with speakers demonstrating expertise while ack…
S100
Lightning Talk #136 The Embodied Web: Rethinking Privacy in 3D Computing — The overall tone was informative and cautionary. The speaker presented the topic with a sense of urgency, emphasizing bo…
S101
Defending the Cyber Frontlines / Davos 2025 — The discussion began with a serious, concerned tone as panelists outlined cyber threats and challenges. As the conversat…
S102
Global Digital Compact: AI solutions for a digital economy inclusive and beneficial for all — ## The Urgency of Action: Skills Gaps and Infrastructure Deficits
S103
Regional Leaders Discuss AI-Ready Digital Infrastructure — The discussion maintained a consistently optimistic yet pragmatic tone throughout. Panelists were enthusiastic about AI’…
S104
Building the AI-Ready Future From Infrastructure to Skills — And so I think that it’s likely announcements that suggest that countries like Japan and Europe and UK and others may be…
S105
AI and Human Connection: Navigating Trust and Reality in a Fragmented World — The tone began optimistically with audience engagement but became increasingly concerned and urgent as panelists reveale…
S106
AI as critical infrastructure for continuity in public services — “I believe that there is perhaps awareness challenge as well as the capacity challenge, because I think that this whole …
S107
AI for Democracy_ Reimagining Governance in the Age of Intelligence — at the AI Summit here in Delhi. I am deeply honored to be here today in the presence of the honorable speaker to address…
S108
Who Watches the Watchers Building Trust in AI Governance — Impact:This reframing shifted the conversation away from broad comparisons of national approaches toward more specific d…
S109
AI for agriculture Scaling Intelegence for food and climate resiliance — A lot of questions in the same question. So what I’ll do is I’ll just first take you through the initiatives. First of a…
S110
Harnessing Collective AI for India’s Social and Economic Development — Antaraa Vasudev describes how AI can facilitate massive citizen participation in governance through tools like chatbots …
S111
AI-Driven Enforcement_ Better Governance through Effective Compliance & Services — Respected Honorable Chairman, Distinguished Speakers, Eminent Guests, Colleagues and Participants. It is my privilege to…
S112
AI-Driven Enforcement_ Better Governance through Effective Compliance & Services — Sir explained how large -scale data integration and cross -validation enable risk -based proactive and real -time compli…
S113
A Conversation with Satya Nadella and Klaus Schwab — He expresses his gratitude for the cooperation with Accenture, indicating a collaborative relationship. Satya has been …
S114
https://app.faicon.ai/ai-impact-summit-2026/ai-driven-enforcement_-better-governance-through-effective-compliance-services — Sir explained how large -scale data integration and cross -validation enable risk -based proactive and real -time compli…
S115
Catalyzing Cyber: Stimulating Cybersecurity Market through Ecosystem Development — In 2020, Malaysia established a cybersecurity strategy with a five-year plan to create a secure, trusted, and resilient …
S116
Press Briefing by HMIT Ashwani Vaishnav on AI Impact Summit 2026 l Day 5 — Congratulations on the declaration, sir. I just wanted to know, could you give us names of some of the countries that ha…
S117
Press Briefing by HMIT Ashwani Vaishnav on AI Impact Summit 2026 l Day 5 — Congratulations on the declaration, sir. I just wanted to know, could you give us names of some of the countries that ha…
S118
How the EU’s GPAI Code Shapes Safe and Trustworthy AI Governance India AI Impact Summit 2026 — No, it’s an important viewpoint because there is this idea that governments need to act. They need to protect citizens. …
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Mr. Virendra Singh
3 arguments · 110 words per minute · 371 words · 201 seconds
Argument 1
Intelligent governance principle emphasizing human‑centered design, transparency, risk‑based regulation, and global cooperation (Virendra Singh)
EXPLANATION
Singh argues that AI is already reshaping governance and that the response should not be a binary choice between control and innovation. Instead, governance must be intelligent, embedding human‑centered design, transparency, accountability, risk‑based rules and international cooperation.
EVIDENCE
He states that the answer is not control versus innovation but intelligent governance, and outlines that AI governance should include human-centered design, transparency, accountability, risk-based regulations, global cooperation and adaptive policies [11-13].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The five core principles of intelligent governance-human-centred design, transparency, accountability, risk-based regulation and global cooperation-are outlined in [S1] and reinforced in [S2].
MAJOR DISCUSSION POINT
Intelligent governance principle
AGREED WITH
Mr. Ashish Shelar, Mr. Suresh Sethi, Mr. Devroop Dhar
DISAGREED WITH
Mr. Ashish Shelar, Mr. Praveen Pardeshi, Mr. Yashasvi Yadav, Mr. Ranjeet Goswami
Argument 2
AI governance faces a paradox: regulating too slowly risks harm, while over‑regulation risks stagnation.
EXPLANATION
Singh warns that the speed of policy action must be balanced; delayed regulation can allow AI‑related harms to materialise, whereas heavy‑handed rules can choke innovation and economic benefits.
EVIDENCE
He repeats “Regulate too slowly and risk harm” twice and adds “Regulate too heavily and risk stagnation,” highlighting the tension between speed and stringency of regulation [8-10].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The regulation-speed trade-off and the need to balance innovation with protection are discussed in [S2].
MAJOR DISCUSSION POINT
Governance paradox – regulation speed trade‑off
Argument 3
Because AI does not recognize borders, interoperable global frameworks and cooperative oversight are required.
EXPLANATION
Singh argues that AI’s borderless nature necessitates shared safety standards, interoperable frameworks and coordinated international governance to manage risks that transcend national jurisdictions.
EVIDENCE
He states “AI does not recognize borders,” calls for “interoperable frameworks, shared safety standards, and cooperative oversight mechanisms,” and stresses the shift from national-level policies to coordinated global norms [14-16][18-20].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Singh’s point that AI is borderless and requires interoperable international frameworks is highlighted in [S1].
MAJOR DISCUSSION POINT
Borderless AI requires global cooperative frameworks
Mr. Ashish Shelar
4 arguments · 99 words per minute · 586 words · 351 seconds
Argument 1
Five‑pillar model for safe, secure, smart governance: compute, public data, state AI governance, standards, capacity building (Ashish Shelar)
EXPLANATION
Shelar proposes a framework consisting of five essential pillars that together enable a safe, secure and smart AI‑driven governance stack. The pillars are compute and cloud at scale, high‑quality public data sets, state AI governance, interoperability and standards, and capacity building.
EVIDENCE
He explains that a safe, secure, smart governance stack must rest on five pillars: compute and cloud at scale; high-quality public data sets; state AI governance; interoperability and standards; and capacity building [57-58].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The five-pillar governance stack (compute, public data, state AI governance, standards, capacity building) is described in [S2] and echoed in the panel summary [S22].
MAJOR DISCUSSION POINT
Five‑pillar governance framework
AGREED WITH
Mr. Virendra Singh, Mr. Suresh Sethi, Mr. Devroop Dhar
DISAGREED WITH
Mr. Virendra Singh, Mr. Praveen Pardeshi, Mr. Yashasvi Yadav, Mr. Ranjeet Goswami
Argument 2
Deployment of AI‑powered Mahak Crime OS and the Mahaiti cloud‑native infrastructure to integrate services and enable real‑time decision making (Ashish Shelar)
EXPLANATION
Shelar highlights Maharashtra’s implementation of an AI‑driven crime‑operating system and a cloud‑native platform that integrates multiple public services, allowing faster response and real‑time decision making across domains such as recruitment, property mapping and urban dashboards.
EVIDENCE
He cites the AI-powered Mahak Crime OS showcased by Microsoft’s Satya Nadella, which has transformed crime prevention, detection and investigation with faster response and transparent processes, and the Mahaiti cloud-native, modular, API-driven infrastructure that uses AI to integrate services, predict needs and respond in real-time across smart recruitment, property mapping, traffic, weather and civic issues [47-50].
MAJOR DISCUSSION POINT
AI‑driven public service infrastructure
AGREED WITH
Mr. Virendra Singh, Mr. Suresh Sethi, Mr. Yashasvi Yadav
DISAGREED WITH
Dr. Amit Kapoor
Argument 3
Partnerships with global technology leaders such as Microsoft to accelerate AI adoption in public safety and governance (Ashish Shelar)
EXPLANATION
Shelar emphasizes the importance of collaborating with leading global technology firms to bring cutting‑edge AI capabilities into government operations, citing a concrete partnership with Microsoft for the Mahak Crime OS.
EVIDENCE
He notes that the Mahak Crime OS was showcased by Microsoft’s Satya Nadella, illustrating a partnership with a global technology leader to accelerate AI adoption in public safety and governance [47].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Collaboration with Microsoft on AI-driven public-safety solutions is mentioned in [S2] and further illustrated by the Infosys-Microsoft partnership in [S23].
MAJOR DISCUSSION POINT
Global tech partnerships
AGREED WITH
Mr. Virendra Singh, Mr. Ranjeet Goswami
Argument 4
Combating AI‑enabled digital threats such as disinformation, deepfakes and fraud requires robust cybersecurity, digital literacy and a hybrid verification ecosystem.
EXPLANATION
Shelar highlights that AI can amplify misinformation and financial scams, which can destabilise democracies and markets, and proposes a three‑pronged response that blends security measures, citizen education and verification tools.
EVIDENCE
He mentions “Disinformation, deepfakes, AI-generated fraud and cyberattacks can undermine democracies, markets and communities” and outlines the response as “robust cyber security, digital literacy and critical thinking, hybrid verification ecosystem” [61-64].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The risks posed by deepfakes, disinformation and AI-generated fraud and the need for cybersecurity, digital literacy and verification mechanisms are covered in [S24], [S25] and [S26].
MAJOR DISCUSSION POINT
Combating AI‑enabled digital threats
Mr. Suresh Sethi
2 arguments · 162 words per minute · 643 words · 237 seconds
Argument 1
Call for explainable, auditable AI with human redress mechanisms to prevent inclusion/exclusion errors (Suresh Sethi)
EXPLANATION
Sethi stresses that AI systems used in governance must be transparent and accountable, providing explanations for decisions, audit trails, and a human‑in‑the‑loop for redress, especially to avoid inclusion (leakage) and exclusion errors in benefit delivery.
EVIDENCE
He argues that AI should be explainable, so decisions to deny or grant benefits must be clearly justified, that an audit layer is required to track actions, and that a human redress pathway is essential because machines cannot handle everything alone [184-190].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The necessity for explainable, auditable AI systems with human-in-the-loop redress is emphasized in [S27] and supported by inclusion considerations in [S31].
MAJOR DISCUSSION POINT
Explainable and auditable AI with human redress
AGREED WITH
Mr. Virendra Singh, Ms. Beena Sarkar
DISAGREED WITH
Mr. Praveen Pardeshi
Argument 2
Use of AI over population‑scale Digital Public Infrastructure to enable dynamic eligibility, predictive subsidies, and reduction of inclusion/exclusion errors (Suresh Sethi)
EXPLANATION
Sethi describes how AI layered on top of India’s massive digital public infrastructure can shift from static identity to dynamic eligibility, allowing predictive governance that targets subsidies accurately and minimizes both leakage and exclusion.
EVIDENCE
He explains that AI can move from static identity to dynamic eligibility using digitally verifiable credentials, enabling predictive governance to trigger benefits when income distress is detected, and that AI can help reduce inclusion (leakage) and exclusion errors while requiring explainability, auditability and human redress [166-190].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
AI layered on top of India’s population-scale Digital Public Infrastructure for dynamic eligibility and predictive governance is discussed in [S30] and further contextualised in [S31].
MAJOR DISCUSSION POINT
AI‑enabled predictive governance on DPI
AGREED WITH
Mr. Virendra Singh, Mr. Ashish Shelar, Mr. Yashasvi Yadav
Ms. Beena Sarkar
1 argument · 116 words per minute · 592 words · 306 seconds
Argument 1
Advocacy for ethical AI safeguards, gender‑bias assessment, and a safety institute to vet new devices (Beena Sarkar)
EXPLANATION
Sarkar calls for systematic ethical safeguards in AI, highlighting the need to assess gender bias and to establish a dedicated safety institute that evaluates new AI‑enabled devices for potential threats to privacy and safety, especially for women.
EVIDENCE
She describes her work with the Women for Ethical AI South Asia chapter, notes the example of smart glasses that raised privacy concerns, urges policymakers to let the India Safety Institute (established in 2025) vet new devices for potential threats to public safety and gender equity, and stresses evaluating whether a technology endangers 50% of the population before deployment [221-259].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Sarkar’s call for an India Safety Institute to evaluate new AI-enabled devices and address gender-bias and safety concerns is documented in [S1].
MAJOR DISCUSSION POINT
Ethical AI and gender‑bias safeguards
AGREED WITH
Mr. Virendra Singh, Mr. Suresh Sethi
Mr. Devroop Dhar
1 argument · 46 words per minute · 393 words · 510 seconds
Argument 1
Emphasis on multi‑stakeholder dialogue and coordinated oversight as essential to effective AI governance (Devroop Dhar)
EXPLANATION
Dhar underscores that effective AI governance requires ongoing dialogue among governments, industry, academia and civil society, coupled with coordinated oversight mechanisms to ensure responsible AI deployment.
MAJOR DISCUSSION POINT
Multi‑stakeholder dialogue for AI governance
AGREED WITH
Mr. Ashish Shelar, Mr. Praveen Pardeshi, Dr. Amit Kapoor
Mr. Praveen Pardeshi
2 arguments · 171 words per minute · 601 words · 209 seconds
Argument 1
Investment in green energy and AI‑focused capacity building for government staff; creation of “Maha GPT” to parse complex regulations (Praveen Pardeshi)
EXPLANATION
Pardeshi argues that sustainable AI deployment depends on green energy supply and building AI competence among civil servants, exemplified by the development of a specialized “Maha GPT” tool to help officials and citizens navigate complex government orders.
EVIDENCE
He mentions pushing for over 19,000 MW of solar energy as fuel for future AI [81-84], describes capacity-building initiatives such as AI university courses for staff and online courses on IGOT [85-86], and details the Maha GPT project that uses a small language model to let government officers and citizens query the latest positions on regulations, orders and Supreme Court rulings [108-114].
MAJOR DISCUSSION POINT
Green energy, capacity building, and Maha GPT
DISAGREED WITH
Mr. Virendra Singh, Mr. Ashish Shelar, Mr. Yashasvi Yadav, Mr. Ranjeet Goswami
Argument 2
Establishment of a State Data Authority to monetize public data, e.g., health records, while protecting national interests (Praveen Pardeshi)
EXPLANATION
Pardeshi proposes a State Data Authority that would consolidate valuable public data—such as health records—to enable commercial use that benefits the government rather than external actors, thereby turning data into a strategic asset.
EVIDENCE
He cites examples from pharmaceuticals, noting that India’s large health-data pool is valuable to pharma companies, and explains that the State Data Authority is working to make this data a single source of proof and to commercialise it for the government’s benefit rather than allowing free external exploitation [98-102].
MAJOR DISCUSSION POINT
Monetising public data through State Data Authority
Mr. Yashasvi Yadav
3 arguments · 129 words per minute · 756 words · 350 seconds
Argument 1
Maharashtra Cyber Security Project using AI for dark‑web monitoring, threat analysis, and rapid response, resulting in large fraud recoveries and lives saved (Yashasvi Yadav)
EXPLANATION
Yadav describes a state‑run cyber security initiative that leverages AI tools to monitor the dark web, analyse threats and provide a single‑call helpline, leading to significant financial recoveries and the prevention of suicides among victims of cyber‑bullying and sextortion.
EVIDENCE
He explains that the project combines top-tier AI tools, consultants and police officers under one roof, offers a 1930 helpline serviced by over 150 consultants, and reports that within six months more than ₹1,000 crore of fraud money was frozen and returned, while 70 young women were rescued from cyber-bullying and sextortion, saving their lives [121-136].
MAJOR DISCUSSION POINT
AI‑driven cyber security and fraud recovery
AGREED WITH
Mr. Virendra Singh, Mr. Ashish Shelar, Mr. Suresh Sethi
DISAGREED WITH
Mr. Virendra Singh, Mr. Ashish Shelar, Mr. Praveen Pardeshi, Mr. Ranjeet Goswami
Argument 2
AI‑driven threat‑intelligence tools (Luminar, Cognite, Pathfinder) that thwart nation‑state cyber attacks and mitigate emerging cyber‑warfare risks (Yashasvi Yadav)
EXPLANATION
Yadav highlights the use of advanced AI‑based threat‑intelligence platforms that have successfully blocked large‑scale nation‑state cyber attacks during the Pahalgam incident, demonstrating AI’s role in national security.
EVIDENCE
He references the “Echoes of Pahalgam” report, noting that AI tools such as Luminar, Cognite and Pathfinder were employed to thwart over one million cyber attacks launched by nation-state actors from several countries, with traces still visible in the dark web [139-142].
MAJOR DISCUSSION POINT
AI threat‑intelligence against cyber warfare
Argument 3
Warning that quantum computing could break current encryption standards (RSA, blockchain) and destabilize financial systems, urging urgent preparedness (Yashasvi Yadav)
EXPLANATION
Yadav warns that the advent of quantum computing threatens to break widely used encryption schemes, potentially compromising banking, blockchain and financial stability, and calls for accelerated national investment to catch up with global quantum efforts.
EVIDENCE
He explains that quantum computers can process hundreds of millions of qubits at speeds that could break RSA, blockchain and other encryptions within seconds, jeopardising financial systems, and notes that countries like China have invested $15-20 billion while India has only invested $1 billion, urging rapid preparedness [145-150].
MAJOR DISCUSSION POINT
Quantum computing risks to encryption
Mr. Ranjeet Goswami
3 arguments · 156 words per minute · 361 words · 138 seconds
Argument 1
Push for a unified citizen database (Aadhaar integration) across departments to avoid siloed operations (Ranjeet Goswami)
EXPLANATION
Goswami argues that all government departments should share a common citizen database, leveraging Aadhaar, to ensure citizens are treated uniformly across the state rather than as separate departmental entities.
EVIDENCE
He states that each department should have a common database of people, that the Aadhaar database exists but not every department is connected to it, and that linking all departments to Aadhaar is a key step toward unified citizen services [210-215].
MAJOR DISCUSSION POINT
Unified citizen database across departments
Argument 2
Advocacy for large tech firms (e.g., TCS) to work holistically with government, sharing common data platforms and embedding intelligence across all departments (Ranjeet Goswami)
EXPLANATION
Goswami calls for large technology companies to collaborate with the government in a holistic manner, providing shared data platforms and embedding AI intelligence throughout all departmental processes to achieve welfare and happiness for all.
EVIDENCE
He emphasizes taking a holistic view of AI’s purpose for welfare, stresses that each department should not be isolated but share a common database, mentions the need for companies like TCS to integrate intelligence across departments, and cites the importance of common data platforms and holistic collaboration [200-217].
MAJOR DISCUSSION POINT
Holistic collaboration with large tech firms
Argument 3
AI should be framed as a purpose‑driven tool for welfare and happiness, not merely a technical efficiency enhancer.
EXPLANATION
Goswami stresses that the ultimate aim of AI deployment is to deliver societal welfare and happiness, urging a shift from viewing AI as a pure efficiency device to a means of achieving broader human well‑being.
EVIDENCE
He states that “AI is not a technical tool which is fundamentally going to make the governance more efficient. It is fundamentally meant for how to bring the benefit, welfare and happiness to the community at large” [204-206].
MAJOR DISCUSSION POINT
AI as purpose‑driven tool for welfare
Dr. Amit Kapoor
3 arguments · 202 words per minute · 911 words · 269 seconds
Argument 1
Leveraging AI to address nutrition, water, sanitation, and education challenges in Tier‑2 and Tier‑3 cities; highlighting gaps in internet connectivity and skill development (Amit Kapoor)
EXPLANATION
Kapoor points out that AI can be used to tackle basic public‑health and infrastructure issues in less‑developed regions, but stresses that inadequate internet speed, low digital skills and poor infrastructure currently limit such impact.
EVIDENCE
He notes that 50 % of Maharashtra’s population is malnourished, and proposes using AI to assess nutrition, water and sanitation at PIN-code level; he also highlights that only 20 % of the workforce has high-skill levels, broadband speeds average 58 Mbps in Mumbai, and that these gaps hinder AI’s potential in Tier-2 and Tier-3 cities [292-300].
MAJOR DISCUSSION POINT
AI for basic services in underserved cities
DISAGREED WITH
Mr. Ashish Shelar
Argument 2
Call for extensive education, upskilling, and affordable broadband to ensure equitable AI benefits across the state (Amit Kapoor)
EXPLANATION
Kapoor argues that scaling AI benefits requires massive investment in digital skills, improving internet connectivity and making AI services affordable, especially for the large portion of the workforce that currently lacks advanced skills.
EVIDENCE
He cites data showing only 20 % of the 9-crore workforce is at skill levels 3-4, points out the need for better internet speed and broadband, stresses the urgency of building infrastructure such as data centres, and calls for cost-effective AI services to reach the masses [270-283].
MAJOR DISCUSSION POINT
Education, upskilling and broadband for equitable AI
Argument 3
Unchecked AI can become a societal dumping ground, fostering harmful content consumption and mental‑health risks, thus requiring safeguards.
EXPLANATION
Kapoor warns that AI may exacerbate doom‑scrolling, degrade children’s well‑being and turn society into “bonobos,” calling for education reforms and regulatory measures to mitigate these negative social impacts.
EVIDENCE
He remarks that “AI is going to make bonobos out of us,” cites “doom scrolling” and its effect on children, and stresses the need to set the education system right to prevent these harms [300-304].
MAJOR DISCUSSION POINT
AI as societal dumping ground and mental‑health risk
Moderator
1 argument · 47 words per minute · 272 words · 342 seconds
Argument 1
Facilitating inclusive, multi‑stakeholder dialogue is essential for effective AI governance.
EXPLANATION
The moderator repeatedly thanks and welcomes a wide range of participants – government leaders, industry executives, academics and civil society – and explicitly invites them to share the stage, signalling that collaborative discussion across sectors is a cornerstone of responsible AI policy.
EVIDENCE
He thanks the honored leader and requests the Minister of IT to give a keynote address, then later calls all panelists to join the stage, emphasizing the inclusion of diverse voices in the conversation [34-35][68-73].
MAJOR DISCUSSION POINT
Multi‑stakeholder dialogue as foundation for AI governance
Agreements
Agreement Points
A comprehensive, human‑centered AI governance framework that is transparent, accountable, risk‑based and supported by global cooperation.
Speakers: Mr. Virendra Singh, Mr. Ashish Shelar, Mr. Suresh Sethi, Mr. Devroop Dhar
Intelligent governance principle emphasizing human‑centered design, transparency, risk‑based regulation, and global cooperation (Virendra Singh)
Five‑pillar model for safe, secure, smart governance: compute, public data, state AI governance, standards, capacity building (Ashish Shelar)
Call for explainable, auditable AI with human redress mechanisms to prevent inclusion/exclusion errors (Suresh Sethi)
Emphasis on multi‑stakeholder dialogue and coordinated oversight as essential to effective AI governance (Devroop Dhar)
All four speakers stress that AI governance must move beyond a binary control-vs-innovation debate to an intelligent, human-centred regime that ensures transparency, accountability, risk-based rules and international cooperation [11-13][57-58][184-190][75].
POLICY CONTEXT (KNOWLEDGE BASE)
This aligns with emerging global AI policy consensus calling for human-centred, transparent, accountable and risk-based governance and multilateral cooperation, as outlined in the Global AI Policy Framework and discussions on AI governance at international forums [S68][S61][S47][S46].
Capacity building and skill development are essential for effective AI adoption in government.
Speakers: Mr. Ashish Shelar, Mr. Praveen Pardeshi, Dr. Amit Kapoor, Mr. Devroop Dhar
Five‑pillar model for safe, secure, smart governance … capacity building (Ashish Shelar)
Investment in green energy and AI‑focused capacity building for government staff; creation of “Maha GPT” (Praveen Pardeshi)
Call for extensive education, upskilling, and affordable broadband to ensure equitable AI benefits (Amit Kapoor)
Emphasis on multi‑stakeholder dialogue and coordinated oversight as essential to effective AI governance (Devroop Dhar)
Shelar highlights capacity building as a pillar of his framework, Pardeshi describes AI university courses and online training for civil servants, Kapoor points to the low skill levels of the workforce and the need for massive upskilling, while Dhar underscores the need for ongoing multi-stakeholder engagement to sustain these efforts [57-58][85-86][270-274][75].
POLICY CONTEXT (KNOWLEDGE BASE)
Multiple policy briefs stress that capacity building at technical and policy levels is a prerequisite for AI adoption in the public sector, reflected in UN-DPF recommendations and capacity-building programmes highlighted in several forums [S64][S65][S52][S66].
AI should be deployed to improve public service delivery and enable smart, predictive governance.
Speakers: Mr. Virendra Singh, Mr. Ashish Shelar, Mr. Suresh Sethi, Mr. Yashasvi Yadav
Intelligent governance principle … decision intelligence, public service delivery at scale (Virendra Singh)
Deployment of AI‑powered Mahak Crime OS and the Mahaiti cloud‑native infrastructure to integrate services and enable real‑time decision making (Ashish Shelar)
Use of AI over population‑scale Digital Public Infrastructure to enable dynamic eligibility, predictive subsidies, and reduction of inclusion/exclusion errors (Suresh Sethi)
Maharashtra Cyber Security Project using AI for dark‑web monitoring, threat analysis, and rapid response, resulting in large fraud recoveries and lives saved (Yashasvi Yadav)
All four speakers illustrate concrete AI applications that enhance governance – from Singh’s decision-intelligence vision, Shelar’s crime-OS and cloud platform, Sethi’s AI-layered DPI for dynamic benefits, to Yadav’s AI-driven cyber-security helpline that has recovered fraud money and saved lives [5][47-50][166-190][121-136].
POLICY CONTEXT (KNOWLEDGE BASE)
This mirrors the consensus that AI can transform public services, enabling personalized, efficient and predictive delivery, as emphasized in citizen-centric AI discussions and case studies of digital public infrastructure for agriculture [S46][S47][S48][S54].
AI’s borderless nature requires international cooperation, interoperable standards, and partnerships with global technology firms.
Speakers: Mr. Virendra Singh, Mr. Ashish Shelar, Mr. Ranjeet Goswami
Because AI does not recognize borders, interoperable frameworks, shared safety standards, and cooperative oversight are required (Virendra Singh)
Partnerships with global technology leaders such as Microsoft to accelerate AI adoption in public safety and governance (Ashish Shelar)
Advocacy for large tech firms (e.g., TCS) to work holistically with government, sharing common data platforms and embedding intelligence across all departments (Ranjeet Goswami)
Singh stresses the need for global, interoperable frameworks, Shelar cites a concrete partnership with Microsoft, and Goswami calls for holistic collaboration with large tech companies, all reflecting consensus on the necessity of cross-border cooperation for AI governance [14-16][18-20][47][200-217].
POLICY CONTEXT (KNOWLEDGE BASE)
The borderless character of AI is repeatedly cited as a driver for international standards, interoperable frameworks and public-private partnerships in global AI governance dialogues [S60][S61][S62][S59][S68].
Ethical safeguards, explainability, and protection of human rights are central to AI deployment.
Speakers: Mr. Virendra Singh, Mr. Suresh Sethi, Ms. Beena Sarkar
Intelligent governance principle … human‑centered design, transparency and accountability (Virendra Singh)
Call for explainable, auditable AI with human redress mechanisms to prevent inclusion/exclusion errors (Suresh Sethi)
Advocacy for ethical AI safeguards, gender‑bias assessment, and a safety institute to vet new devices (Beena Sarkar)
Singh’s call for human-centred, transparent AI, Sethi’s demand for explainable and auditable systems with human redress, and Sarkar’s push for gender-bias checks and a dedicated safety institute all converge on the need for ethical, rights-respecting AI frameworks [13][184-190][221-259].
POLICY CONTEXT (KNOWLEDGE BASE)
Ethical safeguards, explainability and human-rights protection are core pillars of AI policy roadmaps and governance frameworks discussed in multistakeholder forums and AI ethics declarations [S47][S48][S67][S51].
Similar Viewpoints
Both stress that AI systems must be transparent, accountable and include human oversight to protect citizens from erroneous or biased outcomes [13][184-190].
Speakers: Mr. Virendra Singh, Mr. Suresh Sethi
Intelligent governance principle emphasizing human‑centered design, transparency, risk‑based regulation, and global cooperation (Virendra Singh)
Call for explainable, auditable AI with human redress mechanisms to prevent inclusion/exclusion errors (Suresh Sethi)
Both advocate for structured, multi‑layered partnerships between government and major technology providers to build a unified, intelligent governance stack [57-58][200-217].
Speakers: Mr. Ashish Shelar, Mr. Ranjeet Goswami
Five‑pillar model for safe, secure, smart governance … (Ashish Shelar)
Advocacy for large tech firms to work holistically with government, sharing common data platforms and embedding intelligence (Ranjeet Goswami)
Both highlight that without adequate infrastructure—whether energy, digital skills, or broadband—AI initiatives cannot achieve their intended societal impact [85-86][270-274].
Speakers: Mr. Praveen Pardeshi, Dr. Amit Kapoor
Investment in green energy and AI‑focused capacity building for government staff; creation of “Maha GPT” (Praveen Pardeshi)
Call for extensive education, upskilling, and affordable broadband to ensure equitable AI benefits (Amit Kapoor)
Both warn that AI, if left unchecked, poses significant societal risks—Yadav focusing on cyber‑security threats and Kapoor on mental‑health and misinformation—underscoring the need for safeguards [121-136][300-304].
Speakers: Mr. Yashasvi Yadav, Dr. Amit Kapoor
Maharashtra Cyber Security Project … AI‑driven threat‑intelligence tools (Yashasvi Yadav)
Unchecked AI can become a societal dumping ground, fostering harmful content consumption and mental‑health risks (Amit Kapoor)
Unexpected Consensus
Treating public data as a strategic asset for governance and economic value.
Speakers: Mr. Praveen Pardeshi, Mr. Suresh Sethi
Establishment of a State Data Authority to monetize public data, e.g., health records, while protecting national interests (Praveen Pardeshi)
Use of AI over population‑scale Digital Public Infrastructure to enable dynamic eligibility, predictive subsidies, and reduction of inclusion/exclusion errors (Suresh Sethi)
While Pardeshi focuses on creating a commercial data authority and Sethi on leveraging DPI for precise benefit delivery, both converge on the view that public data should be systematically harnessed as a valuable, governed resource rather than an uncontrolled by-product [98-102][166-190].
POLICY CONTEXT (KNOWLEDGE BASE)
Recognising data as a strategic public asset and balancing its economic potential with stewardship and protection is reflected in data-for-development policies and debates on data monetisation versus open-data safeguards [S43][S49][S50][S45].
Overall Assessment

The panel shows strong convergence on four core themes: (1) the need for an intelligent, human‑centred AI governance framework with transparency, accountability and global cooperation; (2) extensive capacity building and skill development as prerequisites for AI rollout; (3) concrete AI applications that enhance public services, from crime prevention to predictive welfare delivery; (4) ethical safeguards, explainability and protection of human rights. These shared positions cut across government, industry, and academia, indicating a high level of consensus.

High consensus – the repeated alignment across speakers suggests that future policy initiatives are likely to prioritize comprehensive governance structures, capacity development, and ethical safeguards, facilitating coordinated action at state, national, and international levels.

Differences
Different Viewpoints
Level of governance focus – global cooperative frameworks versus state‑centric implementation
Speakers: Mr. Virendra Singh, Mr. Ashish Shelar, Mr. Praveen Pardeshi, Mr. Yashasvi Yadav, Mr. Ranjeet Goswami
Intelligent governance principle emphasizing human‑centered design, transparency, risk‑based regulation, and global cooperation (Virendra Singh)
Five‑pillar model for safe, secure, smart governance: compute, public data, state AI governance, standards, capacity building (Ashish Shelar)
Investment in green energy and AI‑focused capacity building for government staff; creation of “Maha GPT” to parse complex regulations (Praveen Pardeshi)
Maharashtra Cyber Security Project using AI for dark‑web monitoring, threat analysis, and rapid response, resulting in large fraud recoveries and lives saved (Yashasvi Yadav)
Push for a unified citizen database (Aadhaar integration) across departments to avoid siloed operations (Ranjeet Goswami)
Singh stresses that AI’s borderless nature requires interoperable global frameworks and cooperative oversight [14-16][18-20], while Shelar, Pardeshi, Yadav and Goswami focus on state-level solutions such as Maharashtra’s five-pillar stack, green-energy-backed AI capacity building, a state-run cyber security project and a unified Aadhaar database [57-58][81-84][85-86][121-136][210-215]. This creates a tension between a global-cooperative vision and a state-centric implementation approach.
POLICY CONTEXT (KNOWLEDGE BASE)
The tension between global cooperative AI governance and state-centric implementation is highlighted in analyses of multilateral AI policy approaches that stress inclusive, cross-border frameworks while acknowledging national sovereignty concerns [S61][S68][S62].
Assessment of Maharashtra’s digital infrastructure and connectivity
Speakers: Mr. Ashish Shelar, Dr. Amit Kapoor
Deployment of AI‑powered Mahak Crime OS and the Mahaiti cloud‑native infrastructure to integrate services and enable real‑time decision making (Ashish Shelar)
Leveraging AI to address nutrition, water, sanitation, and education challenges in Tier‑2 and Tier‑3 cities; highlighting gaps in internet connectivity and skill development (Amit Kapoor)
Shelar presents Maharashtra as having a sophisticated cloud-native platform that already powers real-time services across domains [48-50], whereas Kapoor points out that even within the state broadband speeds average only 58 Mbps and that skill levels are low, limiting AI’s impact in Tier-2/3 areas [279-283][270-274][292-300]. The two speakers therefore disagree on the current state of digital infrastructure in Maharashtra.
POLICY CONTEXT (KNOWLEDGE BASE)
While specific state-level data is scarce, broader assessments of India’s digital infrastructure underline challenges of connectivity and readiness that inform sub-national evaluations such as Maharashtra’s [S55].
Approach to public data – monetisation versus strict protection and auditability
Speakers: Mr. Praveen Pardeshi, Mr. Suresh Sethi
Establishment of a State Data Authority to monetise public data, e.g., health records, while protecting national interests (Praveen Pardeshi)
Call for explainable, auditable AI with human redress mechanisms to prevent inclusion/exclusion errors (Suresh Sethi)
Pardeshi proposes turning public health and other datasets into commercial assets through a State Data Authority, arguing this safeguards national interests while generating revenue [98-102]. Sethi, by contrast, stresses the need for explainability, audit trails and human redress to avoid leakage or exclusion in benefit delivery, focusing on protecting individuals rather than commercialising data [184-190]. This reflects a disagreement on whether public data should be primarily a revenue source or a tightly guarded public good.
POLICY CONTEXT (KNOWLEDGE BASE)
Ongoing policy debates contrast data monetisation models that favour large actors with calls for strict protection, auditability and open-data principles, as discussed in data governance literature [S49][S50][S43].
Unexpected Differences
Contrasting claims about Maharashtra’s digital readiness
Speakers: Mr. Ashish Shelar, Dr. Amit Kapoor
Deployment of AI‑powered Mahak Crime OS and the Mahaiti cloud‑native infrastructure to integrate services and enable real‑time decision making (Ashish Shelar)
Leveraging AI to address nutrition, water, sanitation, and education challenges in Tier‑2 and Tier‑3 cities; highlighting gaps in internet connectivity and skill development (Amit Kapoor)
Shelar portrays Maharashtra as already possessing a sophisticated, real-time AI-enabled infrastructure [48-50], while Kapoor points out that even within Maharashtra broadband speeds are modest (≈58 Mbps) and that skill levels are low, limiting AI’s reach in less-urban areas [279-283][270-274]. These starkly different assessments of the state’s digital readiness were unexpected, given that both speakers represent the same state government.
POLICY CONTEXT (KNOWLEDGE BASE)
Divergent assessments of digital readiness at sub-national level echo broader discussions on digital divide and readiness metrics in India, as highlighted in regional AI-ready infrastructure reports [S55].
Quantum computing risk raised only by Yashasvi Yadav
Speakers: Mr. Yashasvi Yadav, All other speakers
Warning that quantum computing could break current encryption standards (RSA, blockchain) and destabilize financial systems, urging urgent preparedness (Yashasvi Yadav)
No other speaker addresses quantum computing or its systemic risks
Yadav uniquely foregrounds quantum computing as an imminent threat to encryption and financial stability [145-150], a topic absent from the rest of the discussion, creating an unexpected divergence in risk focus.
POLICY CONTEXT (KNOWLEDGE BASE)
Concerns about quantum computing’s impact on cybersecurity and AI safety have been raised in UN Security Council briefings and expert panels, underscoring the relevance of the risk highlighted [S56][S57].
Overall Assessment

The panel shows broad consensus on the need for AI‑enabled governance, capacity building and human‑centered oversight. However, substantive disagreements emerge around the appropriate scale of governance (global vs state), the actual state of digital infrastructure in Maharashtra, and the treatment of public data (commercialisation versus strict protection). These divergences reflect differing priorities—strategic sovereignty versus rapid deployment, optimism about existing infrastructure versus caution about connectivity gaps, and revenue generation versus privacy safeguards.

Moderate to high. While participants share common goals, the contrasting visions on governance scope, infrastructure readiness, and data policy could impede coordinated action unless reconciled. The implications are that policy formulation will need to balance global cooperation with state autonomy, invest in genuine connectivity upgrades, and establish clear rules on data monetisation to maintain public trust.

Partial Agreements
All three speakers agree that AI governance must embed transparency, accountability and human oversight, but Singh stresses global cooperation while Shelar focuses on a state‑level five‑pillar stack and Sethi highlights auditability and redress at the implementation level [11-13][57-58][184-190].
Speakers: Mr. Virendra Singh, Mr. Ashish Shelar, Mr. Suresh Sethi
Intelligent governance principle emphasizing human‑centered design, transparency, risk‑based regulation, and global cooperation (Virendra Singh)
Five‑pillar model for safe, secure, smart governance: compute, public data, state AI governance, standards, capacity building (Ashish Shelar)
Call for explainable, auditable AI with human redress mechanisms to prevent inclusion/exclusion errors (Suresh Sethi)
All three stress the importance of building capacity for AI—Singh through risk‑based regulation and human‑centered design, Shelar via a dedicated capacity‑building pillar, and Pardeshi via AI university courses and the Maha GPT tool—but differ on the primary vehicle (regulatory design, pillar framework, or energy‑linked training) [11-13][57-58][81-86][108-114].
Speakers: Mr. Virendra Singh, Mr. Ashish Shelar, Mr. Praveen Pardeshi
Intelligent governance principle emphasizing human‑centered design, transparency, risk‑based regulation, and global cooperation (Virendra Singh)
Five‑pillar model for safe, secure, smart governance: compute, public data, state AI governance, standards, capacity building (Ashish Shelar)
Investment in green energy and AI‑focused capacity building for government staff; creation of “Maha GPT” to parse complex regulations (Praveen Pardeshi)
Both the moderator and Dhar underline that inclusive, multi‑stakeholder dialogue is a cornerstone of AI governance, reinforcing the same principle throughout the session [34-35][68-73][115-116].
Speakers: Moderator, Mr. Devroop Dhar
Facilitating inclusive, multi‑stakeholder dialogue is essential for effective AI governance (Moderator)
Emphasis on multi‑stakeholder dialogue and coordinated oversight as essential to effective AI governance (Devroop Dhar)
Takeaways
Key takeaways
AI governance must be intelligent, human‑centered, transparent, accountable, risk‑based and globally coordinated (Virendra Singh).
Maharashtra is building a five‑pillar AI infrastructure: compute/cloud, high‑quality public data, state AI governance, standards/interoperability, and capacity building (Ashish Shelar).
AI can transform public service delivery, law enforcement, cyber security and predictive governance when layered on population‑scale Digital Public Infrastructure (Suresh Sethi, Yashasvi Yadav).
Capacity building for government staff and creation of AI‑focused educational resources (e.g., AI University, online courses) are essential (Praveen Pardeshi).
Ethical safeguards, gender‑bias assessment and a dedicated safety institute are needed to vet new AI‑enabled devices and applications (Beena Sarkar).
Quantum computing poses a near‑term risk to current encryption and financial stability; urgent preparedness is required (Yashasvi Yadav).
Collaboration between tech firms, academia and government (e.g., Microsoft, TCS, IIT Bombay) is critical for interoperable data platforms and AI deployment (Ashish Shelar, Ranjeet Goswami).
Infrastructure gaps—especially broadband quality, affordable connectivity and skill deficits in Tier‑2/3 cities—must be addressed to ensure equitable AI benefits (Amit Kapoor).
Resolutions and action items
Launch and pilot “Maha GPT” to provide AI‑driven query access to complex government orders for officials and citizens.
Operationalise the State Data Authority to create a single‑source proof of public data (e.g., health records) and develop monetisation mechanisms that retain value for India.
Scale green energy projects (≈19,000 MW solar) to meet AI compute demand.
Expand AI capacity‑building programmes for government employees via AI University and IGOT online courses.
Integrate Aadhaar‑based unified citizen identity across all state departments to eliminate data silos (TCS recommendation).
Establish the India Safety Institute (or similar body) to evaluate new AI‑enabled hardware/devices for gender‑bias and safety before market entry.
Adopt explainable, auditable AI models with human redress mechanisms for subsidy eligibility and benefit delivery.
Strengthen cyber‑security operations with AI threat‑intelligence tools (Luminar, Cognite, Pathfinder) and maintain the 1930 helpline for rapid response.
Develop a roadmap for quantum‑ready cryptographic standards and invest in quantum research capabilities.
Unresolved issues
Specific timeline and funding commitments for the quantum‑computing preparedness roadmap. Detailed plan for upgrading broadband speed and coverage in Tier‑2 and Tier‑3 cities; no concrete targets were set. Mechanisms for international coordination on AI standards and interoperability beyond statements of intent. How to ensure that monetisation of public data does not compromise privacy or lead to exploitation by private entities. Concrete metrics for measuring inclusion/exclusion errors and the effectiveness of AI‑driven predictive subsidies. Governance structure for the proposed safety institute and its authority over private sector device launches.
Suggested compromises
Adopt “intelligent governance” that balances regulation speed with innovation, avoiding both over‑regulation (stagnation) and under‑regulation (harm). Implement risk‑based, adaptive regulations rather than static, one‑size‑fits‑all policies. Combine AI automation with human oversight and redress pathways to maintain accountability while leveraging efficiency. Encourage voluntary industry standards and collaborative oversight mechanisms rather than imposing unilateral mandates. Use AI to enhance empathy and inclusivity in governance, ensuring technology serves citizens rather than distancing the state.
Thought Provoking Comments
Artificial intelligence is real and it is influencing governance, markets, public services and even geopolitics… The governance challenge becomes uniquely complex as AI introduces speed, opacity, concentration, global reach and dual use. This creates a governance paradox: Regulate too slowly and risk harm; Regulate too heavily and risk stagnation. The answer is intelligent governance with human‑centered design, transparency, risk‑based regulation and global cooperation.
Sets the conceptual frame for the entire panel, highlighting the tension between rapid AI advancement and the need for nuanced, adaptive regulation. It moves the debate from ‘whether AI will affect governance’ to ‘how governance must evolve with AI.’
All subsequent speakers referenced the need for balanced, adaptive policies. It prompted Ashish Shelar to present Maharashtra’s ‘living laboratory’ approach and led others to discuss concrete mechanisms (e.g., data sovereignty, explainability) that operationalise Singh’s governance paradox.
Speaker: Mr. Virendra Singh
Maharashtra has positioned itself as a living laboratory for AI in governance… Our partnership with global technology leaders, for example, our AI‑powered Mahak Crime OS… and we are building an intelligent government infrastructure – a cloud‑native, modular, API‑driven backbone that uses AI to integrate services, predict needs, and respond in real‑time.
Translates the abstract governance principles into a concrete, state‑level strategy, introducing the five‑pillar framework (compute, data, governance, standards, capacity building) and showcasing real‑world pilots.
Provided a tangible reference point for the panel, prompting Praveen Pardeshi to discuss capacity‑building and data monetisation, and Suresh Sethi to talk about embedding AI over existing digital infrastructure.
Speaker: Mr. Ashish Shelar
We are pushing on green energy, more than 19,000 MW of solar… capacity building through an AI university for government staff… and we are developing ‘Maha GPT’ – a small language model that can disentangle 150,000+ government orders so both officers and citizens can query the latest position on permits, court orders, etc.
Introduces two novel ideas: (1) linking AI’s energy needs to renewable expansion, and (2) treating government directives as a data product that can be queried via a specialised LLM, shifting the conversation from AI as a tool to AI as an information‑access layer.
Shifted the discussion toward practical implementation challenges (energy, data ownership) and inspired Suresh Sethi’s remarks on dynamic eligibility and explainable AI, as well as Yashasvi Yadav’s focus on security of such data.
Speaker: Mr. Praveen Pardeshi
Quantum computing can break the best of encryptions including RSA, blockchain and banking systems… China has invested $15‑20 bn while we have invested only $1 bn. We must prepare for the pros and cons before it becomes a security nightmare.
Expands the security conversation beyond current AI‑driven cyber tools to a looming, disruptive technology, highlighting a strategic gap in national preparedness.
Prompted the moderator to invite Dr. Anupam for a quantum‑AI perspective, and reinforced the urgency expressed by other panelists about robust cyber‑security, data sovereignty, and the need for forward‑looking policy.
Speaker: Mr. Yashasvi Yadav
Dynamic eligibility and predictive governance: with machine‑readable verifiable credentials AI can determine who is eligible for subsidies in real time, moving from static identity to proactive benefit delivery. But this requires explainable, auditable AI and a human redress pathway.
Bridges the gap between high‑level policy and operational detail, introducing the concept of AI‑driven predictive welfare while insisting on accountability mechanisms.
Deepened the technical discussion, leading Ranjit Goswami to stress common databases (Aadhaar) and Beena Sarkar to raise ethical concerns about how such predictive systems could embed bias.
Speaker: Mr. Suresh Sethi
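Sethi’s requirement that eligibility decisions be explainable, auditable, and backed by a human redress pathway can be sketched in miniature. The following is an illustrative sketch only: the credential fields, income ceiling, and rule names are invented for illustration and are not drawn from any actual Maharashtra system.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical rule-based eligibility check that pairs every automated
# decision with human-readable reasons (explainability), a step-by-step
# audit record (auditability), and pointers to a redress pathway.

@dataclass
class Decision:
    eligible: bool
    reasons: List[str]                          # explanation shown to the citizen
    audit_trail: List[str] = field(default_factory=list)  # log for auditors

def check_subsidy(credential: dict, income_ceiling: int = 250_000) -> Decision:
    reasons, audit = [], []
    audit.append(f"input credential fields: {sorted(credential)}")

    # Rule 1: declared income must be at or below the (invented) ceiling.
    income = credential.get("annual_income", float("inf"))
    income_ok = income <= income_ceiling
    audit.append(f"income rule: {income} <= {income_ceiling} -> {income_ok}")
    reasons.append("income within ceiling" if income_ok else
                   "declared income exceeds ceiling; you may appeal with updated records")

    # Rule 2: residency must be verified from the credential.
    resident_ok = credential.get("state_resident", False)
    audit.append(f"residency rule -> {resident_ok}")
    if not resident_ok:
        reasons.append("state residency not verified; contact the redress desk")

    return Decision(eligible=income_ok and resident_ok,
                    reasons=reasons, audit_trail=audit)

d = check_subsidy({"annual_income": 180_000, "state_resident": True})
```

Each decision carries its full reasoning, so a human reviewer on the redress pathway can reconstruct and, if warranted, overturn it.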
When we talk about ethics we must ask: does a new device threaten 50 % of the population? Smart glasses were recalled for privacy violations; similar hardware can be weaponised against women. We need a safety institute to vet technologies before they reach the market – a ‘Kali versus Rakta Bija’ assessment.
Brings gender‑focused ethical scrutiny to AI hardware, moving the conversation from algorithmic fairness to real‑world safety impacts, and introduces a vivid metaphor that captures the stakes.
Shifted the tone toward societal impact, prompting Amit Kapoor to discuss broader social harms (digital addiction, under‑employment) and reinforcing the need for human‑centric safeguards.
Speaker: Ms. Beena Sarkar
AI is a double‑edged sword: while it can boost nutrition monitoring, water‑sanitation, and education in Tier‑2/3 cities, it also risks becoming a ‘dumping ground’ that degrades cognition (doom‑scrolling, AI‑generated content). We must address skill gaps, internet connectivity, and affordability before AI widens inequality.
Offers a critical, holistic view that balances optimism with caution, highlighting systemic bottlenecks (skills, infrastructure) and societal risks, thereby reframing the discussion from technology deployment to inclusive capacity building.
Served as a concluding counter‑balance, prompting the panel to acknowledge the need for broader socio‑economic reforms and reinforcing earlier points about capacity building and ethical oversight.
Speaker: Dr. Amit Kapoor
Overall Assessment

The discussion was shaped by a series of escalating insights that moved from abstract governance dilemmas to concrete state initiatives, technical implementations, security foresight, ethical safeguards, and socio‑economic realities. Mr. Virendra Singh’s framing of the ‘governance paradox’ set the agenda, while Ashish Shelar’s living‑lab example gave it substance. Praveen Pardeshi’s focus on energy, capacity building, and a specialised LLM introduced practical challenges, which Suresh Sethi deepened with the notion of dynamic, explainable welfare delivery. Yashasvi Yadav’s quantum warning broadened the security horizon, prompting forward‑looking considerations. Beena Sarkar’s gender‑centric ethical critique and Amit Kapoor’s systemic critique of skill gaps and digital inequality added layers of social responsibility. Together, these pivotal comments redirected the conversation repeatedly—first toward policy design, then implementation, then risk management, and finally toward inclusive, human‑centered outcomes—ensuring the panel moved beyond hype to a nuanced, multi‑dimensional dialogue on AI governance.

Follow-up Questions
How can the state effectively monetize and protect large‑scale public data (e.g., health data) while ensuring it benefits India rather than foreign entities?
Pardeshi highlighted the need to ‘encash’ data at scale and warned that without safeguards India could become a target for external monetisation of its data.
Speaker: Praveen Pardeshi
What mechanisms are needed to ensure AI‑driven decision‑making (e.g., eligibility for subsidies) is explainable, auditable, and includes human redressal?
Sethi stressed that AI decisions must be transparent, auditable and have a human redress pathway to avoid inclusion/exclusion errors in welfare delivery.
Speaker: Suresh Sethi
How should India prepare for the security implications of quantum computing, including potential threats to encryption and financial systems?
Yadav warned that quantum computing could break current encryption standards and called for research and preparedness to mitigate this emerging risk.
Speaker: Yashasvi Yadav
What research and policy frameworks are required to assess and mitigate gender bias and safety concerns in emerging AI‑enabled hardware (e.g., smart glasses) for women?
Sarkar pointed out that devices like smart glasses raise privacy and safety issues for women and advocated for an ethical evaluation institute to vet new technologies.
Speaker: Beena Sarkar
How can Maharashtra bridge the skill gap, especially in Tier 2 and Tier 3 cities, to ensure the workforce can effectively use AI technologies?
Kapoor highlighted that only ~20 % of the state’s workforce has advanced skills, stressing the need for large‑scale skill‑development and education reforms.
Speaker: Amit Kapoor
What investments and strategies are needed to improve broadband connectivity and digital infrastructure to support AI adoption across Maharashtra?
Kapoor noted inadequate internet speeds and infrastructure as a bottleneck for AI diffusion, calling for rapid expansion of high‑speed broadband and data‑center capacity.
Speaker: Amit Kapoor
How can AI be leveraged to address nutrition, water, sanitation, and education challenges at the micro (PIN‑code) level in Maharashtra?
Kapoor suggested using AI analytics to pinpoint malnutrition, water‑sanitation gaps and educational needs at granular geographic levels.
Speaker: Amit Kapoor
What are the potential societal risks of AI‑driven content consumption (e.g., doom‑scrolling) on mental health, especially among youth, and how can they be mitigated?
Kapoor warned that AI‑generated content could exacerbate mental‑health issues and called for research into safeguards and digital‑wellness interventions.
Speaker: Amit Kapoor
How can inter‑departmental data sharing (e.g., linking Aadhaar with all state departments) be implemented securely and efficiently to enable unified citizen services?
Goswami emphasized the need for a common citizen database across departments, noting current fragmentation and the importance of secure integration.
Speaker: Ranjit Goswami
What standards and governance mechanisms are needed to ensure AI systems used in law enforcement and cyber security are transparent, accountable, and respect privacy?
Yadav described AI‑enabled cyber‑security operations and called for robust governance, auditability and privacy safeguards.
Speaker: Yashasvi Yadav
How can the state develop a robust, scalable AI training ecosystem (e.g., AI University, online courses) for government employees to ensure capacity building?
Pardeshi mentioned existing AI university courses and the need for wider online training (IGOT) to empower civil servants.
Speaker: Praveen Pardeshi
What are the best practices for creating a small language model (Maha GPT) that can serve both officials and citizens while maintaining data security and accuracy?
Pardeshi introduced ‘Maha GPT’ and implied the need for research on model design, privacy, and dual‑audience usability.
Speaker: Praveen Pardeshi
How can AI be used to predict and prevent financial fraud and cyber attacks from nation‑state actors, especially in the context of emerging threats like quantum computing?
Yadav referenced large‑scale cyber‑attacks during the Pahalgam incident and highlighted the need for AI‑driven threat‑intelligence that anticipates quantum‑enabled threats.
Speaker: Yashasvi Yadav
What governance frameworks are needed to manage the dual‑use nature of AI (civilian vs. military) and its rapid, opaque development?
Singh warned of AI’s speed, opacity and dual‑use characteristics, calling for adaptive, globally coordinated governance to avoid stagnation or harm.
Speaker: Virendra Singh

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

How Small AI Solutions Are Creating Big Social Change


Session at a glance: Summary, keypoints, and speakers overview

Summary

The panel explored how “small AI” (models that are data-efficient, inexpensive to run, and tailored to local contexts) can generate large social benefits, especially for underserved communities in the global south [15-19].


Zameer Brey emphasized that AI must be evaluated for its real-world fit, asking whether a model works in a district hospital in Telangana or for a farmer in Zambia, and argued for designing solutions that are “smaller, faster, sharper, cost-effective” rather than as oversized as an airplane in Delhi traffic [29-34].


Aisha Walcott-Bryant described Google Research Africa’s “Africa for Africa” approach, which starts with a problem-first mindset, builds better weather-forecasting tools despite having only 37 radar stations compared with 300 in Europe, and creates open voice datasets for 27 African languages to enable edge-ready models on laptops and tablets [38-58][60-65].


Wassim Hamidouche highlighted the AI for Good Lab’s open-source small AI projects, such as SPARO for biodiversity monitoring in remote areas and Alert California’s network of 1,300 cameras for early wildfire detection, both designed to run on low-resource hardware and be deployable worldwide [81-98].


Antoine Tesnière noted that many healthcare applications already rely on validated small AI models for image analysis in radiology, dermatology and ophthalmology, which can operate offline on modest devices and support precision medicine despite limited data [102-106][278-306].


Illango Patchamuthu explained that the World Bank treats AI as a tool to reduce poverty, prioritizing simple, scalable small AI pilots that can be replicated across villages and emphasizing that small AI is not “second class” but capable of fast-tracking development outcomes [111-124].


Wassim also outlined the major obstacles for low-resource languages, including the dominance of English on the internet, lack of benchmarks, performance gaps and safety alignment, and announced initiatives such as Lingua Europe and Lingua Africa to fund data collection for dozens of languages [181-210][210-229].


He further recommended shifting from generic data gathering to domain-specific, use-case-driven data collection to improve reliability and enable verifiable “glass-box” models that can assist community health workers on low-cost smartphones [233-238][153-175].


The discussion underscored the importance of open-access resources, with the World Bank hosting an AI Repository containing about 100 small-AI use cases for health, education and agriculture that will remain publicly available [268-272].


Audience questions reinforced the consensus that collaboration among tech firms, multilateral institutions and local partners is essential to build trustworthy, efficient small AI that can operate with patchy connectivity and limited compute [146-148][269-272].


Across the panel, participants agreed that small AI, when designed with local relevance, open data and robust evaluation, can bridge gaps left by large foundation models and deliver tangible benefits to rural and low-income populations [15-19][111-124][81-98].


The session concluded that continued investment in domain-specific data, open-source tools and ecosystem support will be critical for scaling small AI’s impact worldwide [233-238][181-229].


Keypoints


Major discussion points


Defining “small AI” and its relevance for underserved communities – The moderator frames “small AI” as models that are data-efficient, cheap to run, edge-deployable and locally meaningful, contrasting them with large foundation models aimed at a Global-North audience [15-19]. Zameer reinforces the idea with a traffic analogy, arguing that solutions should be “smaller, faster, sharper, cost-effective” for the contexts they serve [32-34].


Concrete small-AI projects across sectors


Google Research Africa showcases low-resource weather-forecasting tools and an open voice-dataset for African languages, emphasizing partnership-led, open-weight models that can run on laptops or tablets [50-65][144-147].


Microsoft’s AI for Good Lab presents SPARO (solar-powered acoustic biodiversity monitoring) and Alert California (camera network for early wildfire detection), both open-source and globally deployable [82-98].


Healthcare examples include validated small models for radiology, dermatology and ophthalmology in France, and a community-health-worker scenario where a tiny smartphone model could have saved a mother’s life [102-110][166-174].


Technical and ethical challenges of low-resource languages and domain-specific models – Wassim outlines four hurdles: dominance of English in training data, lack of benchmarks, performance gaps, and safety/alignment issues for low-resource languages [181-200]. He later stresses the need to shift from generic data collection to domain-specific, use-case-driven datasets to improve reliability [233-238].


Scaling small-AI for development goals – The World Bank stresses that AI must be a means to reduce poverty, requiring simple, low-compute solutions that can be replicated from pilot villages to larger populations [112-120]. It also highlights the creation of a public “AI Repository” of ~100 use-cases and the importance of ecosystem building, local private-sector participation, and job creation [243-262][258-262].


Collaboration, open data and community ownership – Multiple speakers note that partnerships (e.g., Google with African universities, Microsoft with Gates Foundation, World Bank with multilateral banks) and open-source releases (datasets, models, Lingua Africa initiative) are essential to ensure models are trustworthy, locally adapted, and sustainably maintained [60-65][219-228][258-262].


Overall purpose / goal


The panel “Small AI for Big Social Impact” was convened to share how data-efficient, context-aware AI-often built by smaller teams or for specific domains-can be designed, deployed, and scaled to address concrete challenges in health, agriculture, climate, and language preservation, especially in low-resource settings of the Global South. Panelists were asked to describe their organizations’ work, discuss the role of non-foundation models, and explore pathways for broader adoption and impact.


Tone of the discussion


The conversation began formally but quickly shifted to an enthusiastic, solution-focused tone, with speakers using analogies and storytelling to illustrate impact. As technical details emerged (language-model challenges, safety, benchmarking), the tone became more analytical yet remained collaborative. Throughout, optimism about the potential of small AI was balanced by caution regarding reliability and the need for community trust, ending on a collegial, hopeful note as the session closed.


Speakers

Announcer


– Role/Title: Event announcer/moderator


– Areas of Expertise: –


– Affiliation: –


– Source: [S10]


Alpan Rawal


– Role/Title: Chief AI / ML Scientist, Wadwani AI; Moderator of the panel


– Areas of Expertise: AI research, AI for social impact, moderation


– Affiliation: Wadwani AI


Aisha Walcott-Bryant


– Role/Title: Senior Staff Research Scientist and Head of Google Research Africa


– Areas of Expertise: AI research, weather nowcasting, African language datasets, accessibility, AI for development


– Affiliation: Google Research Africa


– Sources: [S1][S2][S3]


Zameer Brey


– Role/Title: –


– Areas of Expertise: AI for inequality reduction, context-aware AI deployment, small-scale AI solutions


– Affiliation: –


– Sources: [S6][S7]


Illango Patchamuthu


– Role/Title: World Bank Group Director of Strategy and Operations, Digital & AI Vice-Presidency; Acting Director for Data and AI


– Areas of Expertise: International development, AI for poverty reduction, AI policy, AI use-case repository


– Affiliation: World Bank Group


– Sources: [S8][S9]


Antoine Tesnière


– Role/Title: Professor of Medicine and Entrepreneur; Anesthesiologist at Georges Pompidou European Hospital; Co-founder of ILEMENTS; Director, Paris-Saint-Denis Campus


– Areas of Expertise: Health innovation, medical AI, small-model deployment in healthcare, precision medicine


– Affiliation: Georges Pompidou European Hospital, ILEMENTS, Paris-Saint-Denis Campus


– Sources: [S13][S14][S15]


Wassim Hamidouche


– Role/Title: Principal Research Scientist, AI for Good Lab, Microsoft


– Areas of Expertise: Computer vision, NLP, multimodal AI, low-resource language models, AI for Good initiatives


– Affiliation: Microsoft AI for Good Lab


– Sources: [S19][S20]


Audience


– Role/Title: Various audience members (questions from Irish Kumar, Selena, Dr. Ravi Singh, etc.)


– Areas of Expertise: –


– Affiliation: –


– Sources: [S16][S17][S18]


Additional speakers:


Neha Butts – Associate Director, Human Resources (handed out mementos at the close of the session)


Selena – CEO and Co-founder, Zindi (asked a technical question about open-weight models)


Dr. Ravi Singh – (identified as a doctor from Miami, asked an “AI wars” question)


Irish Kumar – Representative of CSC Winnie Ocean Center (asked about AI capacity for youth and agriculture)


No external source citations were available for the additional speakers; information is taken from the transcript.


Full session report: Comprehensive analysis and detailed insights

The session opened with moderator Alpan Rawal framing the panel theme as “small AI for big social impact”. He defined small AI as data-efficient, inexpensive, edge-ready models that are built for the specific local contexts of underserved communities rather than for a generic Global-North audience [15-19].


Zameer Brey reinforced this definition, asking whether a model would work in a district hospital in Telangana, for a farmer in Zambia, or in a classroom in rural Senegal, and warned against “designing solutions as oversized as an aeroplane for Delhi traffic”. He advocated “smaller, faster, sharper, cost-effective” tools that move users from point A to point B [29-34].


Aisha Walcott-Bryant outlined Google Research Africa’s “Africa for Africa” philosophy, which starts with a problem-first mindset: if a simple binary solution exists, AI is unnecessary [44-47]. She highlighted two projects. First, a continent-wide weather-forecasting system built despite Africa having only 37 radar stations (versus ~300 in Europe), demonstrating innovation under severe data constraints [54-58]. Second, Google released an open voice dataset covering 27 African languages (out of ~2 000) and open-weight models such as Gemma that can run on laptops and tablets [60-65][144-147].


Wassim Hamidouche described Microsoft’s AI for Good Lab, which pursues real-world impact through open-source, low-resource solutions and a collaborative model with NGOs, academia and local governments [81-86]. He presented SPARO, a solar-powered acoustic and remote-recording system that uses cameras with an HAA model, satellite links, and runs in several pilot countries [82-86]. He also detailed Alert California, a network of 1 300 cameras operating 24 × 7 with AI-enabled early-fire detection to enable rapid emergency response [91-98]. Both projects are deliberately built for modest hardware and released openly for global adoption [81-98]. He added technical specifics of Microsoft’s models (4 billion-15 billion parameters) and noted a 12 % balanced performance gain after continual pre-training and instruction fine-tuning on low-resource languages [150-155].


Antoine Tesnière illustrated that many healthcare applications already rely on validated small-AI models. He cited radiology, dermatology and ophthalmology tools that run on inexpensive edge devices, operate offline, and follow the principle “AI provides information, not decisions” [102-106][278-306][300-306][333-339]. He placed his work in the broader context of the European Health Data Space, which covers 450 million citizens across 27 countries [310-313]. He emphasized that AI should augment, not replace, clinicians, delivering transparent decision-support even when data are scarce [304-306][333-339].


Illango Patchamuthu (World Bank) positioned AI as a means to achieve development goals-poverty reduction and job creation-stating that AI solutions must be simple, scalable and replicable from a pilot village to larger populations. He clarified that “small AI is not ‘second class’” and can fast-track outcomes [111-124]. The Bank is compiling an open-access AI Repository with roughly 100 use-cases across health, education and agriculture; the repository will later accept user-submitted cases once legal clearance is obtained [243-262][258-262][269-272][262-264].


Following the opening remarks, the discussion moved to technical challenges for low-resource languages. Wassim identified four major obstacles: English-dominant internet data, scarcity of evaluation benchmarks (only ~300 languages have any), a persistent performance gap between high- and low-resource languages, and safety-alignment work limited to high-resource languages [181-200]. To address these, Microsoft is piloting Inuktitut, Chichewa and Māori and has expanded the Lingua Europe initiative to Africa as Lingua Africa, allocating US $5.5 million (in partnership with the Gates Foundation) to fund data collection for African languages [210-229].


The panel recommended shifting from generic data collection to domain-specific, use-case-driven datasets to improve reliability and enable “glass-box” models whose decision logic can be audited [233-238][239-242]. Zameer illustrated the stakes with a maternal-health anecdote in which a low-cost smartphone model could have prevented a fatal outcome, underscoring the need for verifiable, near-zero-error models [166-174][160-165]. Wassim added practical guidance for developers of low-resource language models: start from a strong multilingual base model, use multilingual tokenizers, augment with monolingual, bilingual and translated data, and leverage speech-to-text and text-to-speech to compensate for limited textual resources [367-375][376-378].
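The augmentation pipeline Wassim outlines (a strong multilingual base model, then monolingual, bilingual and translated data) implies a weighted training mixture. A minimal sketch of that mixing step, with entirely invented corpora, sizes and weights, might look like this; a real pipeline would stream tokenized text from such a sampler into continual pre-training.

```python
import random

# Illustrative sketch only: weighted sampling over the three data sources the
# panel names. Contents and mixture weights below are invented for
# illustration, not taken from any Microsoft or Google pipeline.

corpora = {
    "monolingual": ["authentic low-resource-language sentence"] * 100,
    "bilingual":   ["lrl sentence ||| english translation"] * 400,
    "translated":  ["machine-translated synthetic sentence"] * 500,
}

# Up-weight the scarce authentic monolingual text so the model sees it
# more often per epoch than its raw size would allow.
weights = {"monolingual": 0.5, "bilingual": 0.3, "translated": 0.2}

def sample_batch(corpora, weights, batch_size, rng):
    """Draw a batch whose per-example data source follows the mixture weights."""
    names = list(corpora)
    probs = [weights[n] for n in names]
    chosen = rng.choices(names, weights=probs, k=batch_size)
    return [(name, rng.choice(corpora[name])) for name in chosen]

rng = random.Random(0)
batch = sample_batch(corpora, weights, batch_size=8, rng=rng)
```

The same sampler can be reused for the speech-derived text the panel mentions, simply by registering speech-to-text output as another weighted corpus.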


A clear disagreement emerged around acceptable error rates for health applications. Zameer called for virtually zero-error, verifiable models [160-165], while Antoine argued that models need only outperform current clinical practice, noting that existing systems are rarely 99.999 % accurate yet still provide value [308-311]. Illango added that any failure erodes community trust, reinforcing the need for reliability without insisting on absolute zero error [263-267].


Audience interaction highlighted further policy relevance. A question from the floor asked how AI could empower youth in agriculture and renewable energy; Illango responded that digital literacy, skilling programmes, STEM education and targeted private-sector partnerships are essential to translate small-AI solutions into livelihood opportunities [386-390]. Selena from Zindi asked about open-source LLMs for low-resource languages; Wassim explained that choosing a robust base model, augmenting it with multilingual data, and incorporating speech models are key steps [391-393]. Dr. Ravi Singh queried “which platform will win the AI wars?” prompting a panel discussion that emphasized healthy competition, market pluralism and the importance of open ecosystems rather than a single dominant platform [394-395].


In conclusion, the panel affirmed that small AI-when built with humility, co-creation and open resources-can deliver tangible social benefits in health, agriculture, climate monitoring and language preservation. Ongoing actions include expanding the World Bank’s AI Repository, scaling Microsoft’s Lingua Africa data-collection programme, continuing Google’s release of open-weight models for edge deployment, and fostering ecosystem development through digital literacy, STEM education and private-sector engagement. While challenges remain-particularly around zero-error guarantees, safety alignment for low-resource languages and scaling pilots to national programmes-collaborative, domain-specific and transparent approaches provide a viable pathway to realising the promise of small AI for underserved communities worldwide [15-19][29-34][38-48][50-58][60-65][71-79][82-98][102-106][278-306][111-124][181-200][210-229][233-238][160-165][308-311][263-267][386-393][394-395][367-375][150-155][310-313][262-264].


Session transcript: Complete transcript of the session
Announcer

Please, I would request you to take your seat on the panel. Wassim Hamidouche, who’s a principal research scientist at Microsoft’s AI for Good Lab, specializing in computer vision, NLP, and multimodal AI with a focus on low-resource languages. Requesting you to please take your seat. Illango, who’s a World Bank Group Director of Strategy and Operations in the Digital and AI Vice Presidency and also serving as Acting Director for Data and AI. Requesting you to please join the panel. Thank you. Aisha Walcott, who is a senior staff research scientist and head of Google Research Africa, focused on AI development, addressing the continent’s most pressing challenges. She holds a PhD in electrical engineering and computer science and holds leadership roles in the IEEE Robotics and Automation Society.

Requesting you to please join the panel. Antoine, Antoine Tesniere, who’s a French professor of medicine and entrepreneur, specializing in health innovation and crisis management and an anesthesiologist at the Georges Pompidou European Hospital. He co-founded ILEMENTS, coordinated France’s national COVID response, and since 2021 has served as director of Paris-Saint-Denis Campus. Thank you so much for being here. Requesting you to join the panel. And Dr. Alpan Rawal, who’s chief AI/ML scientist at Wadwani AI, will be moderating today’s session. Alpan, requesting. Thank you. Handing it over to you.

Alpan Rawal

Yes, thank you everyone for coming. Requesting those at the back, if you could close the door so that we can reduce the noise a little bit. It’s full. OK, great. Well, if you could just calm down a bit and settle down. Thank you. Welcome to all our esteemed panelists and our panel. The topic of our panel, as you know, is on small AI for big social impact. I’d like to deeply thank our panelists for making it all the way for the summit and making it to this panel. So what do we mean by small AI? I think different people have different definitions, and we are sort of open to how each panelist chooses to interpret small AI. When we at Wadwani AI brainstormed about this panel, we thought it would reflect in some ways the ethos of our own work: making models that are data efficient, that are cheap to run, that sit on the edge, and most importantly are meaningful to the communities that we serve, which are underserved communities, mostly in rural India.

But it’s increasingly clear that small AI means a lot more. And I see a lot of people talking about small AI in the summit. More generally, I think it encapsulates any AI that meaningfully impacts individuals while taking into account and respecting their very local context, rather than providing generic outputs. So anything like that could rightly be called small AI, and we’re going to hear from our panelists about their experiences with AI models like that. So with that small introduction, let’s now avoid further ado and speak to our panelists. So, can you hear me at the back? Yeah, okay. So we can start. I have a common question for every panelist. Each of you represents a different and important aspect of AI work that’s happening outside of the mainstream excitement that focuses on large foundation models for a primarily global north audience.

Can you tell us briefly about your organization's work and perhaps your thoughts on non-foundation AI models in general? Maybe we can start with Zameer.

Zameer Brey

Thanks, Alpan. We really see the opportunity for AI to reduce inequality, and our starting point with any AI tool is: does this work, for whom, where, and at what scale? Those are the starting points for us: looking beyond the model's performance against a benchmark to how it is going to work in a district hospital in Telangana, for a smallholder farmer in Zambia, or in a classroom in rural Senegal. Part of what we've in some ways got caught up with is the performance of the model on its own, and we've forgotten to ask how it fits into the lives and the context it operates in.

And in doing so, part of what we need to think about is who's designing the model and what it's designed for. I was thinking about the traffic that we've been experiencing the last few days in Delhi, and I thought to myself: given the traffic here, would anyone design something as big as an aeroplane to try and get across the city? No. I think we would design something that's a lot smaller, faster, sharper, cost-effective, and gets us from point A to point B.

Alpan Rawal

I think that’s a great analogy. Aisha, can you tell us a bit about your work at Google Africa?

Aisha Walcott-Bryant

Yes, thank you. So I lead our Google Research Africa team. We have two sites, one in Ghana and one in Kenya, representing East and West, but the work that we do is essentially from Africa, for Africa and the world. Much of our work is scaling from the uniqueness of the continent; it turns out that a lot of the challenges are similar across the global south and, generally, worldwide. Leaning into the next part of your question about how we approach this type of work: it's very much problem first. I always say, if there's a red button that you can press and it's a one or a zero, just build the red button.

We don't need to bring in AI or technology, so it's really important to be very thoughtful about the type of problem. Coming from Google Research, we want to leverage our compute, our AI expertise and capabilities, and then our mandate, which is societal impact at scale, to think about the types of problems that we work on. I'll give two good examples. One is weather nowcasting, which we launched last year across the continent of Africa. Much more accurate weather forecasts are absolutely essential, given that much of the continent, as well as India, relies on agriculture for labor, and agriculture in Africa is about 95% rain-fed.

And at the same time, on the technical challenges side, we know that in North America and in Europe there are about 300 or so weather radar stations, and in Africa there are only 37, I believe. You know you can fit both North America and Europe inside Africa. So when you think about that, you have to innovate, and those constraints of the environment that you were alluding to in the intro are part of the motivation for having a research team on the continent. That was one way that we innovated and made solutions available to the continent. The other example is complementary: working with the ecosystem, working with partners in Africa, including Makerere University, Digital Umuganda, and others across the continent, on African languages.

We just released a dataset now covering 27 voice languages, given that Africa has 2,000 or so languages; this is the start. Most importantly, it's partnership-led and -driven, and because it's voice, it is about accessibility and about reaching rural villages as well, enabling the ecosystem to build solutions from there, whether smaller models or larger models. Making that type of data open and available is another way that we are leveraging this notion of smaller AI.

Alpan Rawal

Thank you, great insights. Wassim, can you tell us about your work at Microsoft?

Wassim Hamidouche

Yes, thank you for the invitation; it's a great pleasure to be here today. First, what is the AI for Good Lab? The AI for Good Lab is the philanthropic research arm of Microsoft. We employ advanced AI technology to solve real-world problems with real societal impact; this is very important. As for how our team and researchers work, we closely collaborate with NGOs, governments, nonprofit organizations, and local communities around the world, and together we build AI solutions across multiple domains: agriculture, food security, healthcare, education, culture, and so on. That's the AI for Good Lab at Microsoft. Now, I'm a scientist, so I'd like to give you two concrete examples where we use small AI; they are also two global solutions to tackle global challenges.

They are valid for both the global north and the global south. The first project, in biodiversity, is called SPARROW, for Solar-Powered Acoustic and Remote Recording Observation. It is an AI-powered, open-source solution designed to track and monitor biodiversity in the most remote and hard-to-reach regions of the world. SPARROW units are camera traps with AI models that detect animal species, and these observations are then transmitted over wireless and satellite connectivity where we don't have infrastructure. SPARROW is already deployed in many countries around the world, including Colombia, Peru, the United States, and Tanzania, and it really enables practitioners and researchers to understand the species present in an ecosystem

at scale, supporting more timely and informed decisions to protect biodiversity. The second project focuses on wildfires. As you know, wildfires have become a real global threat, with devastating impacts on lives, communities, ecosystems, and even economies, and around the world they are increasing in both frequency and intensity, making early detection and rapid response more critical than ever. Through Alert California, we are addressing this challenge using AI. What is Alert California? It is a network of 1,300 cameras operating 24/7. We are developing AI tools that run on top of this infrastructure, enabling early fire detection so that emergency responders can act quickly and stop fires before they spread.

So Sparrow and Alert California, as I said, are two global solutions to global problems that can be deployed anywhere in the world, and we provide them open source so that anyone can adopt and deploy them. Thank you.

Alpan Rawal

Thank you. Antoine, I think you're the only member of this group who doesn't work in the global south. Could you tell us a bit about the work at PariSanté Campus and how you're using, maybe, non-foundation models?

Antoine Tesniere

Thank you, Alpan, for this invitation, and I'm happy to be the outsider of the panel. I'm working in healthcare, and I'm leading a new kind of innovation ecosystem for healthcare where we gather researchers, doctors, patients, startups, and industry together, as well as institutions. The idea is to really create a whole community of innovation and engage in the use of data and artificial intelligence. Healthcare is probably one of the fields where AI has a long-standing history: the world discovered AI with the rise of gen AI, but a number of small AI models have been designed for a long time, and this is why we already have a number of validated tools that we can use in healthcare, answering the question not only "does it work?" but "is it reliable?", which is very important for our patients. So, before we have proof of the efficiency of LLMs in the medical field, which is not fully clear yet, we use machine learning tools, which are actually small AI models, in very specific areas where they work really nicely today: image analysis, or pattern analysis. You can think of radiology, for example; chest X-rays or fractures in the emergency room are fully analyzed by small AI models and tools that are easily deployable on small computers. You can also think of picture analysis in dermatology, in ophthalmology, et cetera. These are very concrete examples of already validated small AI models in healthcare that are used on a daily basis, at least in France and Europe. We'll get back in the discussion to how we need data efficiency on this topic, but it's really important to understand that these models are already deployable, and some of them can actually work offline, which is really important in some environments. Thank you.

Alpan Rawal

Ilango, from a World Bank perspective, what is your view on these types of models?

Illango Patchamuthu

Thank you very much for the opportunity to be here. Coming right at the end, I don't know what new things I can say, but I'll basically reinforce the messages that have been shared. For us at the World Bank, we see AI as a means to an end, and very much our AI agenda is shaped by the mission of the World Bank, which is to reduce poverty and grow prosperity in the world. When you take that lens and apply it, we have to keep it simple. Not all countries have the compute power, the electricity, the talent, and the data. Therefore, taking tested small AI applications to scale and replicating them around the world is something that we see as a mission priority.

In that respect, what Wadwani AI is doing here is pioneering, and what I heard this morning from Dr. Sunil Wadwani himself about what you're doing in TB and in out-of-school children is all tremendous; it has great potential for application. Often what happens is we focus a lot on pilots, and then, once the sheen wears off, people forget the pilot. What we need to do, and what we are doing at the World Bank, is to take those pilots, whether in health, education, or agriculture, in the small AI setting where they work in rural communities, offline, where data is not that rich, talent is not readily available, and they don't require a lot of electricity (they're plug-and-play), and then get the right KPIs, which allow us to go from a community of 50 villages to a larger population center, and see how best we can help them, say in agriculture, to improve productivity with better inputs. We are now working in UP in partnership with Google, and we are doing the same thing in Maharashtra.

The goal is household income: the inputs get better, and farmers can access markets and agricultural credit. Similarly, in health and education, there are great practices that we are seeing in Africa, in Ghana, in Kenya. So how do we take these models and replicate them? I'd like to assure everybody, because this is something people think: small AI is not inferior. It's not second class. Small AI can solve problems; it is a means to an end. And it can fast-track development outcomes: we've known the problems with the Millennium Development Goals and the Sustainable Development Goals, and many countries are lagging behind. This is an opportunity where this technology, if it can be put to use in the right context in the right way, can help us achieve development outcomes faster.

Alpan Rawal

Thank you, that's really interesting. Aisha, I'm going to come back to you and ask, more specifically, how does the work that Google Research does in Africa impact rural communities? How does one bring the benefits of technologies like these big foundation models to devices that may have only patchy internet and very little data?

Aisha Walcott-Bryant

Thanks, that was a loaded question; two parts there. First and foremost, in general, we approach these challenges with humility and by relating. I always start with: I'm a scientist, but I'm also a mother, right? And that's a thread that I've been following for a long time, a thread that binds so many of us. When you think of that, you realize a lot of the solutions that we're building are not for "them"; they're for us. I'm using the same health systems that you all are developing interesting tools and models for, and we have many of the challenges around weather as well. So I think the first thing is to have that base human layer as we think about our work, and to connect with those communities, whether they're rural or urban.

A lot of the work that we do looks at large populations. If you think about agriculture, for example, which employs a large part of the labor force, there are many different ways that people are part of that value chain, whether they're actually doing the growing, providing the inputs, or making the decisions and carrying the risks along the way. So getting out in the community is a very important part of the work we do: connecting with those communities, and then really thinking about, coming home to Google Research, where our unique value proposition is. We're not necessarily going to solve the whole problem alone; it usually requires behavior change, policy, and many pieces of the puzzle. How do we best fit our role? We do this in co-creation with partnerships. That's the second layer of fabric in how we reach these rural communities. And then on the other side...

Alpan Rawal

Do you have an example of that?

Aisha Walcott-Bryant

Oh yeah, absolutely. For example, the languages work that I was talking about: the project is called Waxal, a Wolof (Senegalese) word that means "to speak," and the way we wanted to create it was with the community.

So if you have partners across the continent, let them be a part of the process of collecting the data and of understanding their language and local context, to get these high-quality datasets. Being partnership-driven and knowing our role and our place was what made that very successful. And the last point, on the second question that you threw in there: our open-weight models, Gemma, are made for a lot of these solutions that are closer to the edge. We have small models that can run on your laptop and tablets and so forth.

Alpan Rawal

Do you actually use them in Africa?

Aisha Walcott-Bryant

Oh, yes. Yes, yes, yes, yes.

Alpan Rawal

Great, thank you. Zameer, the next question is for you. Much of your work at the foundation is about reducing inequities by promoting safe and responsible use of AI. What role, in your view, do small and custom AI models have to play in this? And if you can provide examples, that would be great.

Zameer Brey

Sure, Alpan. I do want to touch a little on the issue of reliability, because my colleague here spoke about it, and I think it's a critical issue. I'm sorry if I'm going to repeat this example from one of my previous panels, and since I'm going to be on a plane later, it's a bad idea, but I asked the audience anyway: if I said to you that the plane has a high probability of leaving Delhi and landing safely wherever it's going, and that probability was 90%, would you get on that flight? 95%? 99%? No? No. I did have one guy who thought about it and then said no.

And the point is that I think we've got to work towards models that have zero error, so much so that what we are trying to wrap our heads around is the concept of verifiable AI, which shifts the narrative from a black box to a glass box. It actually exposes the logic: for a particular set of inputs, you can follow the logic chain, and it gives you a set of outputs that you can really track. You can audit it, you can see that it's repeatable, and you can prevent some of the fundamental errors that we are starting to see. And I want to go back to a very real example, Alpan, because when I think about small models, I come back to the user: the community health worker who tried to help a mother.

One of our grantees shared this very personal story of a first-time mother who presented at six months pregnant and said her hands and her feet had started to swell. The community health worker looked and said, "You're pregnant; this is normal." Four weeks later, she started having headaches and blurred vision. I think colleagues will know where the story goes. Unfortunately, that mother had severe gestational proteinuric hypertension. It was missed, and the mother and the baby didn't make it. But what inspired our grantee was this: if the community health worker had had a small model that worked on her device, a low-cost smartphone with patchy internet, built small enough to help her make good decisions at that point of care,

we would be sitting with a very different outcome today. And so I think small models present us with those opportunities.
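[Editor's note: the "glass box" idea Brey describes, a logic chain you can follow, audit, and repeat for a given set of inputs, can be sketched as a minimal rule-based triage helper. This is purely illustrative: the rules, thresholds, and field names below are hypothetical, not clinical guidance, and a real decision-support tool would be built and validated with clinicians.]

```python
# Minimal sketch of a "glass box" decision aid: every output carries the
# full chain of rules that fired, so the logic is auditable and repeatable.
# All rules and thresholds here are hypothetical examples, not medical advice.

RULES = [
    # (rule name, predicate over the input record, flag raised)
    ("swelling_after_20_weeks",
     lambda r: r["gestation_weeks"] > 20 and r["swelling"], "refer"),
    ("headache_with_blurred_vision",
     lambda r: r["headache"] and r["blurred_vision"], "urgent"),
]

def triage(record):
    """Return (decision, trace). The trace lists every rule that fired."""
    trace = [(name, flag) for name, pred, flag in RULES if pred(record)]
    if any(flag == "urgent" for _, flag in trace):
        decision = "urgent referral"
    elif trace:
        decision = "refer for clinical check"
    else:
        decision = "routine follow-up"
    return decision, trace

visit1 = {"gestation_weeks": 24, "swelling": True,
          "headache": False, "blurred_vision": False}
visit2 = {"gestation_weeks": 28, "swelling": True,
          "headache": True, "blurred_vision": True}

print(triage(visit1))  # one rule fires, so the case is referred for a check
print(triage(visit2))  # the urgent rule fires, escalating the referral
```

Because the trace is deterministic, the same inputs always yield the same decision and the same explanation, which is the property being contrasted with a probabilistic black box.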

Alpan Rawal

Very interesting, very good points. Wassim, you spoke in a general sense about the research done at the AI for Good Lab at Microsoft. Are there specific examples from your work where you see the benefits of building domain-specific models to realize impact? And are there research lessons that we can take away from this? I think it would be good for the audience and for us to understand what research directions for the future can come out of this work.

Wassim Hamidouche

We work with open-weight models from 4 billion to 15 billion parameters, and once we select the best LLM for a target language, we apply a set of recipes to boost its performance for these low-resource languages. But let me come back to the challenges we face for low-resource languages. When we train these foundation models, we train them on internet data, and more than 60% of internet data is English, followed by a few high-resource languages like French, Mandarin, Portuguese, et cetera. Low-resource languages, even though there are more than 7,000 of them, represent only a tiny portion of internet data. That is the first challenge. The second is benchmarks: when we build LLMs, we evaluate their performance on benchmarks.
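[Editor's note: the imbalance Hamidouche describes can be made concrete with a little arithmetic; the shares below are illustrative round numbers in the spirit of the talk, not measured corpus statistics.]

```python
# Illustrative sketch of web-corpus language imbalance. The shares are made-up
# round numbers (English > 60%, per the talk), not real crawl measurements.
corpus_share = {
    "English": 0.60,
    "high-resource (French, Mandarin, Portuguese, ...)": 0.30,
    "all other ~7,000 languages": 0.10,
}

# Even a generous 10% residual, split over ~7,000 languages, leaves each
# low-resource language with a vanishing share of training tokens on average.
per_language = corpus_share["all other ~7,000 languages"] / 7000
print(f"average share per low-resource language: {per_language:.6%}")
```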

We have seen that only around 300 languages have even one benchmark, so more than 6,000 of the 7,000-plus languages don't have a single one. And even among those 300, most benchmarks are just translations from English into the low-resource language; they have nothing to do with the culture and context of these languages. The third challenge is the performance gap: there is, of course, a performance gap for these LLMs, even the frontier models, between high-resource and low-resource languages. The fourth is safety. When we build LLMs, we usually do safety alignment with reinforcement learning, but this alignment is done mainly in English and a few high-resource languages.

So when we make an LLM much stronger in a low-resource language, it raises new safety issues: we have to evaluate the LLM for safety in that language, and do the alignment, the reinforcement learning, in the target language as well. In this project, we addressed some of these issues.

We targeted three pilot languages: Inuktitut, an indigenous language spoken in the north of Canada; Chichewa, in Malawi in Africa; and Maori, in New Zealand. Why did we select these three? Simply because we had access to local communities to help us get data. We gathered data from these communities, then used continual pre-training and instruction fine-tuning to boost the performance of open-weight LLMs, and we were able to gain 12% in performance, closing the gap with English. So what's next? We are trying to expand this to more languages; we have a collaboration, for example, in South America with Paraguay to develop an LLM for Guarani, and we want to extend this to other languages.
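[Editor's note: one concrete piece of the pipeline Hamidouche outlines, turning community-collected sentence pairs into instruction fine-tuning examples, can be sketched in plain Python. The record fields, prompt template, and placeholder sentences below are assumptions for illustration, not the team's actual data format.]

```python
# Sketch: format community-collected sentence pairs as instruction-tuning
# examples (prompt/response records) for a low-resource language.
# The field names, template, and placeholder targets are illustrative.

def to_instruction_example(pair, language):
    """Wrap one (English, target-language) pair in an instruction template."""
    return {
        "prompt": f"Translate the following English sentence into {language}:\n"
                  f"{pair['english']}",
        "response": pair["target"],
    }

community_data = [
    {"english": "The clinic opens at nine.", "target": "<Chichewa sentence>"},
    {"english": "Rain is expected tomorrow.", "target": "<Chichewa sentence>"},
]

dataset = [to_instruction_example(p, "Chichewa") for p in community_data]
print(len(dataset), "instruction examples")
print(dataset[0]["prompt"])
```

In practice this kind of formatted dataset would feed a supervised fine-tuning step on top of a continually pre-trained open-weight model; the sketch only shows the data-preparation shape, not the training itself.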

But most importantly, we have launched an initiative to help communities get the best out of their languages: a project called Lengua Europe, to fund data collection for 10 languages in Europe. It was released last September and was very successful; we received many applications, 10 were selected, and now we will start working with them. It was so successful that we are extending the initiative to Africa through Lingua Africa, announced just today at the AI Summit. We will be allocating 5.5 million to support data collection for African languages, in partnership with the Gates Foundation, Microsoft AI for Good, and FCDO.

And this initiative will be led by the Masakhane African Languages Hub.

Alpan Rawal

Sorry, just to follow up. For people who are working on these small or domain-specific language models, say for the healthcare domain or some other domain, are there strategies they should pursue that you can recommend?

Wassim Hamidouche

Yes, this is very important, and it's also related to the call for Lingua Africa, because many efforts have been made in the past to collect general-purpose data. We now have enough general-purpose data, I think, but when we evaluate the performance of these AI tools on specific applications, for example healthcare, education, or agriculture, they don't work as expected. So what we want today, instead of focusing on general data collection, is to focus on domain-specific, application-specific, use-case-specific data collection and to build AI tools for specific domains. That addresses the reliability issues: we will have models that perform well in the target low-resource language, in that application, that can be deployed and used by local communities.

This is really a priority for the next…

Alpan Rawal

Thank you. Ilango, let me come to you. You have vast experience in international development. Can you give us a view of the future as it relates to using AI for developmental goals? Do you think AI will have a meaningful role to play in the transition of emerging economies to advanced economies?

Illango Patchamuthu

I do think the prospects are good, and our North Star is job creation. We need to support countries so that AI doesn't automate jobs away but actually supports the creation and enhancement of jobs, and this is where small AI becomes imperative, unlike the large foundation models, which will have different implications. The second question is how we go about it. Whether it's large language models or small AI solutions, you need an ecosystem, and that ecosystem needs to be powered by the local private sector. And often what we see, even now, whether the AI revolution is before us or not, is that small enterprises, whether in the SME space or in the larger space, struggle for a variety of reasons.

If countries don't reform business processes and make things like permitting easier, which AI can help with, AI is not going to play an effective role. So there are some fundamental reforms needed, and this is where foundational investment in DPI, digital public infrastructure, needs to happen: to create the ecosystem and the ability for it to work with the private sector and local communities to create those jobs. And this is what we're seeing everywhere. Here too, you see this whole vibrancy around the startup ecosystem. Why? Because young people see opportunities, and this momentum can build everywhere in the world.

Whether it be in India, the rest of South Asia, Africa, Latin America, or even the Pacific region. So how do you go about it? We joined hands with a number of multilateral development banks, and in the last couple of days we launched a small AI use-case repository. It's a good 100 cases, explaining, in health, education, agriculture, and job creation, how AI can be leveraged to the maximum advantage of communities, in terms of service delivery, productivity gains, and household income gains. All this eventually leads to better jobs, better employment, and better income prospects. So we are very upbeat about small AI, but I do take the point about community trust.

Once it fails, the community is not going to believe in it. So it's very important that whatever we put in place, working with partners including the MDBs, Microsoft, Google, Gates, and everyone, leaves behind in small communities something trustworthy and reliable, something that doesn't, at the end of the day, hallucinate and give a farmer advice that leaves them struggling with other challenges. Thank you.

Alpan Rawal

So this report you mentioned, is it open access?

Illango Patchamuthu

We are hosting it at the World Bank; it's called the AI Repository. Just search for it and you'll be able to access it. It has 100 use cases, and we'll continue to update it. Once we're able to sort out some legal issues, we'll also allow anyone to submit their use case to the repository; obviously, we'll go through a filtering process to ensure that the right ones are there.

Alpan Rawal

Great, thank you. Antoine, coming to you. Your organization uses AI to advance health outcomes through research and commercialization. Are data-efficient and hardware-integrated AI models important for the work happening at PariSanté Campus? And do you see these models potentially being deployed in low- and middle-income countries like India?

Antoine Tesniere

Yes, they are clearly very important for us, for different reasons; we'll get back to scalability and use in low- and middle-income countries. First, the reality in healthcare is that data is scarce and siloed, so you need to work with what you have. Sometimes it's a large dataset, sometimes a very small one, but you need tools that allow you to build relevant algorithms and relevant analysis on small datasets. In the meantime, of course, we're building larger datasets: sometimes at the level of one department in one hospital, sometimes one hospital, sometimes a group of hospitals.

In the end, what we are reaching in Europe is the constitution of a large European health data space: 450 million citizens joining their health data in digital public infrastructure organized across 27 countries, which will be a world first. But in the meantime, we need to work with the reality of scarce data. Second, not only is data limited, but when you want to enter the new revolution in medicine, what we call precision or personalized medicine, you need to work on very efficient algorithms, because they need to adapt to one person and not only to a whole population; you need to take that into account when building the algorithm. The last thing is that you also have to work with what exists in healthcare systems, which is often not supercomputers or high computing power on remote servers.

When you're in a patient's room or working in a hospital, it's a very simple computer, and you need efficient algorithms and tools that can run on that kind of machine. And, of course, you go all the way to a smartphone at some point if you go into remote areas. This is why we work with this kind of approach: making sure that, of course, we have research on LLMs and large computing power, but also this work on small data and very efficient algorithms.

Alpan Rawal

Can you give examples?

Antoine Tesniere

Well, yes. I already gave some examples from radiology; we have radiology algorithms running on small machines. And getting back to your example, which I think is really important, it gives me the opportunity to state two very important facts. One is that the AI we use provides information; it does not make decisions in healthcare. Of course we target high levels of reliability, but in the end it's a human decision, and this is very important, I think. The second is that we've been comparing the performance of the algorithms we've been designing with existing performance. Of course you aim for 99.999%, et cetera, but what very few people actually know is that the actual performance of what we do at the moment is not 99.999%. Most of the time, and I won't say the numbers, the algorithms are actually better than what we have.

And this is really important in your example: is it good enough compared to what we can actually do at the moment? And I think it's particularly important in low- and middle-income countries, because a very simple solution, offline LLMs, et cetera, can solve many, many issues.

Zameer Brey

Alpan, can I pick up on that quickly? I think it's really important, and actually I'm going to name the number, if that's okay. A really important World Bank study from a few years back showed that, for a set of five very simple conditions, diagnostic accuracy was 50% across eight countries. Fifty percent. What illnesses are we talking about? Acute diarrhea, upper respiratory tract infection, maternal hypertension. And the point is that I don't think any of us would be happy with 50%, the equivalent of tossing a coin and saying that's okay. So I completely understand that today there's a big gap between what the models can offer and current practice, and I think the question of whether the models perform better than the average clinician is settled.

Alpan Rawal

Sorry, I can't resist a follow-up question. Often you find that the average accuracy of models is far better, but models seem to fail more unpredictably than humans; at least that's the understanding in healthcare. Do you agree with that, or do you think it's not true? Anyone who wants to answer.

Antoine Tesniere

Well, I think we would need another hour to discuss this. What you say is absolutely true. But then you need to look at every pathology or every symptom that you're examining, because diagnostic performance can be a little higher in certain places and situations, and a little lower in others. But we get to the same point, which is that what we are building is actually better than what we are able to do at the moment. And what we show in the scientific literature is that the combination of the algorithm and natural intelligence, I would say the doctor, is actually the best tool so far. So, getting back to your question of how we deploy this in low- and middle-income countries, I think it's really important.

We need models that are able to run on small devices, and to run offline, sometimes with a very limited set of data and very limited algorithms. We were actually discussing in Paris examples of an offline LLM providing answers to the 10 most important questions for healthcare in low- and middle-income countries. That doesn't need online LLMs with massive compute. So that's the first point: edge-native AI. We also need data-efficient learning systems, because most of the time in low- and middle-income countries we have a limited amount of data available. This is what I discussed earlier.
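To make the edge-native point concrete: a toy sketch (my own illustration, not any panelist's actual system) of how an offline device could handle a small fixed set of vital health questions without any online LLM, matching the user's question against a curated FAQ using only the Python standard library. The questions and answers here are illustrative placeholders; a real deployment would use clinically reviewed content.

```python
# Toy offline question-answering over a curated FAQ. No network, no large
# model: fuzzy string matching from the standard library is enough when the
# question set is small and fixed.
import difflib

# Hypothetical curated answers (placeholders, not medical guidance).
FAQ = {
    "what should i do for a child with acute diarrhea":
        "Give oral rehydration solution and continue feeding; seek care if it worsens.",
    "what are danger signs in pregnancy":
        "Severe headache, blurred vision, swelling, or bleeding: go to a clinic now.",
    "how do i treat a fever at home":
        "Keep the patient hydrated and cool; seek care if the fever is high or persistent.",
}

def answer_offline(question, faq=FAQ, cutoff=0.4):
    """Return the curated answer closest to the question, or a safe fallback."""
    match = difflib.get_close_matches(question.lower(), faq.keys(), n=1, cutoff=cutoff)
    if match:
        return faq[match[0]]
    return "No offline answer available; please consult a health worker."

print(answer_offline("What are the danger signs in pregnancy?"))
```

The design choice is the transcript's own argument: when the target is "the 10 most important questions", coverage is bounded, so a tiny edge device can serve reliable answers with no connectivity at all.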

Alpan Rawal

We have a lot of data in India, but it tends to be noisy.

Antoine Tesniere

Yes, but we need to take the time to actually bring the data together, clean it, and prepare it for robust analysis. I know you are leapfrogging and going very fast, but by the time you scale, this will create real analytical power. And then we also need to understand how we can couple hardware with software and algorithms at reduced cost, so that they can scale very easily. Thank you.

Alpan Rawal

Great. That was fantastic insight. I'd now like to give the audience some time to ask questions of our panelists. Yes, please.

Audience

Thank you very much. I'm Irish Kumar from the CSC, working on solar energy, and I belong to Rajasthan. A question for the World Bank, with many thanks to the World Bank: in Rajasthan, 60% of the population is in rural areas and depends entirely on agriculture, and 40% of the population is youth. How can the Bank increase the capacity for AI applications among the youth as well as in the agricultural domain, so that there are economic changes: more productivity, more economic growth, more inclusion in the climate change and renewable energy domains?

Illango Patchamuthu

Thank you for that question, which I think is a very foundational one to ask any policymaker about what kind of AI strategy or implementation you want in any geography in the world. Obviously, the first thing is digital literacy. Second, you need skilling, so that everybody is upskilled and reskilled on AI-related capabilities. Third is improving STEM capability in schools and universities, so you create a future cadre of people who can work on these topics. And then there are the sectors you mentioned, which are our priorities: agriculture, health, and education. This is where we see the greatest potential for small AI. On Rajasthan specifically, I don't have any information right now, but I'm happy to share that with you later.

But certainly we are working across different states in India, as we do elsewhere in the world. And we do prioritize digital literacy, skilling, STEM, and applications in priority sectors like agriculture, health, and education.

Alpan Rawal

But having said that, I also want to make one point in response, about devices that can do computing: devices are expensive for the bottom 40%.

Audience

Yeah. Hi, my name is Selena. I'm the CEO and co-founder of Zindi. We run competitions to develop models, especially in Africa. I actually had a question for Wassim about the technical implications, the size implications, the practicality of using open-source, open-weight large language models to train very specific, domain-specific, under-resourced language models. How have you seen that play out?

Wassim Hamidouche

Yeah, I think what we have seen is that the selection of the base model is very important. Because the reality is that we cannot train an LLM from scratch, whether a small or large language model, for low-resource languages, since we don't have the 15 trillion tokens needed for training. So it is very important to select the best multilingual model, one with the right tokenizer, that can be adapted to many low-resource languages. This is very important. And then get the data that we need. What we have also seen is that monolingual data helps, but bilingual data can help too, and translating English into the low-resource language can also boost performance.

So in our paper, we provide all three of these recipes to follow to get the best boost in performance. What I would also like to add is that, for all these low-resource languages we have, text cannot solve them all. Many of these languages will be served by speech; it's very important. ASR models, speech-to-text, and text-to-speech will play a very large role in unlocking all these low-resource languages, in addition to LLMs that can operate in a low-resource language or in English.
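The three data recipes Hamidouche describes, monolingual target-language text, bilingual parallel pairs, and English data machine-translated into the target language, can be sketched as a corpus-mixing step. This is a minimal illustration of the idea, not the paper's actual pipeline; the function name, record format, and mixing weights are all assumptions.

```python
# Hypothetical sketch: combine the three data sources for adapting a
# multilingual base model to a low-resource language. Weights are
# illustrative sampling fractions, not values from the paper.
def build_adaptation_corpus(monolingual, bilingual_pairs, translated_english,
                            weights=(0.5, 0.3, 0.2)):
    """Mix the three data sources into one weighted fine-tuning corpus.

    monolingual: list of target-language sentences
    bilingual_pairs: list of (english, target) sentence pairs
    translated_english: target-language sentences produced by machine translation
    weights: fraction of training samples drawn from each source
    """
    corpus = []
    # Recipe 1: monolingual data teaches the target language directly.
    corpus += [{"text": s, "source": "monolingual", "weight": weights[0]}
               for s in monolingual]
    # Recipe 2: bilingual pairs let the model transfer knowledge from English.
    corpus += [{"text": en + "\t" + tgt, "source": "bilingual", "weight": weights[1]}
               for en, tgt in bilingual_pairs]
    # Recipe 3: translated English cheaply expands target-language coverage.
    corpus += [{"text": s, "source": "translated", "weight": weights[2]}
               for s in translated_english]
    return corpus

corpus = build_adaptation_corpus(
    monolingual=["habari ya dunia"],
    bilingual_pairs=[("hello world", "habari ya dunia")],
    translated_english=["habari"],
)
print(len(corpus))  # 3
```

In practice this corpus would feed a fine-tuning run on the selected multilingual base model, after first checking that the model's tokenizer segments the target language efficiently, which is the "right tokenizer" criterion from the answer above.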

Alpan Rawal

I think we have time for one really short question.

Audience

Hi, this is Dr. Ravi Singh. I'm from Miami. It was a great panel, with a lot of great insights. This is for Google, Microsoft, and the World Bank. Here's the scenario: if there's compliance across all of these platforms, which platform will win the AI wars?

Alpan Rawal

That's a loaded question. Anyone want to answer? I'm not. So, first of all, I think healthy competition is how we've been able to develop incredible technologies over time. So the competition is healthy, and this is great. I don't see it as a zero-sum game. There are too many people on the planet, and there are too many challenging, unique problems that need to be solved. So if we're making it useful and bringing joy and happiness for all, and I just love that that's in the theme here, then it's not necessarily about which platform wins. It's about what is relevant to the context of the end user. So it comes back to a more human, personal perspective.

That’s my thinking.

Illango Patchamuthu

First, three billion people are offline, so there is space for everybody to compete. Second, in the health sector alone, three and a half billion people don't have access to healthcare, so there is enough scope for all kinds of applications.

Wassim Hamidouche

I just want to add: many people have been asking me whether all these efforts we are making for language are enough to make these models as good as English. I would say maybe not, but without all these efforts we would never reach this objective. All these collective efforts will get us there.

Alpan Rawal

Thank you so much, everyone. I would now like to invite Neha Butts, Associate Director, Human Resources, to hand out the mementos to all our speakers, and we will take one group photo. Requesting the speakers to please gather for one group photo. Thank you so much, everyone, and thank you for joining.

Related Resources: Knowledge base sources related to the discussion topics (16)
Factual Notes: Claims verified against the Diplo knowledge base (4)
Confirmed (high)

“Alpan Rawal defined small AI as data‑efficient, inexpensive, edge‑ready models built for specific local contexts of underserved communities rather than for a generic Global‑North audience.”

The knowledge base describes small AI as practical, affordable applications that work despite limited connectivity, data, skills, and infrastructure, emphasizing local language access on basic devices, which aligns with Rawal’s definition [S33].

Confirmed (high)

“Google released an open voice dataset covering 27 African languages (out of ~2,000) and open‑weight models such as Gemma that can run on laptops and tablets.”

Google’s Gemma family is described as lightweight open models designed for both commercial and research use and capable of running on modest hardware such as laptops, confirming the claim about Gemma’s suitability for edge devices [S92] and its open-source nature [S40].

Confirmed (high)

“Microsoft’s AI for Good Lab pursues real‑world impact through open‑source, low‑resource solutions and a collaborative model with NGOs, academia and local governments.”

The knowledge base notes Microsoft’s strategy of making AI technology more accessible globally via collaboration with regional partners and open-source approaches, supporting the description of the AI for Good Lab’s collaborative, low-resource focus [S94].

Additional Context (medium)

“Google Research Africa’s “Africa for Africa” philosophy reflects the view that Africa prioritises useful AI over the most powerful AI.”

A statement in the knowledge base highlights that “Africa is not looking for the most powerful AI, it’s looking for the most useful one,” providing contextual support for the “Africa for Africa” problem-first mindset [S91].

External Sources (94)
S1
How Small AI Solutions Are Creating Big Social Change — Aisha Walcott-Bryant from Google Research Africa emphasized the importance of innovation driven by constraints. She high…
S2
How Small AI Solutions Are Creating Big Social Change — – Aisha Walcott-Bryant- Antoine Tesniere- Illango Patchamuthu – Aisha Walcott-Bryant- Wassim Hamidouche- Antoine Tesnie…
S3
https://dig.watch/event/india-ai-impact-summit-2026/how-small-ai-solutions-are-creating-big-social-change — Do you actually use them in Africa? Aisha Walcott-Bryant: Oh, yes. Yes, yes, yes, yes. I think that’s a great analogy….
S4
How Small AI Solutions Are Creating Big Social Change — Requesting you to please join the panel. Antoine, Antoine Tesniere, who’s a French professor of medicine and entrepreneu…
S5
https://app.faicon.ai/ai-impact-summit-2026/how-small-ai-solutions-are-creating-big-social-change — Requesting you to please join the panel. Antoine, Antoine Tesniere, who’s a French professor of medicine and entrepreneu…
S6
How Small AI Solutions Are Creating Big Social Change — – Zameer Brey- Antoine Tesniere
S7
Catalyzing Global Investment in AI for Health_ WHO Strategic Roundtable — – Ken Ichiro Natsume- Prokar Dasgupta- Zameer Brey- Alain Labrique – Zameer Brey- Alain Labrique – Zameer Brey- Payden…
S8
How Small AI Solutions Are Creating Big Social Change — – Aisha Walcott-Bryant- Antoine Tesniere- Illango Patchamuthu – Illango Patchamuthu- Antoine Tesniere- Wassim Hamidouch…
S9
How Small AI Solutions Are Creating Big Social Change — Speakers:Aisha Walcott-Bryant, Antoine Tesniere, Illango Patchamuthu Speakers:Aisha Walcott-Bryant, Illango Patchamuthu…
S10
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Giordano Albertazzi — -Announcer: Role/Title: Event announcer/moderator; Area of expertise: Not mentioned
S11
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Takahito Tokita Fujitsu — -Announcer: Role as event announcer/host, expertise/title not mentioned
S12
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Cristiano Amon — -Announcer: Role/Title: Event announcer/moderator; Areas of expertise: Not mentioned
S13
How Small AI Solutions Are Creating Big Social Change — Please, I would request you to take your seat on the panel. Wassim Hamidouche, who’s a principal research scientist at M…
S14
How Small AI Solutions Are Creating Big Social Change — -Antoine Tesniere- French professor of medicine and entrepreneur, specializing in health innovation and crisis managemen…
S15
https://app.faicon.ai/ai-impact-summit-2026/how-small-ai-solutions-are-creating-big-social-change — Requesting you to please join the panel. Antoine, Antoine Tesniere, who’s a French professor of medicine and entrepreneu…
S16
WS #280 the DNS Trust Horizon Safeguarding Digital Identity — – **Audience** – Individual from Senegal named Yuv (role/title not specified)
S17
Building the Workforce_ AI for Viksit Bharat 2047 — -Audience- Role/Title: Professor Charu from Indian Institute of Public Administration (one identified audience member), …
S18
Nri Collaborative Session Navigating Global Cyber Threats Via Local Practices — – **Audience** – Dr. Nazar (specific role/title not clearly mentioned)
S19
How Small AI Solutions Are Creating Big Social Change — – Aisha Walcott-Bryant- Wassim Hamidouche- Antoine Tesniere – Illango Patchamuthu- Antoine Tesniere- Wassim Hamidouche
S20
How Small AI Solutions Are Creating Big Social Change — Speakers:Aisha Walcott-Bryant, Illango Patchamuthu, Wassim Hamidouche Speakers:Aisha Walcott-Bryant, Wassim Hamidouche …
S21
Building the Workforce_ AI for Viksit Bharat 2047 — Thank you. Thank you, Mr. Sir. Namaskar. It’s my privilege to extend a very warm welcome to all of you on behalf of Team…
S22
Digital democracy and future realities | IGF 2023 WS #476 — Current regulations may not fully consider the practices and needs of these platforms, which can impede their ability to…
S23
Driving Social Good with AI_ Evaluation and Open Source at Scale — Benchmarking should start with identifying specific problems through red teaming rather than building generic benchmarks…
S24
Driving Social Good with AI_ Evaluation and Open Source at Scale — Summary:Both speakers agreed that effective benchmarking requires first identifying specific problems and contexts rathe…
S25
Catalyzing Global Investment in AI for Health_ WHO Strategic Roundtable — Fantastic. Professor. I think for me, the question of how do we move from promise to progress is underpinned by I think…
S26
AI: The Great Equaliser? — AI adoption and fixing digital adoption challenges should go hand in hand. Models using local languages can support comm…
S27
DC-DH: Health Digital Health & Selfcare – Can we replace Doctors in PHCs — Debbie Rogers: I definitely am a proponent of bringing technology into the mix to relieve some of the burden on the he…
S28
Keynote-Martin Schroeter — “Can it withstand cyber attacks and outages and data drift and regulatory scrutiny?”[26]. “Second, and more systemically…
S29
UNSC meeting: Artificial intelligence, peace and security — Brazil:Thank you, Mr. President, Mr. President, dear colleagues. I thank the Secretary General for his briefing today an…
S30
Global Perspectives on Openness and Trust in AI — Thank you for organizing this, colleagues. And good to be here and good to close out this exciting summit with you all. …
S31
Democratizing AI: Open foundations and shared resources for global impact — El-Assady emphasised the crucial distinction between “open source” and “open weight” models. Unlike models that merely s…
S32
Open Forum #27 Make Your AI Greener a Workshop on Sustainable AI Solutions — Balance between large foundational models and small specialized models
S33
How AI Drives Innovation and Economic Growth — Johannes advocates for small AI solutions that are practical, affordable, and locally relevant, specifically designed to…
S34
How AI Drives Innovation and Economic Growth — Appropriate technology solutions for developing countries Zutt advocates for a focus on ‘small AI’ rather than large-sc…
S35
Strategy outline — – 3.1 Encourage public-private sectors competition, promote entrepreneurship and innovation in the fields of…
S36
AUDA-NEPAD White Paper: Regulation and Responsible Adoption of AI in Africa Towards Achievement of AU Agenda 2063 — Examples of sectoral self-regulations are in the case of Mauritius in the perspective of increasing the capacity of exis…
S37
Internet standards and human rights | IGF 2023 WS #460 — Financial, cultural, and language barriers exist in standard setting bodies. This acknowledgement suggests a growing re…
S38
Open Forum #68 WSIS+20 Review and SDGs: A Collaborative Global Dialogue — This comment established a key theme that resonated throughout the discussion – the need for practical, systems-thinking…
S39
AI for Social Good Using Technology to Create Real-World Impact — This argument describes the World Bank’s role in scaling successful digital solutions from India to other developing cou…
S40
To share or not to share: the dilemma of open source vs. proprietary Large Language Models — By making this technology accessible to developers worldwide, Google aims to spark an innovation domino effect, as evide…
S41
Effective Governance for Open Digital Ecosystems | IGF 2023 Open Forum #65 — Sustainability, enforcement, and global open standards play crucial roles. Government partnership and cooperation are es…
S42
Democratizing AI Building Trustworthy Systems for Everyone — Lingua Africa initiative launched to collect local data with communities for spoken languages in partnership with Gates …
S43
WS #323 New Data Governance Models for African Nlp Ecosystems — Deshni Govender: Sure. I think it’s important also to point out that when we mention the concept of extractive practices…
S44
Inclusive AI For A Better World, Through Cross-Cultural And Multi-Generational Dialogue — Key to this trajectory are collaborative and inclusive policy governance, culturally attuned ethical frameworks, and bro…
S45
Regional Leaders Discuss AI-Ready Digital Infrastructure — A significant theme emerged around balancing AI’s potential for economic transformation with legitimate concerns about e…
S46
Sustainable development — AI can assist governments inidentifying poverty-stricken regions and facilitating globalefforts through the analysis of …
S47
Is AI a catalyst for development? — The Economist argues that AI has the potential to revolutionise developing countries by transforming their economies and…
S48
Catalyzing Global Investment in AI for Health_ WHO Strategic Roundtable — Brey uses the analogy of flight safety to argue that healthcare requires perfect accuracy from AI systems. He contends t…
S49
AI for Bharat’s Health_ Addressing a Billion Clinical Realities — Jigar Halani articulated the complexity of trust requirements across different user groups: while IT professionals might…
S50
Catalyzing Global Investment in AI for Health_ WHO Strategic Roundtable — We would fly. And then go, oh, right, or we’ll take some other means of transport. And the reason I’m emphasizing this i…
S51
Building the Workforce_ AI for Viksit Bharat 2047 — Thank you. Thank you, Mr. Sir. Namaskar. It’s my privilege to extend a very warm welcome to all of you on behalf of Team…
S52
Open Forum #27 Make Your AI Greener a Workshop on Sustainable AI Solutions — Ioanna Ntinou: I think that my question will be, as a researcher, if we focus so much on having smaller models, if we ac…
S53
Why smaller AI models may be the smarter choice — Most everyday jobs do not actually need the most powerful, cutting-edge AI models, argues Jovan Kurbalija in his blog po…
S54
How Small AI Solutions Are Creating Big Social Change — This comment directly addresses a critical perception problem in the field – the assumption that ‘small’ means ‘inferior…
S55
How Small AI Solutions Are Creating Big Social Change — Walcott-Bryant emphasized that with so many people on the planet and so many unique problems to solve, competition shoul…
S56
Building Trusted AI at Scale – Keynote Anne Bouverot — This comment shifts the discussion from acknowledging competition to actively proposing strategic alliances. It introduc…
S57
Is the AI bubble about to burst? Five causes and five scenarios — Centralised, closed platforms vs. decentralised, open ecosystems. Historically,open systems often win in the long run– …
S58
WS #219 Generative AI Llms in Content Moderation Rights Risks — ### The Low-Resource Language Crisis Dhanaraj Thakur: Yeah, great. Thank you, Marlena. And thanks for the invitation to…
S59
Open Forum #64 Local AI Policy Pathways for Sustainable Digital Economies — Abhishek Singh: Thank you for convening this and bringing this very, very important subject at FORC, like how do we bala…
S60
Democratizing AI Building Trustworthy Systems for Everyone — The participant points out that trustworthiness depends on system responsiveness, accessibility and reliability at the e…
S61
Building Population-Scale Digital Public Infrastructure for AI — Safety considerations were paramount, particularly in healthcare applications where lives are at stake. The panelists st…
S62
Experts propose frameworks for trustworthy AI systems — A coalition of researchers and experts hasidentifiedfuture research directions aimed at enhancing AI safety, robustness …
S63
Open Forum #15 Building Bridges for WSIS Plus a Multistakeholder Dialogue — Flavio Vagner: So thank you, Isabel. Thank you for having me in the session and for the question, yeah. But first of all…
S64
Ensuring Safe AI_ Monitoring Agents to Bridge the Global Assurance Gap — The connectivity challenge adds another layer of complexity, with 2.6 billion people remaining offline globally. Werner …
S65
AI is here. Are countries ready, or not? | IGF 2023 Open Forum #131 — Alain Ndayishimiye:Yes thank you moderator once again let me take the opportunity to greet everyone whatever you are in …
S66
WS #208 Democratising Access to AI with Open Source LLMs — The speaker suggests that public-private partnerships could help support open AI development in regions like Africa. Thi…
S67
Democratizing AI: Open foundations and shared resources for global impact — All speakers emphasize the critical importance of international collaboration in AI development, with Switzerland positi…
S68
How Small AI Solutions Are Creating Big Social Change — African languages. And we just released a data set of 21 now, 27 voice languages, given that Africa has 2,000 or so lan…
S69
Open Forum #27 Make Your AI Greener a Workshop on Sustainable AI Solutions — Balance between large foundational models and small specialized models
S70
Open Internet Inclusive AI Unlocking Innovation for All — Firstly, hi, everyone. Great to have all of you here. So I think the first thing is, look, Matthew, I don’t know whether…
S71
AI that serves communities, not the other way round — At theWSIS+20 High-Level Eventin Geneva, a vivid discussion unfolded around how countries in the Global South can build …
S72
https://app.faicon.ai/ai-impact-summit-2026/regional-leaders-discuss-ai-ready-digital-infrastructure — try to invest in the township planning and the implementation. Also, we can have a water supply road project that can be…
S73
Strategy outline — – 3.1 Encourage public-private sectors competition, promote entrepreneurship and innovation in the fields of…
S74
Strategy — ‘Foster the use of AI in vital developmental sectors using partnerships with local beneficiaries and local or foreign te…
S75
Artificial intelligence across industries — The analysis of various specific use cases pertaining to these four domains provides clear evidence that artificial inte…
S76
How Small AI Solutions Are Creating Big Social Change — Wassim Hamidouche outlines the systematic challenges facing low-resource language AI development. He explains that despi…
S77
Internet standards and human rights | IGF 2023 WS #460 — Financial, cultural, and language barriers exist in standard setting bodies. This acknowledgement suggests a growing re…
S78
WS #219 Generative AI Llms in Content Moderation Rights Risks — Dhanaraj Thakur provided extensive analysis of how language inequities create systematic discrimination in LLM-based con…
S79
How AI Drives Innovation and Economic Growth — Appropriate technology solutions for developing countries Zutt advocates for a focus on ‘small AI’ rather than large-sc…
S80
New Development Actors for the 21st Century / DAVOS 2025 — World Bank initiatives for education and job creation
S81
Democratizing AI Building Trustworthy Systems for Everyone — Lingua Africa initiative launched to collect local data with communities for spoken languages in partnership with Gates …
S82
The rise of large language models and the question of ownership — What are large language models? Large language models (LLMs) are advanced AI systems that can understand and generate va…
S83
To share or not to share: the dilemma of open source vs. proprietary Large Language Models — By making this technology accessible to developers worldwide, Google aims to spark an innovation domino effect, as evide…
S84
Panel Discussion AI & Cybersecurity _ India AI Impact Summit — The moderator opens, transitions, and closes the session, guaranteeing that speakers are introduced, the discussion proc…
S85
AI for Good Technology That Empowers People — This experience led Mahalingam to develop innovative edge computing solutions using optimised models and local processin…
S86
Safe Smart Cities and Climate Frustration — Johan Stander: Thank you. Thank you for the opportunity, Thomas. Thank you. It’s great to be here and to represent the W…
S87
29, filed Jan. 22, 2010, at 9-10. — of Primary Health Care Management Information System, Scope Repository retrieved via the HRSA Geospatial Data Warehouse’…
S88
Biology as Consumer Technology — Additionally, the analysis underscores the significance of technology and partnerships in agriculture. Abbott notes that…
S89
State of Play: AI Governance / DAVOS 2025 — Krishna emphasizes the need to drive down the cost of AI technology to make it more inclusive and accessible globally. H…
S90
Host Country Open Stage — D Silva emphasized the transformative potential of sustainability reporting, stating that “transparency is not just abou…
S91
WS #214 AI Readiness in Africa in a Shifting Geopolitical Landscape — Lacina Kone’s observation that “Africa is not looking for the most powerful AI, it’s looking for the most useful one” re…
S92
Google launches Gemma: New family of lightweight open models — Googlehas launchedGemma, a new family of lightweight open models that includes Gemma 2B and Gemma 7B. These models were …
S93
AI for Good Technology That Empowers People — Kumar advocates for making AI applications available as open source through platforms like ITU’s AI for Good, enabling t…
S94
Discussion Report: AI Implementation and Global Accessibility — Chadha describes Microsoft’s infrastructure development strategy to make AI technology more accessible globally. He expl…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Alpan Rawal
2 arguments, 127 words per minute, 1158 words, 546 seconds
Argument 1
Small AI as data‑efficient, edge‑ready, context‑specific models (Alpan Rawal)
EXPLANATION
Alpan defines small AI as models that require minimal data, can run on low‑cost edge devices, and are tailored to the specific needs of local communities. He emphasizes that such models should be affordable, lightweight, and directly relevant to underserved populations.
EVIDENCE
In his opening remarks Alpan explains that small AI reflects the ethos of Wadhwani AI’s work by making models that are data-efficient, cheap to run, able to sit on the edge, and meaningful to the communities they serve, especially in rural India [15-19].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Aisha’s partnership-led voice dataset and edge-ready models illustrate data-efficient, low-cost AI for rural communities, and the panel highlights edge deployment for small models [S2][S30][S31].
MAJOR DISCUSSION POINT
Definition of Small AI
AGREED WITH
Aisha Walcott‑Bryant, Zameer Brey, Antoine Tesniere, Illango Patchamuthu
Argument 2
Healthy competition among platforms; relevance determined by local context rather than a single “winner” (Alpan Rawal)
EXPLANATION
Alpan argues that competition between AI platforms is beneficial and should not be viewed as a zero‑sum game. The value of any platform depends on how well it serves the specific context and needs of end users.
EVIDENCE
He states that healthy competition has driven technological progress, that there are many people and problems to solve, and that relevance to the end-user’s context matters more than which platform “wins” [386-393].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The discussion on competition and the need for multiple choices aligns with observations on platform competition and user preference in digital ecosystems [S22].
MAJOR DISCUSSION POINT
Platform competition
Zameer Brey
4 arguments, 123 words per minute, 795 words, 385 seconds
Argument 1
AI must fit local use‑cases rather than generic benchmarks (Zameer Brey)
EXPLANATION
Zameer stresses that AI solutions should be evaluated based on how they work in specific local settings, not just against standard benchmarks. He uses analogies to illustrate the need for context‑aware design.
EVIDENCE
He notes that the focus should be on whether AI works for a district hospital in Telangana, a farmer in Zambia, or a classroom in rural Senegal, rather than on abstract model performance, and compares designing a solution for Delhi traffic to building a smaller, faster, cost-effective tool instead of a large airplane [29-34].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Brey stresses evaluating AI in specific settings rather than benchmarks, and experts note that effective benchmarking should start with problem identification [S2][S23][S24].
MAJOR DISCUSSION POINT
Local relevance of AI
Argument 2
Need for verifiable, zero‑error AI to support community health workers (Zameer Brey)
EXPLANATION
Zameer argues that AI models used in critical health contexts must be virtually error‑free and auditable, turning black‑box systems into transparent, traceable tools. This reliability is essential for community health workers making life‑saving decisions.
EVIDENCE
He describes a scenario where a pregnant mother died because a community health worker lacked a reliable AI tool on a low-cost smartphone; he suggests that a zero-error, verifiable model could have prevented the tragedy [166-174].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Brey’s call for zero-error, auditable AI for health workers is supported by concerns about diagnostic accuracy and the need for trustworthy systems in low-resource settings [S26][S1][S28].
MAJOR DISCUSSION POINT
Zero‑error, auditable AI
AGREED WITH
Antoine Tesniere, Illango Patchamuthu
Argument 3
Small model on low‑cost smartphones to aid community health workers in maternal care (Zameer Brey)
EXPLANATION
Zameer highlights the potential of deploying compact AI models on inexpensive smartphones to assist frontline health workers in diagnosing and managing maternal health risks. Such tools can operate with limited connectivity and provide timely decision support.
EVIDENCE
He recounts the story of a first-time mother whose hypertension was missed, noting that a small AI model on a low-cost smartphone with patchy internet could have offered critical guidance at point-of-care [166-174].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Brey illustrates a low-cost smartphone with a small model for community health workers, demonstrating feasibility of such deployments [S2][S26].
MAJOR DISCUSSION POINT
Mobile health AI
AGREED WITH
Alpan Rawal, Aisha Walcott‑Bryant, Antoine Tesniere, Illango Patchamuthu
Argument 4
Requirement for zero‑error, auditable AI to avoid critical failures (Zameer Brey)
EXPLANATION
Zameer emphasizes that AI systems, especially in health, must achieve near‑perfect accuracy and provide transparent reasoning paths to prevent catastrophic errors. He calls for a shift from opaque black‑boxes to “glass‑box” models that can be audited.
EVIDENCE
He discusses the need for models with zero error, describing a verifiable AI that exposes its logic chain, allowing outputs to be tracked and audited for repeatability [160-165].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The requirement for zero-error, auditable AI echoes the same evidence on safety and trust in health applications [S26][S28].
MAJOR DISCUSSION POINT
Auditable AI
Aisha Walcott‑Bryant
4 arguments · 0 words per minute · 0 words · 1 second
Argument 1
Problem‑first, partnership‑driven approach to building small AI solutions (Aisha Walcott‑Bryant)
EXPLANATION
Aisha explains that Google Research Africa starts by identifying concrete problems before applying AI, and then works through partnerships to co‑create solutions. This ensures relevance and sustainability of the technology.
EVIDENCE
She states that their work is “problem first,” citing the example of building a simple “red button” when appropriate, and emphasizes thoughtful problem selection before bringing AI into play [44-48]. She also mentions collaborations with African universities and partners to develop datasets and solutions [60-64].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Aisha describes a problem-first, partnership-driven methodology, emphasizing co-creation with local partners, which matches the partnership-led approach highlighted in the sources [S2][S5][S30].
MAJOR DISCUSSION POINT
Problem‑first methodology
AGREED WITH
Illango Patchamuthu, Wassim Hamidouche
Argument 2
Creation of African voice datasets and open‑weight models for edge deployment (Aisha Walcott‑Bryant)
EXPLANATION
Aisha describes the development of a multilingual voice dataset covering 27 African languages and the release of open‑weight models that can run on laptops and tablets. These resources aim to improve accessibility in rural villages.
EVIDENCE
She notes the release of a voice dataset covering 27 of Africa’s roughly 2,000 languages, highlighting its partnership-led nature and focus on accessibility for rural villages [61-63]. She also references open-weight models (e.g., Gemma) that can run on edge devices such as laptops and tablets [144-145].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The release of a 27-language African voice dataset and open-weight models for edge devices is documented in the sources [S2][S31].
MAJOR DISCUSSION POINT
African language data and edge models
AGREED WITH
Wassim Hamidouche
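The feasibility of running open-weight models on laptops and tablets, as Aisha describes, largely comes down to memory arithmetic. A minimal back-of-envelope sketch; the 2-billion-parameter size and the precision choices are illustrative assumptions, not figures from the session:

```python
# Back-of-envelope memory footprint for small open-weight models.
# Parameter count and bit widths are illustrative assumptions.

def model_memory_gb(num_params: float, bits_per_weight: int) -> float:
    """Approximate weight-storage footprint in gigabytes."""
    return num_params * bits_per_weight / 8 / 1e9

params = 2e9  # a ~2B-parameter model, the size class of small open-weight models

fp16 = model_memory_gb(params, 16)  # half-precision weights
q4 = model_memory_gb(params, 4)     # 4-bit quantized weights

print(f"fp16: {fp16:.1f} GB, 4-bit: {q4:.1f} GB")  # fp16: 4.0 GB, 4-bit: 1.0 GB
```

At 4-bit precision the weights of a ~2B-parameter model occupy about 1 GB, which is why models of this class can run on commodity laptops, tablets, and even phones rather than requiring data-center hardware.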
Argument 3
Accurate weather forecasting for rain‑fed agriculture; multilingual voice dataset for accessibility (Aisha Walcott‑Bryant)
EXPLANATION
Aisha outlines two concrete projects: improving weather forecasts to support rain‑fed farmers and creating multilingual voice datasets to enable communication in remote areas. Both initiatives address critical infrastructure gaps in Africa.
EVIDENCE
She details the launch of a continent-wide weather-forecasting service, noting the scarcity of radar stations in Africa (only 37 versus 300 in North America/Europe) and its importance for rain-fed agriculture [50-58]. She also mentions the African voice dataset of 27 languages to improve accessibility in rural villages [61-63].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The continent-wide weather-forecasting service and multilingual voice dataset are detailed as examples of context-specific solutions for rain-fed agriculture [S30][S2].
MAJOR DISCUSSION POINT
Weather forecasting & language accessibility
Argument 4
Partnership‑led data collection and open models to enable local innovation (Aisha Walcott‑Bryant)
EXPLANATION
Aisha stresses that collaborating with local partners to collect high‑quality data and releasing open models empowers African innovators to build their own solutions. This approach fosters a sustainable AI ecosystem on the continent.
EVIDENCE
She describes working with partners such as Makerere University and Digital Umuganda to collect data, and emphasizes that making data open and available enables local ecosystems to develop solutions using both small and large models [60-64][144-145].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Collaboration with local partners for data collection and open model release is emphasized in the partnership-led initiatives described [S2][S5][S30][S31].
MAJOR DISCUSSION POINT
Collaborative data and model sharing
Wassim Hamidouche
6 arguments · 152 words per minute · 1552 words · 609 seconds
Argument 1
Open‑source, domain‑specific small AI for biodiversity and wildfire monitoring (Wassim Hamidouche)
EXPLANATION
Wassim presents two open‑source projects—Sparrow for biodiversity monitoring and ALERTCalifornia for wildfire detection—demonstrating how small, domain‑specific AI can be deployed globally to address environmental challenges.
EVIDENCE
He describes Sparrow, an AI-powered, solar-powered acoustic and remote observation system that tracks animal species and transmits data via satellite, already deployed in countries such as Colombia, Peru, the United States, and Tanzania [82-88]. He also outlines ALERTCalifornia, a network of 1,300 cameras with AI tools that detect fires early to enable rapid emergency response [89-97].
MAJOR DISCUSSION POINT
Domain‑specific environmental AI
Argument 2
Lack of data, benchmarks, performance gap, and safety issues for low‑resource languages (Wassim Hamidouche)
EXPLANATION
Wassim highlights four major challenges for low‑resource languages: insufficient training data, few or no evaluation benchmarks, a performance gap compared to high‑resource languages, and safety/alignment concerns that are often only addressed in English.
EVIDENCE
He notes that over 60% of internet data is in English, leaving low-resource languages under-represented; that only about 300 languages have at least one benchmark; that performance gaps persist; and that safety alignment is done primarily in English, making safety in low-resource languages a significant issue [185-200].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The limited coverage of only 27 out of ~2,000 African languages underscores the data scarcity and benchmark gaps for low-resource languages [S2].
MAJOR DISCUSSION POINT
Challenges for low‑resource language AI
AGREED WITH
Aisha Walcott‑Bryant
Argument 3
Ensuring model safety and alignment in low‑resource language contexts (Wassim Hamidouche)
EXPLANATION
Wassim stresses the importance of conducting safety evaluations and reinforcement‑learning alignment in each target language, not just in English, to prevent harmful outputs in low‑resource contexts.
EVIDENCE
He explains that safety alignments are usually performed in English and high-resource languages, and that for low-resource languages these safety measures must be evaluated and aligned directly in the target language [198-203].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Safety and alignment challenges for low-resource languages are highlighted, with calls for evaluations in target languages and building confidence in AI systems [S26][S28].
MAJOR DISCUSSION POINT
Safety and alignment for low‑resource LLMs
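The point that safety must be measured per language, not just in English, can be made concrete with a toy evaluation harness. A minimal sketch, assuming a labeled set of model responses per language; the refusal markers and example responses are invented for illustration:

```python
# Per-language safety check: compute the refusal rate on harmful prompts
# separately for each language, instead of reporting one English-only score.
# Markers and responses below are invented examples, not real model output.

REFUSAL_MARKERS = {
    "en": "i can't help",      # English refusal phrase (illustrative)
    "sw": "siwezi kusaidia",   # Swahili "I can't help" (illustrative)
}

def refusal_rate(responses, lang):
    """Fraction of responses containing the language's refusal marker."""
    marker = REFUSAL_MARKERS[lang]
    return sum(marker in r.lower() for r in responses) / len(responses)

responses_by_lang = {
    "en": ["I can't help with that.", "I can't help with that request."],
    "sw": ["Siwezi kusaidia na ombi hilo.", "Hapa ndivyo unavyoweza..."],
}

for lang, responses in responses_by_lang.items():
    print(lang, refusal_rate(responses, lang))
```

In this toy example the model refuses every harmful prompt in English but only half of them in Swahili: the kind of gap Wassim warns about when alignment and safety evaluation happen only in high-resource languages.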
Argument 4
Lingua Europe/Africa initiatives funding language data collection and supporting community hubs (Wassim Hamidouche)
EXPLANATION
Wassim describes the Lingua Europe project, which funded data collection for ten European languages, and its extension, Lingua Africa, which will allocate $5.5 million to support African language data collection in partnership with the Gates Foundation and other entities.
EVIDENCE
He mentions the launch of Lingua Europe, the selection of ten language projects, and the announcement of Lingua Africa with a $5.5 million budget to fund data collection for African languages, led by the Masakhane African Languages Hub [218-229].
MAJOR DISCUSSION POINT
Funding language data initiatives
AGREED WITH
Aisha Walcott‑Bryant, Illango Patchamuthu
Argument 5
Selecting appropriate multilingual base models and leveraging bilingual data for low‑resource LLMs (Wassim Hamidouche)
EXPLANATION
Wassim argues that choosing the right multilingual base model and augmenting it with monolingual, bilingual, and translated data are key strategies to improve performance for low‑resource languages without training from scratch.
EVIDENCE
He states that selecting the best multilingual model with a suitable tokenizer is crucial, and that monolingual, bilingual, and translated data all help boost performance, as outlined in their paper’s recommendations [367-375].
MAJOR DISCUSSION POINT
Base model selection and data strategies
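The data strategy Wassim outlines, mixing monolingual, bilingual, and translated data on top of a multilingual base model, can be sketched as a weighted corpus sampler. The sampling weights and placeholder corpora here are illustrative assumptions, not the recipe from the paper he cites:

```python
# Weighted sampling over monolingual, bilingual, and translated corpora to
# build a fine-tuning mix for a low-resource language. Weights and corpus
# contents are illustrative placeholders.
import random

def build_mixture(corpora, weights, n_samples, seed=0):
    """Draw n_samples documents: pick a source corpus by weight, then a
    document uniformly at random within it."""
    rng = random.Random(seed)
    names = list(corpora)
    picks = rng.choices(names, weights=[weights[n] for n in names], k=n_samples)
    return [(name, rng.choice(corpora[name])) for name in picks]

corpora = {
    "monolingual": ["target-language web text ...", "target-language news ..."],
    "bilingual":   ["English sentence ||| target-language sentence"],
    "translated":  ["machine-translated document ..."],
}
weights = {"monolingual": 0.6, "bilingual": 0.25, "translated": 0.15}

mix = build_mixture(corpora, weights, n_samples=1000)
```

A common design choice is to weight monolingual target-language text most heavily and treat machine-translated data as a supplement, since translated text can carry translation artifacts into the adapted model.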
Argument 6
Collective effort needed to bring low‑resource language models up to parity with English (Wassim Hamidouche)
EXPLANATION
Wassim acknowledges that while achieving English‑level performance for low‑resource languages may be unrealistic now, coordinated global efforts are essential to progressively close the gap.
EVIDENCE
He remarks that many people ask if the current efforts are enough to match English performance, and he responds that without collective work the objective will never be reached, emphasizing the need for joint efforts [396-398].
MAJOR DISCUSSION POINT
Collaboration for language parity
Antoine Tesniere
4 arguments · 151 words per minute · 1246 words · 492 seconds
Argument 1
Validated small AI models already deployed in healthcare (Antoine Tesniere)
EXPLANATION
Antoine points out that small AI tools have long been used in European healthcare, delivering reliable image and pattern analysis for radiology, dermatology, and ophthalmology, often operating offline on modest hardware.
EVIDENCE
He notes that radiology (e.g., chest X-ray, fracture detection) and other specialties already rely on small AI models that can be deployed on small computers and run offline, with validated performance in France and Europe [102-108].
MAJOR DISCUSSION POINT
Existing healthcare AI tools
Argument 2
Data‑efficient learning required due to scarce healthcare data (Antoine Tesniere)
EXPLANATION
Antoine explains that healthcare data is often limited and siloed, requiring algorithms that can learn effectively from small datasets and operate on low‑power devices.
EVIDENCE
He describes the scarcity of data in healthcare, the need for data-efficient algorithms that work on small data sets, and the necessity for tools that can run on simple computers or smartphones in remote settings [280-286].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
World Bank findings on low diagnostic accuracy illustrate the scarcity of high-quality health data, reinforcing the need for data-efficient learning methods [S1].
MAJOR DISCUSSION POINT
Data‑efficient healthcare AI
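Antoine's call for algorithms that learn from small datasets also changes how models are evaluated: with only a handful of labeled cases, leave-one-out cross-validation uses every example for both training and testing. A minimal sketch with a toy nearest-neighbour classifier; the vital-sign features, labels, and cases are invented for illustration:

```python
# Leave-one-out evaluation for tiny datasets: hold out each example once,
# train on the rest, and average the results. Illustrated with a toy
# 1-nearest-neighbour classifier on invented vital-sign data.

def nearest_label(train, query):
    """Label of the closest (features, label) pair by squared distance."""
    def dist(x, y):
        return sum((a - b) ** 2 for a, b in zip(x, y))
    return min(train, key=lambda pair: dist(pair[0], query))[1]

def leave_one_out_accuracy(data):
    hits = 0
    for i, (x, y) in enumerate(data):
        train = data[:i] + data[i + 1:]  # all cases except the held-out one
        hits += nearest_label(train, x) == y
    return hits / len(data)

# Toy dataset: (systolic BP, heart rate) -> at-risk flag (invented values)
data = [((150, 95), 1), ((145, 90), 1), ((120, 70), 0), ((115, 72), 0)]
print(leave_one_out_accuracy(data))
```

Leave-one-out is too expensive for large corpora, but it is a natural fit when, as Antoine describes, healthcare data is scarce and siloed and every labeled case must be used.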
Argument 3
Radiology, dermatology, ophthalmology AI tools running on small computers; offline capability (Antoine Tesniere)
EXPLANATION
Antoine highlights that AI models for image analysis in radiology, dermatology, and ophthalmology can be run on compact hardware without internet connectivity, providing decision support while keeping the final judgment with clinicians.
EVIDENCE
He mentions radiology algorithms operating on small machines and stresses that AI provides information but does not make decisions, emphasizing offline capability and human oversight [304-306][333-339].
MAJOR DISCUSSION POINT
Edge AI in medical imaging
Argument 4
Offline, low‑power edge devices needed; AI should augment, not replace, human decisions (Antoine Tesniere)
EXPLANATION
Antoine asserts that AI systems must be designed to run on low‑power edge devices and should serve as decision‑support tools rather than autonomous decision makers, preserving clinician authority.
EVIDENCE
He states that AI should provide information, not make decisions, and that models must run on simple computers or smartphones, especially in remote areas, to augment human expertise [304-306][333-339].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Recommendations for low-power edge devices and human-in-the-loop AI to build trust are discussed in the context of secure, reliable ICT use [S28][S26].
MAJOR DISCUSSION POINT
Human‑in‑the‑loop AI
AGREED WITH
Zameer Brey, Illango Patchamuthu
Illango Patchamuthu
6 arguments · 159 words per minute · 1217 words · 459 seconds
Argument 1
Small AI is not inferior; it serves development goals and poverty reduction (Illango Patchamuthu)
EXPLANATION
Illango counters the perception that small AI is second‑class, arguing that it can effectively address development challenges, improve livelihoods, and accelerate outcomes in health, education, and agriculture.
EVIDENCE
He explicitly states that small AI is not inferior, can solve problems, and can fast-track development outcomes, referencing the shortcomings of past development goals and the opportunity AI presents [120-124].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The impact of small AI solutions on social change and community empowerment is highlighted in the discussion of partnership-led projects [S2].
MAJOR DISCUSSION POINT
Value of small AI
AGREED WITH
Alpan Rawal, Aisha Walcott‑Bryant, Zameer Brey, Antoine Tesniere
Argument 2
Trustworthy, reliable models essential for community adoption (Illango Patchamuthu)
EXPLANATION
Illango emphasizes that for AI to be accepted in communities, models must be reliable, avoid hallucinations, and consistently deliver accurate results, otherwise trust erodes.
EVIDENCE
He warns that community trust is lost if models fail, stressing the need for trustworthy, reliable solutions that do not hallucinate or cause additional challenges for farmers [263-267].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Building confidence and security in AI deployments, especially for community adoption, is emphasized in the sources on trustworthy systems [S28][S26].
MAJOR DISCUSSION POINT
Building community trust
AGREED WITH
Zameer Brey, Antoine Tesniere
Argument 3
Scaling pilots with clear KPIs; fostering private‑sector ecosystems; digital literacy and skilling (Illango Patchamuthu)
EXPLANATION
Illango outlines a strategy to scale AI pilots by defining measurable KPIs, encouraging private‑sector participation, and investing in digital literacy and skills development to ensure sustainable impact.
EVIDENCE
He discusses the need to keep AI simple, replicate successful pilots with clear KPIs, develop private-sector ecosystems, and promote digital literacy and skilling as part of the World Bank’s agenda [112-119][243-250].
MAJOR DISCUSSION POINT
Scaling and ecosystem building
Argument 4
AI as a catalyst for job creation and economic development, not just automation (Illango Patchamuthu)
EXPLANATION
Illango argues that AI should be leveraged to create and enhance jobs rather than merely replace them, positioning AI as a driver of economic growth and employment in emerging economies.
EVIDENCE
He states that the North Star is job creation, and that AI must support the creation and enhancement of jobs, distinguishing it from automation [243-246].
MAJOR DISCUSSION POINT
AI for job creation
AGREED WITH
Alpan Rawal, Zameer Brey, Wassim Hamidouche
Argument 5
Large offline population leaves room for multiple AI platforms to coexist (Illango Patchamuthu)
EXPLANATION
Illango points out that billions of people remain offline, providing ample space for various AI platforms to serve different needs without a single winner dominating the market.
EVIDENCE
He notes that three billion people are offline and that three and a half billion lack healthcare access, indicating ample opportunity for diverse AI solutions [394-395].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The large offline population and the need for multiple platform choices are reflected in analyses of competition and user preference in digital ecosystems [S22].
MAJOR DISCUSSION POINT
Market space for multiple AI platforms
Argument 6
AI use‑case repository covering health, education, agriculture, and job creation (Illango Patchamuthu)
EXPLANATION
Illango describes a publicly accessible repository of about 100 AI use cases across key sectors, intended to showcase how AI can improve service delivery, productivity, and household incomes, thereby supporting development goals.
EVIDENCE
He mentions the launch of a small AI use-case repository with roughly 100 cases spanning health, education, agriculture, and job creation, detailing its role in illustrating AI’s maximum advantage for communities [258-262].
MAJOR DISCUSSION POINT
AI use‑case repository
AGREED WITH
Aisha Walcott‑Bryant, Wassim Hamidouche
Announcer
1 argument · 84 words per minute · 224 words · 158 seconds
Argument 1
A multi‑sectoral panel composition is essential for addressing small AI challenges across regions and domains.
EXPLANATION
The Announcer highlights that the session brings together experts from Microsoft, the World Bank, Google Research Africa, and health innovation, emphasizing the need for diverse perspectives to tackle small AI for social impact.
EVIDENCE
The introduction lists the panelists: a principal research scientist from Microsoft’s AI for Good Lab, a World Bank Director of Strategy and Operations, a senior staff research scientist heading Google Research Africa, and a French professor of medicine and entrepreneur, followed by the moderator from Wadhwani AI [1-14].
MAJOR DISCUSSION POINT
Importance of diverse stakeholder representation
Audience
2 arguments · 106 words per minute · 216 words · 121 seconds
Argument 1
Building AI capacity for youth in agriculture is critical to improve productivity, economic inclusion, and climate resilience.
EXPLANATION
The audience member stresses that AI applications must be scaled to empower young people in the agricultural sector, thereby enhancing livelihoods and addressing climate‑change challenges.
EVIDENCE
The audience asks how the World Bank can increase AI capacity for youth and the agricultural domain to drive economic productivity, inclusion, and climate-change mitigation [348-351].
MAJOR DISCUSSION POINT
AI for youth empowerment and sustainable agriculture
Argument 2
The AI ecosystem should avoid a winner‑takes‑all narrative because billions of people remain offline, allowing multiple platforms to coexist.
EXPLANATION
The audience questions which platform will “win” the AI wars, implicitly suggesting that competition should not be framed as a zero‑sum game given the large offline population.
EVIDENCE
The audience poses a scenario asking which platform will win if compliance across platforms is achieved, highlighting concerns about platform competition [380-382].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The audience’s concern about a winner-takes-all scenario aligns with observations that promoting competition and multiple choices is essential for inclusive AI adoption [S22].
MAJOR DISCUSSION POINT
Platform competition and inclusivity
Aisha Walcott-Bryant
1 argument · 167 words per minute · 1139 words · 408 seconds
Argument 1
Establishing research hubs in Ghana and Kenya creates regional centres that build local AI talent and ensure solutions reflect East and West African contexts.
EXPLANATION
Aisha explains that Google Research Africa operates two sites—one in Ghana and one in Kenya—providing geographic coverage that supports region‑specific research, data collection, and capacity building.
EVIDENCE
She states, “We have two sites, one in Ghana and one in Kenya, so representing East and West” [38-40].
MAJOR DISCUSSION POINT
Regional research infrastructure for AI
Agreements
Agreement Points
Small AI should be data‑efficient, edge‑ready, and context‑specific
Speakers: Alpan Rawal, Aisha Walcott‑Bryant, Zameer Brey, Antoine Tesniere, Illango Patchamuthu
Small AI as data‑efficient, edge‑ready, context‑specific models (Alpan Rawal)
Creation of African voice datasets and open‑weight models for edge deployment (Aisha Walcott‑Bryant)
Small model on low‑cost smartphones to aid community health workers in maternal care (Zameer Brey)
Offline, low‑power edge devices needed; AI should augment, not replace, human decisions (Antoine Tesniere)
Small AI is not inferior; it serves development goals and poverty reduction (Illango Patchamuthu)
All speakers stress that small AI must use minimal data, run on inexpensive edge devices, and be tailored to the specific needs of local communities, from health workers in rural clinics to agricultural users, rather than relying on generic, large-scale models. [15-19][60-64][144-145][166-174][333-339][120-124]
POLICY CONTEXT (KNOWLEDGE BASE)
This view echoes calls for sustainable, edge-focused AI that serves local needs, as discussed in workshops on greener AI and the importance of edge reliability for health and agriculture [S52][S53][S60].
Collaboration, open‑source, and partnership‑driven development of AI resources
Speakers: Aisha Walcott‑Bryant, Illango Patchamuthu, Wassim Hamidouche
Problem‑first, partnership‑driven approach to building small AI solutions (Aisha Walcott‑Bryant)
AI use‑case repository covering health, education, agriculture, and job creation (Illango Patchamuthu)
Lingua Europe/Africa initiatives funding language data collection and supporting community hubs (Wassim Hamidouche)
The panelists agree that co-creation with local partners, open-source releases, and shared repositories are essential to build relevant AI tools and to scale them sustainably. Aisha highlights partnership-led data collection, Illango describes a public use-case repository and ecosystem building, and Wassim outlines funded language-data initiatives. [60-64][144-145][258-262][243-250][218-229]
POLICY CONTEXT (KNOWLEDGE BASE)
The emphasis on open-source collaboration aligns with inclusive AI policy frameworks and multistakeholder coalitions highlighted in inclusive AI governance reports and the ‘coalitions of the willing’ concept [S44][S56][S66][S67].
Trustworthiness, reliability, and human‑in‑the‑loop design are crucial for AI, especially in health
Speakers: Zameer Brey, Antoine Tesniere, Illango Patchamuthu
Need for verifiable, zero‑error AI to support community health workers (Zameer Brey)
Offline, low‑power edge devices needed; AI should augment, not replace, human decisions (Antoine Tesniere)
Trustworthy, reliable models essential for community adoption (Illango Patchamuthu)
All three emphasize that AI systems must be highly reliable, transparent, and serve as decision-support rather than autonomous decision-makers, to maintain trust in critical sectors like healthcare. Zameer calls for zero-error, auditable models; Antoine stresses human-in-the-loop and offline capability; Illango warns that failures erode community trust. [160-165][166-174][304-306][333-339][263-267]
POLICY CONTEXT (KNOWLEDGE BASE)
Health-focused AI standards calling for zero-error, auditable, and human-in-the-loop systems were articulated at WHO roundtables and reinforced by trustworthiness guidelines for high-stakes public services [S48][S49][S50][S61][S62][S60].
Addressing data scarcity and low‑resource language challenges through dedicated datasets and models
Speakers: Wassim Hamidouche, Aisha Walcott‑Bryant
Lack of data, benchmarks, performance gap, and safety issues for low‑resource languages (Wassim Hamidouche)
Creation of African voice datasets and open‑weight models for edge deployment (Aisha Walcott‑Bryant)
Both speakers note that many languages lack sufficient training data and evaluation benchmarks, and that building high-quality, locally-collected datasets (e.g., the 27-language voice set) is a key step toward usable low-resource language models. [185-200][61-63][144-145]
POLICY CONTEXT (KNOWLEDGE BASE)
The low-resource language crisis and the need for dedicated datasets have been highlighted in policy dialogues on language equity and connectivity for billions offline [S58][S64][S55].
AI as a catalyst for development goals, poverty reduction, and job creation
Speakers: Illango Patchamuthu, Alpan Rawal, Zameer Brey, Wassim Hamidouche
AI as a catalyst for job creation and economic development, not just automation (Illango Patchamuthu)
Small AI as data‑efficient, edge‑ready, context‑specific models … meaningful to underserved communities (Alpan Rawal)
AI must reduce inequality … (Zameer Brey)
AI for Good labs … solving real world problems with societal impact (Wassim Hamidouche)
The panel concurs that AI, particularly small, context-aware models, should be leveraged to advance social and economic development, reduce inequality, and generate employment rather than merely automate tasks. Illango frames AI as job-creation, Alpan defines its relevance to underserved populations, Zameer stresses inequality reduction, and Wassim describes AI-for-Good initiatives. [243-246][15-19][29-33][71-79]
POLICY CONTEXT (KNOWLEDGE BASE)
Multiple reports link AI to Sustainable Development Goals, poverty mapping, and economic transformation while noting potential job displacement concerns [S44][S45][S46][S47][S55].
Similar Viewpoints
Both argue that AI platform competition should not be framed as a zero‑sum ‘winner‑takes‑all’ scenario; the vast offline population ensures space for many solutions tailored to local contexts. [386-393][394-395]
Speakers: Alpan Rawal, Illango Patchamuthu
Healthy competition among platforms; relevance determined by local context rather than a single “winner” (Alpan Rawal)
Large offline population leaves room for multiple AI platforms to coexist (Illango Patchamuthu)
Both emphasize open, partnership‑driven data initiatives to create resources for low‑resource languages, highlighting the need for community involvement and funding to build usable AI models. [60-64][144-145][218-229]
Speakers: Aisha Walcott‑Bryant, Wassim Hamidouche
Problem‑first, partnership‑driven approach to building small AI solutions (Aisha Walcott‑Bryant)
Lingua Europe/Africa initiatives funding language data collection and supporting community hubs (Wassim Hamidouche)
Both stress that health‑focused AI must be highly reliable, run on low‑cost offline devices, and serve as decision‑support tools rather than autonomous systems. [160-165][166-174][304-306][333-339]
Speakers: Zameer Brey, Antoine Tesniere
Need for verifiable, zero‑error AI to support community health workers (Zameer Brey)
Offline, low‑power edge devices needed; AI should augment, not replace, human decisions (Antoine Tesniere)
Both view small AI as a purposeful tool for development and poverty alleviation, emphasizing its suitability for underserved populations. [120-124][15-19]
Speakers: Illango Patchamuthu, Alpan Rawal
Small AI is not inferior; it serves development goals and poverty reduction (Illango Patchamuthu)
Small AI as data‑efficient, edge‑ready, context‑specific models … meaningful to underserved communities (Alpan Rawal)
Unexpected Consensus
Multiple AI platforms can coexist because billions remain offline, countering a ‘winner‑takes‑all’ narrative
Speakers: Alpan Rawal, Illango Patchamuthu
Healthy competition among platforms; relevance determined by local context rather than a single “winner” (Alpan Rawal)
Large offline population leaves room for multiple AI platforms to coexist (Illango Patchamuthu)
While Alpan framed competition as healthy and context-driven, Illango explicitly noted the massive offline population as evidence that many platforms can serve different needs simultaneously-an alignment that was not anticipated given their different institutional perspectives. [386-393][394-395]
POLICY CONTEXT (KNOWLEDGE BASE)
Analyses of offline populations and historical precedence of open ecosystems support the argument that a pluralistic AI landscape is viable and desirable [S55][S57][S64].
Overall Assessment

The panel displayed strong consensus on five core themes: (1) small AI must be lightweight, data‑efficient and locally tailored; (2) collaborative, open‑source and partnership‑driven development is essential; (3) trustworthiness and human‑in‑the‑loop design are non‑negotiable, especially in health; (4) data scarcity for low‑resource languages requires dedicated datasets and community involvement; (5) AI should be leveraged as a development catalyst for poverty reduction and job creation.

High consensus – most speakers reiterated overlapping principles despite diverse institutional backgrounds, indicating a shared vision that small, context‑aware, trustworthy AI, built through partnerships, can advance development goals across sectors.

Differences
Different Viewpoints
Level of acceptable error and need for verifiable, zero‑error AI in critical health applications
Speakers: Zameer Brey, Antoine Tesniere, Illango Patchamuthu
Need for verifiable, zero‑error AI (Zameer Brey)
Acceptable error levels; models need not be 99.999% accurate as long as they improve over current practice (Antoine Tesniere)
AI must be trustworthy and avoid hallucinations; otherwise community trust is lost (Illango Patchamuthu)
Zameer argues that AI tools used by community health workers must be virtually error-free and auditable, turning black-box systems into transparent, verifiable models [160-165][166-174]. Antoine counters that while perfect accuracy is unrealistic, models that are better than existing clinician performance are sufficient, noting that current systems are not 99.999% accurate but still outperform the status quo [308-311]. Illango adds that any failure erodes community trust, emphasizing reliability over absolute zero error [263-267]. The speakers thus disagree on how close to zero error AI must be before deployment.
POLICY CONTEXT (KNOWLEDGE BASE)
WHO roundtables and health-trust discussions stress that life-critical AI must aim for zero risk, contrasting with more permissive error tolerances in other domains [S48][S49][S50][S61].
Approach to developing low‑resource language models: building from scratch vs leveraging multilingual base models and data augmentation
Speakers: Wassim Hamidouche, Audience (implicit expectation of bespoke models)
Selecting appropriate multilingual base models and using monolingual, bilingual, and translated data is essential; training from scratch is infeasible for low‑resource languages (Wassim Hamidouche)
Audience question implies curiosity about the practicality of using large open‑source models for specific under‑resourced languages, hinting at an expectation of building dedicated models
Wassim stresses that training LLMs from scratch for low-resource languages is not possible due to data scarcity, and therefore the focus should be on choosing the right multilingual base model and augmenting it with various data sources [367-375]. The audience’s question about the technical implications of using open-source large language models for under-resourced languages suggests an alternative view that such models could be directly adapted or built anew, revealing a tension between building bespoke models versus adapting existing ones.
POLICY CONTEXT (KNOWLEDGE BASE)
Debates on whether to create bespoke low-resource models or adapt multilingual bases have been framed within the broader low-resource language crisis discourse [S58][S52].
Unexpected Differences
Perception of competition among AI platforms versus the reality of a massive offline population
Speakers: Alpan Rawal, Audience
Healthy competition is beneficial and not a zero‑sum game (Alpan Rawal)
Which platform will win the AI wars? (Audience)
Alpan explicitly frames platform competition as non-zero-sum, emphasizing multiple solutions for diverse contexts [386-393]. The audience’s direct question about a single platform ‘winning’ introduces an unexpected tension, suggesting some stakeholders still view AI development through a winner-takes-all lens, contrary to the panel’s inclusive stance.
POLICY CONTEXT (KNOWLEDGE BASE)
The mismatch between competition narratives and the existence of billions offline is documented in analyses of open versus closed AI ecosystems and connectivity gaps [S55][S57][S64].
Assumption that small AI models are inherently inferior versus the claim that they are equally valuable
Speakers: Illango Patchamuthu, General perception (implicit)
Small AI is not inferior; it can solve problems and fast-track development outcomes (Illango Patchamuthu)
Implicit bias that small AI is ‘second class’ (reflected in the need to explicitly refute it) [120-124]
Illango feels compelled to state that small AI is not second‑class, indicating an unexpected underlying belief among some participants that small, non‑foundation models are less valuable—a perception not directly voiced by other speakers but revealed through his defensive clarification.
POLICY CONTEXT (KNOWLEDGE BASE)
Thought leadership argues that small models can be strategically superior for many tasks, challenging the notion of inherent inferiority [S53][S54][S55].
Overall Assessment

The panel largely converged on the principle that AI must be context‑specific, affordable, and open to serve underserved communities. However, clear disagreements emerged around the required level of reliability for health‑care AI (zero‑error vs acceptable error) and the optimal technical route for low‑resource language models (building from scratch vs adapting multilingual bases). Additional unexpected tensions concerned platform competition narratives and lingering doubts about the value of small AI.

Moderate – while participants share a common vision of socially impactful small AI, the debates on safety thresholds and model development strategies reveal substantive gaps that could affect implementation timelines and policy choices.

Partial Agreements
All speakers concur that AI should be tailored to local contexts and serve underserved communities, but they diverge on the primary pathway: Zameer stresses local relevance over benchmarks, Aisha emphasizes problem‑first partnership and open data, Wassim highlights open‑source domain solutions, Illango focuses on scaling through KPIs and job creation, and Antoine points to data‑efficient algorithms for scarce health data [15-19][29-34][44-48][82-97][120-124][280-286].
Speakers: Alpan Rawal, Zameer Brey, Aisha Walcott‑Bryant, Wassim Hamidouche, Illango Patchamuthu, Antoine Tesniere
Small AI should be context‑specific and deliver social impact (Alpan Rawal)
AI must fit local use‑cases rather than generic benchmarks (Zameer Brey)
Problem‑first, partnership‑driven approach to building solutions (Aisha Walcott‑Bryant)
Open‑source, domain‑specific small AI for biodiversity and wildfire monitoring (Wassim Hamidouche)
Small AI is not inferior; it can fast‑track development outcomes (Illango Patchamuthu)
Data‑efficient, hardware‑integrated AI models are needed for healthcare (Antoine Tesniere)
All three agree on the importance of open resources to empower local ecosystems, yet differ in focus: Wassim discusses funding initiatives (Lingua Africa) for language data collection [218-229], Aisha describes concrete open‑weight models and voice datasets for Africa [61-65][144-145], while Illango promotes a broader AI use‑case repository for multiple sectors [258-262].
Speakers: Wassim Hamidouche, Aisha Walcott‑Bryant, Illango Patchamuthu
Open‑source models and datasets enable local innovation (Wassim Hamidouche)
Open‑weight models and multilingual voice datasets are released for edge deployment (Aisha Walcott‑Bryant)
Publicly accessible AI use‑case repository supports community adoption (Illango Patchamuthu)
All emphasize capacity building, but Illango stresses systemic literacy and private‑sector ecosystems, Aisha focuses on co‑creation with academic partners, and Antoine highlights technical data‑efficiency for health applications [350-353][38-40][280-286].
Speakers: Illango Patchamuthu, Aisha Walcott‑Bryant, Antoine Tesniere
Digital literacy, skilling, and STEM education are essential for AI uptake (Illango Patchamuthu)
Partnership‑led co‑creation with local stakeholders ensures relevance (Aisha Walcott‑Bryant)
Data‑efficient algorithms that work on small devices are needed due to scarce data (Antoine Tesniere)
Takeaways
Key takeaways
Small AI is defined as data‑efficient, edge‑ready, context‑specific models that are cheap to run and tailored to local needs rather than generic benchmarks.
A problem‑first, partnership‑driven approach (e.g., Google Research Africa) is essential for building impactful small AI solutions.
Open‑source, domain‑specific small AI (e.g., Microsoft’s SPARO for biodiversity and Alert California for wildfire detection) can be deployed globally and locally.
Validated small AI models already exist in healthcare (radiology, dermatology, ophthalmology) and can run on low‑cost hardware, often offline.
Small AI is not inferior; it is a strategic tool for poverty reduction, job creation, and achieving development goals.
Low‑resource languages face data scarcity, lack of benchmarks, performance gaps, and safety/alignment challenges; targeted data collection and multilingual base models are needed.
Trustworthiness, reliability, and auditable (glass‑box) AI are critical for community adoption, especially in health and agriculture.
Scaling pilots requires clear KPIs, replication frameworks, and ecosystem building (private‑sector, digital public infrastructure, skilling).
Healthy competition among platforms is encouraged; relevance is determined by local context, not a single “winner”.
Resolutions and action items
World Bank to host and maintain an open‑access AI use‑case repository (≈100 cases) and will allow vetted submissions.
Microsoft’s Lingua Africa initiative will allocate $5.5 million to fund data collection for African languages, building on the Lingua Europe model.
Google Research Africa will continue releasing open‑weight models (e.g., Gemma) and multilingual voice datasets to enable edge deployment.
Participants agreed to prioritize domain‑specific data collection (health, agriculture, education) rather than generic large‑scale corpora.
Commitment to make SPARO and Alert California solutions open‑source for broader adoption.
World Bank emphasized the need for digital literacy, STEM education, and up‑skilling to support small‑AI ecosystems.
Unresolved issues
How to achieve truly zero‑error, verifiable AI for critical health interventions remains an open challenge.
Specific methods for ensuring safety and alignment of LLMs in low‑resource languages are still under development.
Scalable pathways for moving successful pilots to national‑level deployments without loss of reliability are not fully defined.
Technical feasibility of training high‑quality LLMs from scratch for under‑resourced languages versus adapting multilingual models needs further research.
Strategies for handling noisy, fragmented data (e.g., Indian health datasets) were discussed but not resolved.
The long‑term impact of large platform competition on small‑AI adoption was raised without a definitive answer.
Suggested compromises
Combine large multilingual base models with bilingual or monolingual fine‑tuning to boost low‑resource language performance.
Use AI as decision‑support (human‑in‑the‑loop) rather than full automation, especially in healthcare.
Open‑source the core models and tools so multiple platforms can build on them, reducing duplication of effort.
Balance investment between cutting‑edge large‑scale research and edge‑ready small AI solutions to meet both global and local needs.
Thought Provoking Comments
Would anyone, given the traffic in Delhi, design something as big as an aeroplane to get across the city? No – we would design something smaller, faster, cheaper, that gets us from point A to B.
Uses a vivid, everyday analogy to illustrate why AI solutions for low‑resource contexts must be lightweight and context‑aware, challenging the default assumption that bigger models are always better.
Set the tone for the panel’s focus on ‘small AI’. It prompted other speakers (e.g., Aisha and Illango) to frame their work around efficiency and locality, steering the conversation away from generic foundation‑model hype toward concrete, resource‑constrained design.
Speaker: Zameer Brey
If there’s a red button you can press that solves a problem with a single binary decision, we don’t need AI at all. We must be very thoughtful about the type of problem before we bring AI in.
Introduces a ‘problem‑first’ mindset, reminding the panel that AI should be a tool, not a solution in search of a problem, and that simplicity can trump sophistication.
Shifted the discussion from technology showcase to a more disciplined, needs‑driven approach. It led to deeper discussion about co‑creation with communities and the importance of open data sets for voice languages.
Speaker: Aisha Walcott‑Bryant
We need models that have zero error – a verifiable AI that shifts from a black box to a glass box, where the logic chain can be audited and repeated.
Raises the critical issue of reliability and transparency in AI for health, moving beyond performance metrics to ethical accountability, especially in high‑stakes settings.
Prompted a follow‑up from Antoine about the balance between model accuracy and human oversight, and from Illango about the necessity of trust for community adoption. It deepened the conversation around safety and verification.
Speaker: Zameer Brey
Low‑resource languages make up only a tiny fraction of internet data; most have no benchmarks, and safety alignment is done almost exclusively in English. We’re targeting pilot languages and launching initiatives like Lingua Africa to collect data and close the gap.
Provides a concrete, data‑driven diagnosis of why foundation models underperform for many languages, and outlines a strategic, collaborative response, highlighting systemic challenges rather than technical shortcuts.
Steered the panel toward concrete actions (data collection, domain‑specific fine‑tuning) and inspired Illango to reference the World Bank’s funding for similar efforts, reinforcing the theme of coordinated, community‑led solutions.
Speaker: Wassim Hamidouche
Small AI is not inferior or second‑class. When pilots work, we must scale them with the right KPIs, otherwise the sheen wears off and the community loses trust.
Counters a common bias that smaller models are inherently weaker, emphasizing the importance of scalability, measurement, and sustained impact for development outcomes.
Reinforced the panel’s advocacy for small AI, leading to a discussion on replication across regions (e.g., Uttar Pradesh, Maharashtra) and prompting Alpan to ask for practical recommendations for scaling.
Speaker: Illango Patchamuthu
In healthcare we already have validated small‑AI tools for radiology, dermatology, ophthalmology that run on cheap computers and can even work offline.
Shows that small AI is not a future aspiration but a present reality, providing concrete examples that ground the abstract discussion in real‑world deployments.
Validated the panel’s premise, encouraged other speakers to cite existing deployments (e.g., SPARO, Alert California) and shifted the conversation toward operational considerations like offline capability and hardware integration.
Speaker: Antoine Tesniere
Job creation is the North Star. AI must support and enhance jobs rather than automate them away, and that requires a local ecosystem powered by private sector and digital public infrastructure.
Links AI deployment to broader economic development goals, expanding the conversation from technical feasibility to macro‑level policy and ecosystem design.
Introduced a new dimension—economic impact—prompting the audience question about youth capacity building and leading to a brief digression on digital literacy and STEM education.
Speaker: Illango Patchamuthu
Healthy competition among platforms is not a zero‑sum game; the relevance to the end‑user’s context matters more than which platform ‘wins’ the AI wars.
Addresses a provocative audience question with a nuanced perspective, reframing competition as a driver for innovation rather than a battle, and emphasizing user‑centric outcomes.
Defused a potentially polarising debate, brought the focus back to collaboration, and set the stage for concluding remarks that highlighted partnership across Google, Microsoft, and the World Bank.
Speaker: Alpan Rawal
Overall Assessment

The discussion was shaped by a series of pivotal remarks that repeatedly redirected the conversation toward context‑aware, resource‑efficient AI. Early analogies and problem‑first framing forced the panel to move beyond hype about large foundation models, while concrete examples of existing small‑AI deployments (in health, biodiversity, weather forecasting) grounded the dialogue in reality. Repeated challenges around reliability, language equity, and scalability introduced complexity and prompted actionable suggestions—data collection initiatives, domain‑specific fine‑tuning, and ecosystem building. Together, these comments forged a narrative that small AI is not a compromise but a strategic, ethical, and practical pathway for delivering social impact in underserved communities.

Follow-up Questions
How can large foundation models be adapted for devices with patchy internet connectivity and limited data in rural African communities?
Critical to ensure AI benefits reach underserved areas where connectivity and data availability are constraints.
Speaker: Alpan Rawal (question to Aisha Walcott-Bryant)
What strategies should developers of small language or domain‑specific models follow to ensure effectiveness, especially in healthcare and other sectors?
Provides guidance for building reliable, efficient models that can operate in low‑resource environments.
Speaker: Alpan Rawal (question to Wassim Hamidouche)
How can AI systems be made verifiable or “glass‑box” to reduce critical errors, such as in maternal health diagnostics?
Transparency and auditability are essential for safety and trust in high‑stakes applications.
Speaker: Zameer Brey
How can the lack of benchmarks and safety alignment for low‑resource languages in large language models be addressed?
Without proper evaluation metrics and safety measures, models may perform poorly or unsafely for many languages.
Speaker: Wassim Hamidouche
What approaches are needed for domain‑specific, application‑specific data collection to improve model performance for low‑resource languages?
Targeted data is required to close performance gaps and ensure models are useful for specific use‑cases.
Speaker: Wassim Hamidouche
How can successful small‑AI pilots be scaled to larger populations while maintaining reliability and community trust?
Scaling pilots is necessary to achieve broader impact, but must avoid loss of effectiveness and trust.
Speaker: Illango Patchamuthu
What are the key components of building an AI ecosystem in emerging economies, including digital public infrastructure and private‑sector involvement?
An ecosystem is needed to create jobs and enable sustainable AI deployment in developing regions.
Speaker: Illango Patchamuthu
How can AI models be designed to avoid hallucinations and ensure trustworthy outputs for low‑resource users such as farmers?
Preventing erroneous AI advice is vital for adoption and avoiding negative consequences in vulnerable communities.
Speaker: Illango Patchamuthu
What are the technical implications and practicality of using open‑source, open‑weight large language models for low‑resource language domains?
Understanding feasibility helps determine if open models can effectively serve under‑represented languages.
Speaker: Selena (audience) to Wassim Hamidouche
How does AI diagnostic accuracy compare to average clinicians, and how can AI be combined with human expertise for optimal outcomes?
Evaluating AI vs. human performance informs deployment strategies and highlights the value of hybrid decision‑making.
Speaker: Antoine Tesniere (in response to Alpan’s follow‑up)
How can noisy, large‑scale data (e.g., from India) be cleaned and prepared for robust AI analysis?
Data quality directly impacts model reliability; methods for handling noise are needed.
Speaker: Alpan Rawal (to Antoine Tesniere)
What initiatives are needed to improve digital literacy, upskilling, and STEM education to increase AI capacity among youth in agricultural sectors?
Building a skilled workforce is essential for leveraging AI to boost agricultural productivity and economic inclusion.
Speaker: Audience member (Irish Kumar) to Illango Patchamuthu
How can the World Bank’s AI Repository be made fully open access while addressing legal and quality‑control challenges?
Open access promotes collaboration, but legal and curation issues must be resolved for effective use.
Speaker: Illango Patchamuthu
How can offline large language models be developed to answer key health questions in low‑ and middle‑income countries?
Edge‑native AI can provide critical health information where connectivity and compute resources are limited.
Speaker: Antoine Tesniere

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

From KW to GW Scaling the Infrastructure of the Global AI Economy


Session at a glance: Summary, keypoints, and speakers overview

Summary

The panel discussed how India can achieve AI sovereignty while leveraging global technology partners, emphasizing the country’s ambition to become a hub for AI development and questioning whether the current infrastructure can support a sovereign AI ecosystem [1][2][3][4][5][6]. Google outlined its strategy of building large data centers in India, such as the new Vizag facility, to keep data residency and AI services within national borders [8]. To address critical data security, Google offers an indigenous “data box” that provides full Gemini AI capabilities on-premises, giving customers control over both software and hardware [9-14]. Sudeesh explained that the Indian railway ticketing platform IRCTC faces extreme demand spikes, and AI-driven anti-bot systems are deployed to mitigate automated abuse during peak tatkal bookings [16-19]. These systems combine globally sourced AI models with locally developed layers and startup analytics, creating a hybrid solution that leverages Indian expertise [21-25]. Ankush described Bharat GPT’s “AI with purpose and trust” approach, focusing on domain-specific, smaller models trained on partner data rather than large consumer-grade LLMs [28-38]. He emphasized that enterprises such as IRCTC already understand their domain, making collaborative model training more effective than building generic solutions [39-44]. Nitin addressed concerns about digital inclusion, noting that Google has made Gemini-powered JEE mock exams freely available to students, exemplifying efforts to democratize AI access [58-62]. Jigar and Peter outlined the concept of “AI factories,” where speed at scale is achieved by designing GPU-centric pods, using reference designs that support multiple GPU generations and liquid-cooling to maximize utilization [101-108][124-130]. 
They warned that traditional data-center designs are being replaced by megawatt-per-rack architectures, with future racks projected to reach 250-500 kW and eventually 1 MW, demanding new power and cooling strategies [162-170][586-590]. Srikanth and Sanjay stressed the need for future-proof designs that consider row-level density and modular pods, allowing infrastructure to accommodate two to three IT refresh cycles within a single power-and-cooling plant [665-672][677-684]. To support rapid deployment and skilled operation, Vertiv is investing in prefabricated systems, training programs with Indian institutes, and NVIDIA-ready certification to accelerate AI-centric data-center roll-outs [751-758][780-788]. The discussion concluded that coordinated efforts in sovereign AI models, inclusive services, and scalable, energy-efficient data-center architectures are essential for India to become a global AI hub [1][58][101][665].


Keypoints

Major discussion points


AI sovereignty and the need for India-centric infrastructure – Google is building large data-centers in Vizag and offering an indigenous “Data Box” that lets customers run Gemini AI services on-premises, keeping data residency and hardware control inside India [8-14]. Nitin also highlighted Google’s effort to make AI tools accessible to underserved users, citing free Gemini-powered JEE-Main mock exams as an example of inclusive outreach [58-62].


Deploying AI for Indian public services and the Bharat GPT vision – IRCTC’s ticketing platform uses advanced AI (including indigenous layers and startup collaborations) to combat automated misuse during peak booking windows [16-25]. Ankush explained that Bharat GPT follows a “purpose-and-trust” model, building domain-specific, smaller language models together with enterprise partners rather than a single massive consumer-facing LLM [28-38].


Scaling AI compute through GPU-centric “AI factories” – Peter and Jigar repeatedly stressed “speed at scale,” advocating a design-first approach that starts from the GPU chip and builds standardized pods/reference designs that can be replicated across gigawatt-scale data centres [106-121][245-255]. They described the rapid evolution of rack power density (from 10 kW to >200 kW per rack) and the need for liquid-cooling, modular pods, and “AI-factory” mindsets to meet exploding demand [158-163][170-176][190-199].


Energy efficiency, PUE optimisation and sustainable cooling – Sanjay and Srikanth discussed the challenges of maintaining low PUE in India’s varied climate, noting that simply raising ambient temperature can give a misleading PUE improvement [710-718]. They advocated adaptive cooling strategies (free-cooling in winter, DX chillers in summer) and the integration of chip-to-data-center telemetry to optimise power use automatically [724-732][739-747].


Building a skilled ecosystem to support rapid AI-infrastructure growth – Both Google/Vertiv and NVIDIA highlighted training programmes (e.g., 8-12 week courses with IIT-Chennai) and partnerships with colleges to develop operations, design, and engineering talent needed for AI-factory deployments [779-788][811-818].


Overall purpose / goal of the discussion


The panel aimed to map out how India can achieve AI sovereignty by developing home-grown models, data-center infrastructure, and talent pipelines, while ensuring inclusivity, sustainability, and rapid scalability. Speakers shared concrete initiatives (indigenous hardware, AI-factory designs, energy-efficient cooling, and skill-up programs) to position India as a global AI hub that can both consume and produce advanced AI services.


Overall tone and its evolution


– The conversation opened with an optimistic, visionary tone, emphasizing India’s aspirational role in AI [1-3][28-31].


– It then shifted to a technical, solution-focused tone, detailing specific AI deployments (IRCTC, Bharat GPT) and infrastructure designs (GPU pods, liquid cooling) [16-25][106-121].


– Mid-session the tone became pragmatic and collaborative, addressing real-world challenges such as energy efficiency, PUE metrics, and the need for future-proof designs [710-718][739-747].


– Towards the end, the tone turned forward-looking and supportive, highlighting skill-development programmes and partnership opportunities to sustain the rapid build-out [779-788][811-818].


Overall, the discussion maintained a constructive and solution-oriented atmosphere, moving from high-level ambition to detailed implementation strategies and ending with a call to collective action.


Speakers

Srikanth Cherukuri – (role/title not explicitly stated in the transcript; associated with Vertiv and AI-factory discussions) [S1]


Audience – (generic audience members)


Ankush Sabharwal – (speaker, likely representing Bharat GPT)


Peter Panfil – (speaker; senior executive participating in the fireside chat)


Jigar Halani – Former Solution & Engineering Manager for India at NVIDIA [S11]


Moderator – (session moderator)


Srirang Deshpande – Strategy lead for India, managing Vertiv strategy and market development [S16]


Sanjay Kumar Sainani – Senior Vice President, Technical Business Development at Vertiv [S18]


Sudeesh VC Nambiar – (speaker representing IRCTC’s AI/ML initiatives)


Akanksha Swarup – Moderator/Host conducting interviews and panel discussions [S24]


Nitin Gupta – Google representative speaking on Google’s AI and data-center strategy [S28]


Additional speakers:


(None identified beyond the listed names)


Full session report: Comprehensive analysis and detailed insights

The session opened with Srirang Deshpande introducing the Vertiv-NVIDIA “AI-Factories” fireside chat, framing the discussion around a shift from traditional data-centres to purpose-built, gigawatt-scale AI factories that centre on GPU-centric design and the Vertiv-NVIDIA partnership [90-100].


The opening statement set an ambitious tone: India should achieve “complete sovereignty in terms of AI and not just the platform” and, leveraging its readiness to adopt new technology for citizens and businesses, can become “the hub of AI development for the world” within months rather than years [1-3][4-6]. This hopeful framing guided the subsequent technical dialogue.


Google’s sovereign-innovation approach


Nitin Gupta (Google) argued that sovereignty and innovation are not mutually exclusive but must progress together [15]. He highlighted Google’s expanding Indian data-centre footprint, including a new large-scale facility in Vizag, to keep data residency and AI workloads inside national borders [8]. For highly sensitive data, Google offers an on-premises “Data Box” that runs the full Gemini AI stack locally, giving customers control over both software and the underlying hardware [9-14]. This product exemplifies the sovereign-innovation blend.


Inclusive AI initiatives


Nitin noted that Google is using Gemini to provide free JEE-Main mock examinations, making advanced AI tools available to students regardless of socioeconomic status [58-62]. Jigar Halani reinforced the inclusive mission with the slogan “AI for all” [115-120].


Public-service AI use case


Sudeesh VC Nambiar described the severe demand-supply mismatch on the IRCTC ticketing platform during peak “tatkal” windows, where automated bots attempt to secure bookings [16-18]. He confirmed that IRCTC employs “very advanced AI solutions” – among the best globally – to detect and mitigate these attacks [19-20]. A distinct “indigenous” layer, built with Indian startups that monitor social media and perform data analysis, augments the global AI stack [21-25]. This hybrid approach shows how domestic expertise can be layered onto world-class technology.


Bharat GPT – purpose-driven sovereign models


Ankush Sabharwal outlined the vision for Bharat GPT, positioning it as “AI with purpose and trust” [28]. He stressed a problem-first methodology: define the use-case, select an appropriately sized model, then source data [30-34]. Bharat GPT is a family of domain-specific models trained on partner data, enabling enterprises such as IRCTC to leverage AI without building generic solutions from scratch [35-44][45-46]. This strategy aligns with the sovereign aim of keeping data and model training within India while avoiding the “existential crisis” of valuation-driven AI projects [44-46].


AI-factory architecture and “speed at scale”


Peter Panfil and Jigar Halani argued that infrastructure design must start from the GPU chip and expand outward, creating modular GPU pods that can be replicated across gigawatt-scale data-centres [106-121][124-130]. They highlighted the rapid increase in rack power density, from 10 kW to over 200 kW per rack, with future designs targeting 250-500 kW and even 1 MW per rack [158-163][170-176][190-199]. Liquid cooling, pod-level reference designs and the ability to mix GPU generations within a pod were presented as essential to maximise utilisation and accelerate deployment [255-261][259-267]. The panel repeatedly stressed the need for rapid, large-scale deployment of GPU-centric AI factories.


Jigar added that the recent Digital Personal Data Protection (DPDP) Act will force more compute to stay inside India, pushing the country past the 10 GW mark [150-155].


Infrastructure standards & certifications


Sanjay Kumar Sainani warned that PUE can be superficially improved by raising inlet temperature, which reduces cooling load but may increase overall IT power consumption, making the metric potentially misleading [710-718]. He advocated climate-aware cooling strategies, leveraging free-cooling in winter and adaptive DX chillers in summer, to optimise annual PUE across India’s diverse temperature range [724-732]. Srikanth Cherukuri added that true efficiency will come from integrating chip-level telemetry with data-centre controls, enabling automated power optimisation without manual intervention [739-747].
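The PUE-gaming point can be made concrete with a small arithmetic sketch (all power figures below are hypothetical, chosen only to illustrate the effect): raising inlet temperature cuts cooling power, but server fans spin faster and push extra watts into the IT term, so the ratio improves faster than actual energy use does.

```python
# Illustrative sketch (all numbers hypothetical): how raising inlet
# temperature can flatter PUE. PUE = total facility power / IT power.

def pue(it_kw: float, cooling_kw: float, other_kw: float = 0.0) -> float:
    """Power Usage Effectiveness: total facility power over IT power."""
    return (it_kw + cooling_kw + other_kw) / it_kw

# Baseline: 1000 kW of IT load, 300 kW of cooling.
baseline = pue(it_kw=1000, cooling_kw=300)   # 1.30

# Warmer inlet: cooling load falls, but IT power rises (fan speed).
# The denominator grows, so PUE improves more than total energy does.
warmer = pue(it_kw=1040, cooling_kw=220)     # ~1.21

total_baseline = 1000 + 300                  # 1300 kW
total_warmer = 1040 + 220                    # 1260 kW

print(f"baseline PUE={baseline:.2f}, total={total_baseline} kW")
print(f"warmer   PUE={warmer:.2f}, total={total_warmer} kW")
```

Here PUE drops from 1.30 to about 1.21, a roughly 7% "improvement", while total facility power falls only about 3%, illustrating why the panel favoured telemetry-driven optimisation over PUE alone.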


Srikanth also described NVIDIA-ready (DGX-ready) data-centre certifications, which specify water-temperature, port-size and telemetry requirements and outline how partners are being enabled [720-735]. He referenced NVIDIA’s DSX reference design and the use of digital-twin simulation to validate pod-level layouts before construction, tools that help achieve “future-proof” designs [640-655].


Future-proofing the AI-factory architecture


Srikanth urged designers to move beyond rack-density metrics and adopt a “row-level” or “data-hall-level” bounding-box approach, ensuring that a single pod design can accommodate multiple future GPU generations without costly retrofits [633-658][665-669]. Sanjay reinforced this by noting the mismatch between the rapid three-to-five-year refresh cycle of compute hardware and the ten-to-fifteen-year lifespan of power and cooling infrastructure [672-677]. He proposed modular pods (e.g., 2.4 MW or 6 MW) that can be re-configured for newer GPUs while keeping the surrounding plant unchanged [693-698].
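The modular-pod idea can be sketched numerically (the per-rack densities below are illustrative assumptions, loosely based on the density trajectory discussed in the session): a fixed power-and-cooling envelope, say a 6 MW pod, simply hosts fewer, denser racks with each GPU refresh, leaving the surrounding plant untouched.

```python
# Sketch with assumed rack densities: sizing one fixed 6 MW pod across
# successive GPU generations. The power-and-cooling plant stays constant;
# only the number of racks per pod shrinks as per-rack density climbs.
POD_MW = 6.0

rack_density_kw = {"gen-1": 130, "gen-2": 250, "gen-3": 500, "gen-4": 1000}

for gen, kw in rack_density_kw.items():
    racks = int(POD_MW * 1000 // kw)  # whole racks that fit the envelope
    print(f"{gen}: {kw} kW/rack -> {racks} racks per 6 MW pod")
```

Under these assumptions the same pod goes from 46 racks to 6 across four generations, which is the "bounding-box" argument: plan the row or hall envelope, not a single rack layout.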


Sanjay also highlighted the scale of current data-centre projects, citing a 30 MW data-centre with 5 MW per hall, and noted that a megawatt-per-rack design is “not too far” off [560-570][580-590]. He pointed out the cost mismatch, noting that a $100 M data-centre may house $2 B of GPUs, stressing the need for “speed to token” and fast ROI [590-605].
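A back-of-envelope calculation shows why "speed to token" dominates: using the panel's figures ($100 M facility, $2 B of GPUs) and an assumed straight-line five-year GPU depreciation (the horizon is our assumption, not the speaker's), every idle day burns a substantial fraction of the facility's own cost.

```python
# Back-of-envelope sketch of the cost mismatch raised in the session.
# Facility and GPU costs come from the panel; the five-year straight-line
# depreciation horizon is an illustrative assumption.
facility_cost = 100e6        # $100 M data-centre shell and plant
gpu_cost = 2e9               # $2 B of GPUs housed inside it
depreciation_years = 5

# Straight-line depreciation on the GPUs alone, per idle day:
idle_cost_per_day = gpu_cost / (depreciation_years * 365)
print(f"GPU depreciation per idle day: ${idle_cost_per_day:,.0f}")
print(f"per month of deployment delay: ${idle_cost_per_day * 30:,.0f}")
```

Under these assumptions an idle day costs roughly $1.1 M in GPU depreciation, about $33 M per month of delay, which is why the panel framed deployment speed, not construction cost, as the binding constraint.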


Talent development


Vertiv and NVIDIA highlighted skill-development initiatives, including an eight-to-twelve-week programme with IIT-Chennai to train engineers in data-centre operations and maintenance [779-788]. Prefabricated pod systems and reference designs are intended to lower the expertise barrier for rapid AI-factory roll-out [811-818], creating a pipeline of DC-ops, design, and engineering professionals to support exponential AI-infrastructure growth.


Audience Q&A


During the Q&A, an audience member asked whether AI could develop its own subconsciousness; Peter responded with an analogy to breathing and blinking, emphasizing that AI is an assistive tool rather than a sentient entity [720-735].


Key disagreements


Three notable tensions emerged:


1. Speed-at-scale vs. future-proof modularity – Peter pushed for rapid deployment of GPU pods, while Srikanth cautioned that designs must be future-proof using bounding-box and modular pod strategies.


2. PUE as a metric vs. telemetry-driven optimisation – Sanjay suggested PUE can be gamed by raising ambient temperature, whereas Srikanth argued true efficiency requires telemetry-driven optimisation.


3. Hybrid vs. fully indigenous AI stacks – Sudeesh described a hybrid stack that combines global models with an indigenous layer, while Ankush advocated for wholly domain-specific, partner-trained models (Bharat GPT).


These disagreements reflect differing priorities: immediacy versus long-term adaptability, simplistic metrics versus real-world efficiency, and the balance between leveraging global technology and building fully indigenous solutions.


Key take-aways


(i) India aims to become a global AI hub through sovereign, purpose-driven models such as Bharat GPT.


(ii) Google’s data-centres and on-premises Data Box demonstrate how sovereignty can coexist with cutting-edge innovation.


(iii) Inclusive AI initiatives, exemplified by free Gemini-powered JEE mock exams and the “AI for all” mantra, are central to the agenda.


(iv) GPU-centric pods and reference designs constitute the strategic pathway to achieving rapid, large-scale AI deployment.


(v) Design methodology must start from the chip and adopt modular, row-level density planning to remain future-proof.


(vi) Energy efficiency requires climate-aware cooling and integrated telemetry rather than sole reliance on PUE.


(vii) A robust talent pipeline, supported by academic-industry programmes and prefabricated systems, is essential.


(viii) Collaboration between global vendors and Indian startups/government is the linchpin for rapid, sovereign AI deployment.


The discussion illustrated how India’s AI ambition intertwines sovereign data policies, inclusive access, cutting-edge infrastructure, and a growing talent ecosystem, while also surfacing critical trade-offs that will shape the nation’s AI-factory roadmap.


Session transcriptComplete transcript of the session
Ankush Sabharwal

having the complete sovereignty in terms of AI and not just the platform. I think India being so aspirational and ready to adopt new technology for the welfare of themselves and the welfare of the businesses, I think we would be the hub of AI development for the world. You will start seeing that happening in a few months, not years.

Akanksha Swarup

It’s actually heartwarming to hear that from someone who’s actually fronting India’s AI story at the moment. Nitin, as someone who is at Google, how do you see this for India? Do you think India has the right infrastructure, the right resources to build its own sovereign AI at the moment?

Nitin Gupta

First of all, thank you, Corvo team, Ankush, for inviting me here. And, you know, I'll be very happy to share my views, from a Google perspective and from my personal perspective. I feel, yes, sovereignty is very important, but at the same time it is not a question of sovereignty or innovation. It is sovereignty and innovation. They have to run together; it can't be one choice versus the other. And with that, while Google has its entire data centers in India, you have heard that three months back we announced we are going to be building big data centers in Vizag. So we are ensuring that any innovation and any data-residency requirements are kept within the boundaries of India. Those data centers are definitely empowering a lot of AI, but they are for everyone, for all types of personas, whether government, enterprises, startups, students, colleges, or universities.

We understand that, you know, sometime there are going to be critical data which needs to stay even more secure. And for that, Google has created a completely indigenous data box which completely stays inside the customer premise and is fully powered by AI. So imagine that you have the full potential to run what you’re running in a Google data center, but inside your own premise. And that has full Google Gemini AI services. And that’s the definition we have for sovereignty, where you are. Also. Also controlling the hardware, not only what’s running on that hardware.

Akanksha Swarup

All right. So IRCTC is one of the most heavily used websites in India. My research says close to 50 million users visit every month on average; correct me if I am wrong. But how are you incorporating or leveraging AI, especially in peak periods? Look at, say, tatkal booking time, when the traffic dramatically peaks.

Sudeesh VC Nambiar

Yeah, so we have a tremendous mismatch of demand and supply as far as railway ticketing is concerned. We have the peaks: 8 o'clock in the morning, when tickets open for travel 60 days hence, then 10 o'clock for the AC tatkal, and 11 o'clock for the sleeper tatkal. So there is huge demand and a demand-supply mismatch as of today, and people try to misuse automated tools for accessing it. This is a constant, I would say, cat-and-mouse game we are playing. And we are using AI also, a very advanced AI solution, maybe said to be the best solution in the world.

Akanksha Swarup

Any indigenous models are used?

Sudeesh VC Nambiar

Indigenous, of course; we have a layer of indigenous. There is a startup also who do the data analysis; they constantly monitor social media and see what is happening, what the strategy is. So it is basically a collaboration between Indian startups and the technology strength of a global company. We are using an AI- and ML-based model. The model constantly learns and tries to mitigate those automated…

Akanksha Swarup

Okay. Ankush, what differentiates Bharat GPT in terms of its vision when you compare it to say global models like ChatGPT or even Gemini and especially how is it curated for Indian citizens and enterprises? What is that differentiating factor?

Ankush Sabharwal

Yeah, see, our tagline is AI with purpose and trust, right? I had read the book Seven Habits of Highly Effective People very early on in my career: begin with the end in mind. We always think: what's the use case? What's the problem you're going to solve? And then see what kind of model you need: tiny, small, medium, large. And then you see, okay, where the data would come from. See, the Bharat GPT family of models, right, it's not just one large language model, and it's not ready for consumers yet. So we work with our partners, get their data, and train the model for their users, because we believe it is easier for us to solve the problems of enterprises. Enterprises like IRCTC already know their domain; we cannot learn that, right? If I say, hey, I can create travel AI solutions, that's very, very difficult. They know travel, they know railways. So it is, I think, much better to work with them and learn from them. They are already solving a lot of problems, and they also know the real problem. They don't have an existential crisis, right? They are not just in the game of valuation. They are solving real-world problems.

Akanksha Swarup

That’s why we have him on stage with you today. He’ll share those precious tips. My last question, since we are running short of time, Nitin, I think this is also not to highlight the achievements, it’s also to perhaps highlight the concerns. And right now, one concern which the Indian Prime Minister has also highlighted is that of inclusivity. How is Google trying to bridge that divide as far as you can see? As far as digital divide is concerned, how do you make Google more accessible for the underprivileged, for those in rural areas? Nathan, before you answer, it should be shortened. I have my colleagues from other team, Vertex. I would like to apologize to them for this delay, but allow us just to wind this up.

Nitin Gupta

Yeah, I'll take a minute. Okay. So, great question. Google has always been at the forefront of inclusivity, whether you call it Gmail or search; it is empowering billions of users every day. Just to summarize and give a recent example: very recently, Sundar Pichai announced that JEE Main mock exams are available on Gemini free of cost for any student to try. That's the inclusivity we want. We want to make sure a student at home can keep trying the mock tests for free.

Akanksha Swarup

All right. Amazing. Amazing. Which is inclusive. Inclusive and democratic. Many thanks to you three gentlemen. It was a pleasure having you all over here. Thank you so much.

Srirang Deshpande

Good morning to all of you. As Rakesh has already introduced, our two companies are planning a lot of things together. As I said, I am part of strategy for India, managing Vertiv strategy and market development. The important thing we are bringing today for you is this: as we see a lot of gigawatt infrastructures getting announced, that poses a lot of challenges for us. Until now, data centers were built with an outside-in approach; now the time has come where data centers are built with an inside-out approach. First the GPU, or the workloads, get decided, and then the whole infrastructure gamut comes into the picture.

To discuss this, I have two friends, two industry veterans from Vertiv and NVIDIA, for the fireside chat. We have Jigar; I think by this time Jigar is already known to the industry because of the immense contribution he has made to the AI ecosystem in India, working with all the ecosystems, all the layers: infrastructure, applications, use cases, and so on. He manages solutions and engineering for India at NVIDIA. And I have another friend, Peter Panfil. Peter is an encyclopedia at Vertiv. He is based in the US, is our senior vice president for technical business development, and is involved in many designs of large-scale data centers and gigawatt designs.

I would request Jigar and Peter, please come on the stage. Let’s have a round of applause for Jigar and Peter. So, Jigar and Peter, it’s all yours now. Go to

Peter Panfil

Thank you. Thank you. Thank you. So, my friend, we got our introductions. Let’s see. Are we on? Are we on? You guys can all hear us? Yeah? Can you hear us? Good? We’re good? Okay. All right. So, my friend, great to see you. Great to see you. So, I got to start with how we would normally end. I believe that any discussion like this should start. with us telling you what we think you’re going to get out of this. So what key message or messages do you think this audience needs to hear before we get started? And then we can spin off of that and go into the kinds of details we really need to.

So where do you think, what do you think this audience is the most interested in?

Jigar Halani

Okay. Am I audible? Okay, great. So I think, as the topic suggests, my view of what you will get to hear from us in the next 30-35 minutes is why AI is becoming such a priority for every country; what the building blocks of these AI factories are, and the sovereignty aspect of it; what the two of us are trying to contribute in this journey; and how we scale and make it work for everyone, to make AI for all, as India wants to call it. That is what I feel we should be discussing here, because that will be most relevant for the conference, for the audience, and for what we can contribute back to humanity as well.

What are your thoughts?

Peter Panfil

I agree with you completely. So the three things I feel are most relevant are, first, speed at scale. It's not just the speed of the compute; it's the speed of deployment. The faster we can get the GPU structures in place, the faster we can benefit from them. And scale: you and I talked about the scale, and you're going to quote some numbers, I think, that will blow the tops of their heads off. But speed at scale. The second thing is, we've got to stop thinking the way we thought in the cloud world. In the cloud world, we were thinking a high-density rack was 10 kilowatts, and we would start at the source, at the grid, and work our way to the chip.

What I'm here to advocate is: start at the GPU. Start at the chip. Define the most economical, most efficient, fastest configuration from a compute perspective, figure out how to deploy that as a pod, then replicate that pod and achieve the speed. And the third is, don't be scared. We've got it covered. We've got you covered. We know how to do this. I've got to tell you, and I told you this in the hallway: Vertiv made a big bet with NVIDIA. I actually reassigned myself. I was leading what we call a GSA, a Global Strategic Account pursuit team, and I said, if we're going to do this right, we've got to immerse ourselves in GPUs, understand how to deploy them, understand what drives our customers, and how we're going to make them successful.

And I think that that has worked to both of our benefits.

Jigar Halani

Absolutely. And for humanity as well, right? We are fundamentally changing everything that has been pursued so far, and you brought up the cloud part of it. I was just thinking, putting my hand on my beard, that only a few hairs were white back then. It's not that long ago that I saw the retrieval clouds, where we store the information and just retrieve it, processing the application to get the information out. Compare that to the world of now, generating new data every single time and processing it right there to give you, every time, a new input and a new output. Because the prompts are new and the outputs are new, the world sees something different being processed and delivered to the customers every time.

So such an amazing and a fastest -paced change of how these clouds have emerged and what are your thoughts in terms of what this space is all about, how our customers are keeping up with this, and what are we contributing in that journey, if you can throw some light towards that.

Peter Panfil

Sure, that’s great. So first of all, it comes with understanding and having a transparent provider that says, here is what I’m producing today, here’s what I think I’m going to be producing a year from now, here’s what I think I’m going to be producing two years from now. Now, our goal is to make every deployment that you take on an AI factory. We all know what an AI factory is, right? An AI factory, think of it as a car factory, washing machine factory. Just, it’s a data factory, okay? And so our goal, I will just tell you, our goal along with your team is start as an AI factory. Yes, you might want to have mixed mode CPU and GPU workloads in your facility, but you’ve got to pilot the GPU configurations, at least pilot them.

When I say I reassign myself, I was working primarily with cloud providers, mostly hyperscalers, and they had a prescriptive formula. You know, they had their hacks, their number of racks. They would deploy them. We all knew which ones they were. Now, we can take a GPU pod, design it once, build it many, and apply it to the GPU that we need from that generation. It’s a complete change in the way we think about how to deploy the IT.

Jigar Halani

That’s so true. By the way, did you notice, every time we are talking about GPU, the screen is blinking. There you go. I think that’s a good message.

Peter Panfil

I think it’s because I owe somebody a nickel every time I use the letters GPU. It must be trademarked somewhere, all right? So I owe them a nickel. Okay, all right.

Jigar Halani

No, so I think, because it's generating something new every single time, the compute demand is just exploding, right? And thereby the possibility of what we could do, more and new, is becoming bigger and better every time. With that, the journey of the data center is also evolving much faster than what we had thought. You mentioned it: 10 to 15 kilowatts; not that far back, four or five years ago, we were talking about this. Then we transitioned to 40 kilowatts, and now to 120, 130 kilowatts. And as we announced in January, we are now talking about 210, 230, 240 kilowatts per rack, which means a hall this size could probably run a great portion of India, with so many services that were probably never imagined before.

Peter Panfil

So I think it’s interesting that you comment about that, because one of the things that we’ve heard back from our customers who first do a lot of research, how do they take their critical infrastructure from CPU -based to GPU -based? And I think that’s something that we’re seeing a lot of growth in. First, there’s that transition to liquid. Don’t worry about it. We’ve been doing liquid cooling for 40 years. We know exactly how to manage it. Then there’s the density of the compute itself. I’m amazed at how quickly and easily our customers understood the move from a 10 -kilowatt rack to a 130 -kilowatt rack. I credit you all. So if you’ve already made that transition, I credit you.

You're doing a spectacular job. Our job is to prepare you to have that go up by an order of magnitude, not right away, but in future generations of compute. So what we try to do is prepare you for future-ready thinking. I know you don't want to think three years down the road, but you can do it. Let's at least think three years down the road, based on the rate of what you're seeing and what we're seeing, both here in India and around the world.

Jigar Halani

My perspective is, I think all the reports are talking about a number like 5 or 6 gigawatts over the next three years. My personal understanding, from the lens I look at it through, both from NVIDIA and from what industry and government are trying to do, is that we will cross 10 to 12 gigawatts in the next three years, and that's not far off. And I'm not going by any of the announcements made in the last three years; I know where the reality stands in terms of inferencing and training workloads. I repeat: I started with inferencing. I did not start with training.

Peter Panfil

Yep. I noticed that.

Jigar Halani

The reason is, we are a consumer country. Make a note of that. Right? I started with inferencing, not training, because we are a consumer country. We have always been in the mode of first consume, then build. And thereby we are the largest ChatGPT consumer base in the world. We are the largest for Perplexity. We are among the largest even for Gemini: I think we were number two about a month or so back, but my view is that with the Jio announcement we should have crossed to the number-one position by now; the delta was pretty small. What does that mean? It means that if all the compute capacity that is currently not being processed in the country comes back to India because of the DPDP law that was enforced last month or so, then this number will be even higher.

And we are very democratic that way. We are not closing the doors for any businesses; we have never done that, and I'm sure, knowing the country, we will never do that with the leadership we have from Prime Minister Modi. That means we will still allow this processing to happen outside of India, but at the same time, for regulatory reasons, some of the verticals, say fintech, healthcare, defence, and so on, or some of the government bodies, will process locally.

Even if they start to do inferencing locally, this number will easily touch 10-plus. And I've not included industry at scale yet, which is what Anthropic and J&J, Gemini, and others are trying to capture from that market perspective. So my understanding is it should cross 10, while all the reports are talking about 5. But India will…

Peter Panfil

So it's amazing. We didn't compare these notes before we got on this stage. What was the number you gave me just 20 minutes ago? 10, right? So let's think about that for a second. We're at 1.5 now. We're going to get to 10. To get to 10 in that three- to five-year horizon, we're going to have to scale pretty far, pretty fast. We're going to have to draw on our shared expertise, and by drawing on our shared expertise, we're going to help be a trusted advisor to you. Who's your trusted advisor? I've got my trusted advisors: somebody I can always go to, and they're always going to give me the right answer. It might not be the answer I like, but they give me the right answer.

So what we want to do is make sure that you know, we understand how to scale. You understand how to scale. We understand how to scale. I think if we’re talking about doubling, getting to 10 in three years says we double every year starting this year. My one and a half goes to three, my three goes to six, my six goes to 12. We’re doubling every year. Now, if I was to take you outside of India now, North America market, when the North America market first started becoming aware of GPUs, there was a wide variety of acceptance. There were the folks that said, yep, I want to be there. And I want to be there, and I want to do a pilot with you.

I want to design a pilot that I can replicate into all of my either hyperscale or multi -tenant data center environments. The other thing they wanted to do was no data center left behind. They didn’t want to leave behind any capacity because they knew capacity was going to be the currency. They knew power and land and GPUs, that’s where they needed to be. The third thing was their project scales moved. We used to live in the cloud world at project scales of 18 months. We live now in the GPU world of project scales of between four and six months. So a dramatic compression of schedules, a dramatic increase in capacity, what does that mean? We’ve got to build capacity at a faster rate.

Than we ever have before. And I know we're up to it. We've added the capacity we need to support that kind of demand.
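Peter's back-of-the-envelope doubling schedule (1.5 GW today, 10 GW within about three years) can be checked in a few lines:

```python
# Doubling installed data-center capacity every year, starting from 1.5 GW,
# as Peter sketches it: 1.5 -> 3 -> 6 -> 12.
capacity_gw = 1.5
for year in range(1, 4):
    capacity_gw *= 2
    print(f"year {year}: {capacity_gw:g} GW")
# Three doublings land at 12 GW, comfortably past the 10 GW target.
```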

Jigar Halani

Peter, that actually brings up a very good question. When we talk about this at scale, you said that in the U.S. you have already started to build at scale because you see it as a great opportunity, and India is yet to build, right? In all fairness, I think some of the largest clusters here are in the tens of thousands of GPUs, while in the U.S. we're talking about millions of GPUs in a single data center. Would you like to throw some light on some of the learnings, a quick bite for the audience on what India could do to have these things done in, let's say, a three-to-eight-month time frame? Not just the project planning, not just the understanding of BOQs, not just the understanding of who is going to deploy my project and how the project looks, with the 3D version of it, but how do I get the entire project done in, let's say, six to eight months, starting from the land I have, all the way to the GPUs up and running and the production environment happening?

Peter Panfil

shifted to 250. Now, along the way, we said, okay, let’s take these 10s and put them together and make a 50, and let’s take the 50s and put them together and make 100, and let’s put the 100s together and make a 220. Shoot me now. What we found is, let’s pick an optimum building block that supports the number of GPUs that is, I’ll call it, reasonable at scale, don’t take a design that has never been created before. Let’s take a design that we have a good basis on. For example, the pod. You just published some standards on pods. Reference designs.

Jigar Halani

Reference designs, okay.

Peter Panfil

We worked closely with your team on reference designs. We came up with the magic numbers: reference designs that minimize underutilization, maximize utilization, and make deployments the most efficient. I've been an advocate for efficiency within the data center space my entire career. If you save a watt, that's a watt you don't have to generate at the source, you don't have to distribute, and you don't have to reject. So the fewer watts you lose and the more watts you can put into the compute, the more tokens I can generate. Our goal, in working with the GPU, I'll call it the AI-factory mentality, is: how much power can we deliver from the source to the GPU, as much as we possibly can, and how can we deploy that physically as quickly as we possibly can? It boils down to: take the reference designs. We're not saying all the designs are going to be the same; we know that's not going to be the case. But I could show you a pod design, part of the reference design, that supports three generations of GPUs, this year, next year, and the year after that, just by changing the way those pods are populated on the compute side. In fact, we've got one customer who wants to be able to seamlessly mix GPU platforms within a pod. He says: compute lineup one is one generation of GPUs, pod two is another generation, pod three a third generation. They want to move seamlessly between GPU generations because at some point they're going to optimize particular functions, outputs, and services against a GPU platform.

Jigar Halani

You just brought up a perfect point. A few things on why it's important to follow the reference design. Just to bring everybody onto the same page: the CPU world was very different. A node down meant a few hundred dollars of downtime. A GPU node down translates to a few thousand dollars of downtime. And the fortunate, or unfortunate, part is that if your training workload is running and a node fails, you restart from the last checkpoint you have taken. Assume that checkpoint was done eight hours before: eight hours multiplied by, say, 4,000 GPUs is the compute time you have lost in the cloud. Unfortunately, that translates to…

Peter Panfil

Real money.

Jigar Halani

Hundreds of thousands of dollars. Real money, right? So while you, as a cloud provider, might be thinking, and I'm talking about both sides, hey, let me cut a few corners, do something here, something there, and the cluster is still up and running. But you know what? That's going to cost a lot. And the customer's SLAs with you may not allow it, because these are not the standard SLAs we're talking about, not what the world has seen in the typical cloud world. These are a different type of SLA that the customer signs with you. And if it's an inferencing workload, and if it's critical for the enterprises, we're talking about downtimes which, by all laws of cloud, are not acceptable.
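Jigar's checkpoint-loss arithmetic can be sketched as follows; the per-GPU-hour price is an assumed illustrative figure, not one quoted in the session:

```python
# Lose a node 8 hours after the last checkpoint on a 4,000-GPU training job,
# and the whole cluster replays those 8 hours of work.
gpus = 4_000
hours_since_checkpoint = 8
price_per_gpu_hour = 3.00  # USD, assumed cloud rate for a modern GPU (illustrative)

lost_gpu_hours = gpus * hours_since_checkpoint      # 32,000 GPU-hours replayed
lost_dollars = lost_gpu_hours * price_per_gpu_hour  # roughly $96,000 per incident
print(f"{lost_gpu_hours:,} GPU-hours ≈ ${lost_dollars:,.0f}")
```

Even at this assumed rate, a single failure lands in the "hundreds of thousands of dollars" order of magnitude Jigar cites, which is why checkpoint frequency and infrastructure reliability are designed together.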

But a key question could come up: hey, do I need these large-scale clusters only for training? Is that the only thing I do? The answer is no. I'm sure most of you might be following what Jensen talks about: the three scaling laws. We'll not go into the detail of it; I think Jensen has mentioned it at least 100 times in his keynotes. But in simple terms, let me take one or two good examples from the country itself, and from what we announced in the last three days. Taking a very simple example: as everybody knows, we are 1.4 billion people, right?

Half of the citizen base is associated with farming in the country, and thereby one-third of the families of the country are completely aligned to farming. They contribute just 15% of our GDP, but half of the population is associated with farming. Now, the Government of India has launched two simple applications. One is to check the subsidized food that the government gives to this half of the citizens today, subsidized to the level of a cent or two; in Indian rupees, it is one to five rupees at which the government gives it.

A feedback call goes to all these citizens, asking: how was the quality, did you get the right quantity, was there any kind of fraud, and so on. In the last one month, the government has been able to scale to about 50,000 calls a day to citizens through a bot that talks in the local language, and has been able to catch fraud worth, per day, and I'm talking about per day, a couple of million dollars. Okay, that's one kind of fraud. Financial fraud would be another one, because we are the world's largest online payment transaction country. We contribute 50% of digital payments, and that's…

by NPCI data, globally accepted, and it is done free of cost in this country. We call it UPI; most Indian people would know about it. And imagine the innovation that takes place in fraud when UPI transactions are happening: I do a transaction from my mobile to your mobile in milliseconds, and that data is in the hundreds of millions. Preventing this fraud is where AI is getting used. Now, if I'm putting in a couple of hundred million dollars over five years as an initial investment, think of the economic benefit and the money I'm giving back to the citizens by not having these frauds.

And the same goes for each of these applications. I have another good example: we have 22 official languages spoken in 500 dialects in the country, plus unofficial languages, so in total we have over 100 languages. The Government of India has an application called Bhashini which does translation, ASR, and TTS across the different languages of India. The central and state governments run about 10,000 websites; we have only touched 1,000, and we are already hitting 100 million requests per hour. In simple terms, this translates to roughly 2 megawatts of data-center consumption. If in 2 megawatts I'm able to cater to 100 million requests, that's massive.

Peter Panfil

Yeah, look at the productivity improvement it's bringing to the nation, and this is just a thousand of those Government of India websites. So, we started with speed at scale. That's where we began. It's not just the scale of the data center environment; it's the scale of the applications and the benefit they're going to bring when they get fully populated.

Jigar Halani

Absolutely.

Peter Panfil

So I’m going to put you on the spot. How much do you think, on the journey right now, where are we? Are we at 3%, 5%, 10 %? I will tell you, I cannot wait for AI to take every mundane task I have to do in every day of my life and just do it. Okay? And then once those mundane tasks are out of the way, I can use every gray cell up here for productive work.

Jigar Halani

Absolutely. Absolutely.

Peter Panfil

So where do you think we are in terms of that scale?

Jigar Halani

But you touched on a good point, how Meta calls it personalized AI for everyone. So we are getting there, right? But in terms of data, I think even the Minister made that point yesterday when he was speaking at the inaugural. He gave a nice statistic: we in India generate 20% of the world's data, while the data-center capacity the country has today is 3% of the world's data-center capacity. Which means that even if I don't assume data generation speeds up over the next three to five years, even if I keep the data share at 20%, and we are a young population, so we are bound to generate more data, and ours is the cheapest 5G data rate in the world, but assume that we don't generate much more data and we restrict it.

We still have a long, long way to go in building large-scale data centers just to make sure we process our own data ourselves. And that's where the whole theme of sovereignty that the government is talking about comes in: at least let's protect our data. The critical data matters most, not the general data. And that's where gigascale is more important.
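The Minister's statistic that Jigar cites (20% of the world's data, 3% of its data-centre capacity) implies a capacity gap that is easy to quantify:

```python
# Share of world data generated in India vs. share of world DC capacity hosted there,
# per the figures quoted in the session.
data_share = 0.20
capacity_share = 0.03

# Capacity would have to grow roughly 6.7x just to match today's data share,
# before assuming any growth in data generation.
gap = data_share / capacity_share
print(f"capacity shortfall: ~{gap:.1f}x")  # ~6.7x
```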

Peter Panfil

But I don’t look at the sovereign data center approaches so much as a protection. I look at it as, where’s the most efficient place to process the data? It’s where the data is generated. The most efficient and effective place to process data is at the source of the data. Absolutely. And we are limited by energy. So we want to protect that layer as much as possible. And so I see a world where the data gets generated, it gets processed as closely and as quickly after the generation of the data. It’s used to further improve the performance and generation of subsequent data. So that data gets cleaned up as it goes. It gets more refined and more accurate.

We all know we make good decisions with good data. We know that. We make bad decisions with bad data. So the real issue here is we’ve got to take the data, and I’d like to say our data is clean now, but it’s not. Okay? I mean, give the audience an idea: when a model is being put together, how much of the time actually goes into cleaning and pre-processing the data, and how much of it actually goes into the language model itself?

Jigar Halani

So, just to build on that, because India just announced 10 of their foundation models: cleaning the data is typically a three- to six-month journey on thousands of GPUs for the kind of language model we are trying to build. If it’s a specific model for a particular task or vertical, and if the data is messier, with more videos and images and so on, it could take even longer.

Peter Panfil

Got it.

Jigar Halani

Right? And then comes the foundation model building itself and, you know, the training and convergence of the model. That’s another 6 to 12 months of journey, depending on the model size and type you are trying to build.

Peter Panfil

So a third of the time to realize my large language model could go into cleaning the data.

Jigar Halani

That’s correct.
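As a quick sanity check on the "a third of the time" estimate, here is the arithmetic using the midpoints of the ranges given on stage (3 to 6 months of cleaning, 6 to 12 months of model building); the numbers are purely illustrative, not project data:

```python
# Sanity check on the "a third of the time" estimate, using the midpoints
# of the ranges mentioned on stage (3-6 months cleaning, 6-12 months
# foundation model building). Purely illustrative arithmetic.

cleaning_months = (3 + 6) / 2    # data cleaning and pre-processing
building_months = (6 + 12) / 2   # foundation model building and training

total_months = cleaning_months + building_months
cleaning_share = cleaning_months / total_months

print(f"{total_months:.1f} months total; cleaning is {cleaning_share:.0%} of it")
# -> 13.5 months total; cleaning is 33% of it
```

At the midpoints, cleaning works out to exactly one third of the end-to-end timeline, which matches the estimate in the exchange above.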

Peter Panfil

Processing that data. Now, once it’s there, I’ve got a solid foundation of data to use for future models.

Jigar Halani

That’s correct.

Peter Panfil

Okay. So, again, if we’re talking about it in terms of percentage, are we 5% there? Are we 10% there?

Jigar Halani

So I would not put a percentage on it. The reason is that it depends on what type of model we are trying to build. If it’s a language model, I’ll speak specifically about India; I won’t comment on other countries, because it depends on where they are in their data-building journey. But India, in my view, has already nailed the data creation for what we call a small to mid-sized model. And it’s going to be made open source as well, as has been announced. So I will not claim that we have a very large data set for a very large model, but for a small to mid-sized one, I think in the last one to one and a half years, thanks to the India AI Mission, we have been able to generate a pretty good amount of data, and pretty amazingly clean data.

Peter Panfil

Perfect. Alright. So, I’m getting a hook from the guys in the front row. Okay. Yeah. I run long. I’ll always run long.

Jigar Halani

Sorry, I’m pausing you there, and I want to diverge a little bit, asking as an Indian first. What is it that Vertiv is trying to contribute in this journey, let’s say for India to begin with, and for the globe as well, in the building blocks of these gigawatt-scale data centers that we are trying to build? If you can throw some light on that.

Peter Panfil

Sure.

Jigar Halani

I know it’s a little bit of a silly question.

Peter Panfil

No, it’s not a silly question.

Jigar Halani

No, no, it’s not. We want to push manufacturing. We want to push the India ecosystem to be as indigenized and self-reliant as possible. I want to know what Vertiv is trying to do.

Peter Panfil

So Vertiv is investing in people, in process, in production capacity.

Jigar Halani

Amazing.

Peter Panfil

Our goal is to build as much of the critical infrastructure here in India as we possibly can. And it starts with working with our partners and our customers, on pilots first and then production. And in that production, you’re going to benefit. I will just tell you, India: you’re going to benefit from the mistakes that have been made in other regions over the last 12 to 18 months. You’re going to be able to jump right past them, all right? All right, so here’s the sum-up. I asked you earlier what you thought the audience should get out of this discussion. What should they have heard from us that you want them to keep in their minds for the rest of the day?

Jigar Halani

For the rest of the day?

Peter Panfil

Rest of the day.

Jigar Halani

My view is, and I know it’s going to be a mixed audience here, you should be listening for how these building blocks of AI factories, with learnings from around the globe, can be adopted fast by India. Followed by what’s happening in the model world, because that is the fastest-moving and most fascinating work happening anywhere; it is changing the world so fast. Followed by how these models are getting deployed, and what applications are changing our world on a day-to-day basis. And fundamentally, businesses are being challenged on how they have operated for decades or centuries, right?

Versus how they could do that business today. If I were you in the audience, and that’s what I’m trying to do as well, constantly learning from this conference: what have the people who have done this at scale learned that I can deploy back in my country, in my profession, in my day-to-day life? That’s what I’m trying to do, and that’s what I would recommend everyone else do as well.

Peter Panfil

Perfect. So let me add on top of that. It’s scale at speed. And it’s not just speed of build; it’s speed of compute, it’s speed of adoption. Yes. Second, stop thinking grid to chip and start thinking chip to grid, and let the chip help us define what that critical infrastructure needs to look like. And the third is, we’re going to make it as sustainable as we possibly can, because a watt that I don’t waste is one I don’t have to generate, transmit, or reject. Alright, I think you’re up next. Any questions? Do we have time to take questions? Okay, we have one hand up. She’s going to run a mic over to you.

Yes.

Audience

Hi, my name is Ani. I have a question. As I can see…

Peter Panfil

Use your outside voice. That’s what my family always says.

Audience

As I can see, AI is everywhere, and today’s era is totally about AI. As you also said, every industry, company, and education sector is using AI. So the day is not far off when humans are totally dependent on AI, and AI has a subconscious and thinks as humans do. Is there any chance that humans and AI end up in the same niche?

Peter Panfil

So I think that early on, AI got a bad rap: the computers were going to take over and blow up the earth. That’s not what we’re finding. What we’re finding is that AI makes our life better every single day. I know that traffic systems in the city I’m in now use AI to look at traffic congestion and traffic patterning, and they actually time the lights to improve the throughput on particular roads at particular times of day. Now, that’s where AI is going to really benefit society. It’s going to benefit it in transportation, in medicine, in research. I’m not so worried about the data being used for evil.

I’m really excited about the data being used for good because that’s where I think we’re going to get the most benefit.

Audience

True. But what if AI gets its own subconscious? Then it doesn’t need humans to act.

Jigar Halani

I hope you get to see that day. Somebody told me the same thing when I started my journey with the phone: that’s what is going to happen, you will lose touch with your family, you will always be busy with the phone. And I don’t think we have even scratched the surface of that level, even after having this phone with me for 20 years.

Peter Panfil

Here’s the example I like to give. Do you think about breathing and blinking? No. You do them automatically. So let’s let AI take those autonomous functions and do them for you automatically, so that you don’t have to think about them. And if I don’t have to think about breathing and blinking, then all of a sudden I can use my brain matter to do other things. So many things. So I look at it as something that’s going to free us, free us from the mundane tasks, like breathing and blinking. Come on, you’re laughing at me. But do you think about breathing? No. You only think about breathing when you’re trying to hold your breath.

Okay? So I think what’s going to happen is AI is going to become to us like breathing and blinking. It’s going to become an autonomous function that just runs in the background of our lives constantly and makes it better. It’s going to learn what we do and how we do it and how to improve that performance and give us more freedom to do what we really should be doing, and that is making the world better.

Audience

Thank you.

Peter Panfil

Thank you. That’s a good question. I’m glad you asked that question. We have one more. We have one more? Yeah. Hey, hi. We’re going to that side. Hello. Big one.

Audience

This is Shlom. I was watching an interview with Mr. Jensen Huang from NVIDIA, and he explained AI as a five-layer stack: energy, chips, infrastructure, models, and applications. He also explained how the US and China are working on different layers, and how they are many years ahead of us in different layers. Which layer do you think India can excel in, or match them in, over the coming years?

Jigar Halani

So, I think we are already doing that, right? It’s a great question. When we talk about sovereignty, these are the layers in which we should be sovereign, essentially. We cannot be importing energy from anybody; we need to generate it ourselves. Otherwise, how will we run these lights and so many other functions, and how will we power these data centers? So, the good news: the Minister, sorry, not the Prime Minister, put it so nicely yesterday in his keynote. He explained this five-layer cake once again, and I am proud that he made the statement we all know: half of the energy we generate today is green energy, right?

So, that layer is getting sorted. And you and I have a lesson to learn: companies have to contribute more, with solar, hydro, wind, and other methods, right? Where NVIDIA is trying to contribute to the nation today is on the top three layers. We are helping the nation build AI factories with all the learnings that Peter also mentioned. You don’t have to repeat all the mistakes of the last 18 months that we went through in other regions, because they were ahead; India was delayed by at least 12 months or so. But we have put in all those learnings, and the factories have come up far faster than anywhere else in the world, right?

By all means. The next piece is the serving layer, where you build these applications: how do you do inferencing? You’ll be surprised to hear that Indian cloud providers never had a control plane, right? We were dependent on other nations to give us a control plane to run the cloud inferencing stack. NVIDIA has open-sourced that work and shared it with the Government of India, and that was the announcement that Sarvam made, with the product named, how do we call it, Prava, if I’m not mistaken; I hope I’m pronouncing it correctly. That layer is now completely owned by the Government of India and an Indian company, to do the entire inferencing locally, right?

And the last piece is the application layer, right? I’m sure you have visited the booths downstairs in Hall 5. I don’t think we have left out any booth; every booth is powered by the NVIDIA open-source stack that we have given them to build agentic AI platforms and foundation models. That’s the contribution we have made for the nation. And India is right there. I think what’s missing, and I will fully agree with that, is our own chips, right? And that’s the autonomy that every country is trying to drive. I’m again proud to say that NVIDIA is fabless: we don’t produce chips ourselves; we outsource that to Taiwan and a few other countries, essentially, right?

We have opened up partnerships in many countries, and we are very open to partnering with India as well, sharing our technology so that they can do the modifications and the manufacturing themselves. That’s the last piece which is left, and I’m very confident that with this Semicon mission, this is going to happen very soon, whether with NVIDIA or with somebody else.

Audience

Thank you so much. We’ll have to get into our next session shortly. In the past we had 10 megawatt, 12 megawatt data centers, and today we have…

Peter Panfil

Gigawatts. Gigawatts, baby. Gigawatts. Gigawatts.

Audience

I just wanted to share one important piece of information. It took about 8 years to build the first 5 gigawatts, and another 10 gigawatts is going to happen next year. So look at the speed and scale. We both have to work together. And as Jigar rightly mentioned, all 5 layers will have tremendous opportunity: energy, infrastructure, compute, models, applications, and so on. A huge amount of resources required, a huge amount of support required, and a very exciting time ahead. Thank you so much.

Peter Panfil

And it’s going to be a system approach. System. Systems. Think systems. We as an industry have thought in boxes for too long. We think, I’ve got this compute box or that compute box. It’s now a system. It’s a platform. And that platform generates tokens. The new measure should be tokens per watt per dollar.
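A minimal sketch of the metric Peter proposes here, tokens per watt per dollar; both platforms and all figures below are invented for illustration, not vendor data:

```python
# Hypothetical comparison of two platforms on Peter's proposed metric,
# tokens per watt per dollar. All figures are invented for illustration.

def tokens_per_watt_per_dollar(tokens_per_sec, watts, capex_dollars):
    """Throughput normalized by both power draw and capital cost."""
    return tokens_per_sec / (watts * capex_dollars)

# Platform B is faster, but it draws more power and costs more capital.
a = tokens_per_watt_per_dollar(tokens_per_sec=50_000, watts=100_000, capex_dollars=40e6)
b = tokens_per_watt_per_dollar(tokens_per_sec=120_000, watts=180_000, capex_dollars=90e6)

print(f"A: {a:.2e}  B: {b:.2e}  winner: {'A' if a > b else 'B'}")
```

With these made-up numbers, the slower platform A wins, which is the point of the metric: raw token throughput alone does not decide the question once power and capital enter the denominator.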

Jigar Halani

Absolutely. Absolutely. Very well said. Thank you so much.

Moderator

He’s one of the guiding leaders implementing large-scale data centers for Vertiv and the entire ecosystem. Let me welcome Srikanth on the stage. A good round of applause for Srikanth. And another gentleman we have from Vertiv, with about 35 years of experience in leadership roles in Europe, the Middle East, Africa, India, Southeast Asia, you name the region; he’s been there for many years. His name is Sanjay Sainani. He joined as Senior Vice President, Technical Business Development. He’s the one strategizing all the technical and business development strategies for Vertiv. Let me welcome Sanjay on the stage. A good round of applause for Sanjay. And I’ll be asking some questions on your behalf.

I would also open the floor maybe sometime later. Welcome. Yeah, am I audible? Okay, so let me start with you, Srikanth, last question first. What is the one learning you want to give the audience from your experience of implementing large-scale AI factories? That was going to be my last question, but I want to ask it first. One piece of advice, out of your experience, because you already have good hands-on implementation experience. From a sustainability and implementation standpoint, what is one learning you want to give us in India when we’re building factories at that scale?

Srikanth Cherukuri

Yeah, it’s an interesting question, right? Like when… One year back or one and a half year back, I came to India to review some data centers. And when I was asked to do that, one of the first things that crossed my mind is, wow, India is building data centers at scale? Because when we were growing up, power used to be a big issue. The reliability of the power used to be a big issue. The availability of power used to be an issue. And when I came here, I was amazed at how far, you know, I have been away from the ecosystem for a little bit, but I was amazed at how far things have come in terms of availability of power and the reliability of that power.

And the second thing I was amazed at is the knowledge here in the ecosystem, everything from safety to speed-of-light construction; the product ecosystem has come such a long way. I think the next step, in terms of where India is going in this AI factory build-out: if you look at the U.S., it’s a little further ahead in terms of gigawatt scale and deploying high-density, liquid-cooled racks. There’s a lot more experience over there, and I think our combined companies have created that experience. I’ve been working with Vertiv for the last four to five years in the R&D work, the engineering work, and then eventually the deployment work.

So we have actually matured a lot in what we consider AI factories versus data centers, and there is a lot of advantage for India in drawing from that experience, our combined knowledge pool. Again, it’s the same companies: whether you go to Europe or the US or India, it’s still Vertiv and NVIDIA. There has to be strong cross-pollination between the ecosystem in the US and here, strong knowledge sharing. We are in year two or year three of this AI factory build-out worldwide, and as India picks up pace in this journey, there’s a huge opportunity not to relearn all those lessons the hard way, but instead to share that knowledge across our combined teams and build much faster here.

Moderator

That’s right; as thought leaders, both sides need to do that, and we need to equip the market for those kinds of things. And let me also tell you, on Vertiv’s side, whatever innovations we are doing in the US, we are bringing to India in real time, so that there’s no latency here; whatever is going to happen in the US, we want to bring to India. That takes me to our next question, for Sanjay. Sanjay, we have heard about speed, and Peter and Jigar spoke some time back about speed at scale. What is your thought process about speed at scale, about ramping up infrastructure at that speed?

Sanjay Kumar Sainani

I mean, most of us who are in the space of mission-critical applications, and within IT, dealing with semiconductors, know Moore’s law, which gave pretty much a 10x every couple of generations in terms of performance. And while performance was 10x, the energy required to reach that performance was probably 2-2.5x every generation. So you were getting amazing efficiency: a 10x performance gain for roughly 2-2.5x additional energy usage. That’s what you saw for the past many, many decades, and we all thought that Moore’s law had reached a plateau, that there wasn’t much happening… and this is where companies like NVIDIA, working with the rest of the semiconductor ecosystem, came up with multi-tiered chip structures.

When you look at some of today’s chipsets, they are like three-story, four-story, six-story buildings: if you looked under a microscope, there are layers and layers of transistors, billions of transistors layered together. And the innovation that has kick-started now is again retracing Moore’s law. So if you look at what NVIDIA is announcing in each new generation of chipset, there’s a humongous performance improvement every generation. While the performance gain is 10x, 20x, 50x, the energy consumption is also jumping, not by 10x, but by 2x, 2.5x. So, as Jigar and Peter mentioned a little while ago, you have the current generation.

The current generation is at 130, 140 kilowatts per cabinet, while the next one is 250, 260, and the one down the road is 400, 500 kilowatts per rack. And while I don’t want to give away too much, the one-megawatt rack is not too far away; people are already testing it. So now think about it: one megawatt in a rack. A few years ago, the whole data center was one megawatt. The white space would have 200 racks of five kilowatts each, and you had generators, chillers, and transformers supporting that one megawatt. The white space was 80% of your footprint; the rest of the stuff was 20 to 30%. Now this has flipped. You have only one cabinet, but you still need all of that.

You still need one megawatt’s worth of power, generators, chillers, transformers, everything. So in that context, we are innovating at tremendous speed; whatever you invest in today is outdated two years down the road. That’s challenge number one. The second challenge is that it costs a lot of money. Jigar mentioned the cost of a data center may be a billion dollars; or let’s make the numbers a bit more modest, $100 million. But the GPUs sitting inside are probably worth $2 billion. So if I place an order today for $2 billion of GPUs, I want to monetize that investment very, very quickly. In the old days, we used to build a home, in India and in most other developing countries, with people carrying bricks on their heads.

It takes two years to build a home that way. As a homeowner, you don’t see that as a problem; you’re trying to save $5 here and $2 there. You’d rather have a person carrying bricks on their head than bring in a cement mixer, because you think you’re saving money. In this world, you’re losing money, because the money you are spending is still going to be about the same, maybe 10% cheaper, but your return will only start after two or three years, because you will monetize that investment only when you turn on the switch. Only when the tokens start flowing do you make money on your investment. So it’s speed to token.

Whether you spend $100 million or $1 billion, you need to spend it fast and get the factory up and running very fast, so that the tokens come out very fast and you get your return on the capital employed. If anyone here is from the finance industry: ROCE, return on capital employed, is a seriously important KPI for this money. So that’s speed. And the third is scale. The demand is so heavy. Jigar and Peter in their conversation talked about a few areas with high-value applications. Think of agentic AI and what it can do for you, in how many areas of your daily life it can affect you. The scales are crazy.

And so not only do we need to work on the degree of difficulty in terms of density, we need to deploy it tomorrow morning, and we want to deploy it at massive scale. That’s the kind of problem statement, or opportunity, that we have.
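The speed-to-token economics described above can be put in rough numbers. In this sketch, only the $2 billion GPU figure and the "10% cheaper" slow-build saving come from the discussion; the 40% annual token-revenue yield on that capital is an invented assumption for illustration:

```python
# Rough sketch of the speed-to-token economics: the $2B GPU figure and the
# "10% cheaper but slower build" come from the discussion; the 40% annual
# token-revenue yield on the GPU capital is an invented assumption.

def forgone_revenue(capex, annual_yield, delay_months):
    """Revenue never earned while the GPU capital sits idle during a delay."""
    return capex * annual_yield * (delay_months / 12)

gpu_capex = 2e9                   # $2B of GPUs waiting on the facility
slow_build_saving = 0.10 * 100e6  # save 10% on a $100M facility build

lost_per_year = forgone_revenue(gpu_capex, annual_yield=0.40, delay_months=12)
print(f"saved ${slow_build_saving / 1e6:.0f}M on construction, "
      f"forgone ${lost_per_year / 1e6:.0f}M per year of delay")
```

Under these assumptions, the slow build saves $10M while each extra year of delay forgoes $800M, which is the asymmetry behind "speed to token": the construction saving is noise next to the idle-capital cost.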

Moderator

So, Sanjay, you’ve made the case for speed at scale, because every week or month saved to deployment means faster go-to-market, right? And generally, to achieve speed at scale, you also have to design for scale, and that’s where the blueprint discussion starts. Now, Srikanth, the blueprint starts from the GPU cluster architecture. What is your thought process? When you design for scale, you first have to decide which GPU you are going with today and then scale for that. Why does the blueprint of any data center have to start with the GPU cluster?

Srikanth Cherukuri

Could you repeat the last part again?

Moderator

Okay. When we have to deliver speed at scale, we have to design for that scale, and that’s where the blueprint starts: the GPU is the first thing we need to start with. Why is that?

Srikanth Cherukuri

Yeah, I think there are a couple of things, right? When we first started designing, in the early phase of AI factories, we were relying on general-purpose data centers and rapidly changing them into something that wasn’t even really an AI factory; we were trying to figure out how to make it work, right? It was not designed at scale; these were not purpose-built designs. But the moment came upon us so quickly. NVIDIA and Vertiv together foresaw that moment. We didn’t foresee the scale; we foresaw the moment. And we went very quickly from 10 megawatts to, now we’re talking about gigawatts.

And infrastructure doesn’t move at that speed. The design can move at that speed, but someone has to actually build out the AI factory. Someone has to build out the data centers; we have to make so many CDUs. So we were in a phase where we made it work, but in a "we had to make it work" way, right? If we had to do it all over again, that’s not how we would do it. So now we have a moment where we say: okay, if we were to do it the right way, now we know what the future looks like. That’s why we’ve redefined the data center as an AI factory, which is fully integrated, where you go from chip design to system design to the liquid cooling design and the power design.

In fact, even the shell and the campus are purpose-built as an AI factory. So we have to start thinking in terms of design as well as manufacturing, delivery, and operation; we have to start thinking about it at that scale. And I think we’ve already started doing that at the design level. NVIDIA has a DSX reference design now, which is actually based on Vertiv’s, you know, SmartRun products and large-scale CDUs. So now we have to start deploying at that scale. That is one of the things NBIS is focused on: how do we deploy it at the speed of light?

Everything from logistics to operations is being redefined. So that’s why you have to think of it as an end-to-end integrated product.

Moderator

So you say we have to design for the future; that means every design we do has to be future-proof. What are two important ingredients you want to suggest to our audience when you talk about future-proofing from a design standpoint?

Srikanth Cherukuri

Yeah, I think the biggest one, which I still have to repeat sometimes because it hasn’t caught on: Jigar and others have spoken so much about rack density, but we have to stop thinking about rack density and start thinking about row-level and data-hall-level density. Right now we are almost retrofitting the entire footprint, slowly, to match an AI factory design. We will not be doing that generation to generation; that’s just very expensive. If we keep changing the technology, you’re not only spending a lot on building it, you’re spending a lot on retrofitting it, and we don’t want that, because it eats into the ROI. So we have to stop the mindset of "I’m at 30 kilowatts today, I’ll do something for 40 tomorrow, something else for 100, something completely different for 200 or 1 megawatt." We have to start thinking in bounding boxes, data-hall-level or row-level bounding boxes, and that’s what our latest reference designs do: look at the entire pod as one big block. Don’t keep changing the technology; optimize it with a future-proof mindset. Will this work for that one-megawatt rack? And today, with digital twins, you don’t need to actually build it to find out; you can simulate it. So that’s number one: take that bounding-box mentality and map it technology-wise, right up from the chip to the utility; this redundancy for compute, this redundancy for network. Have that cluster mindset, where you map the cluster to the power and thermal perfectly, so that every watt goes into maximizing tokens rather than into redundancy and the old-school way of thinking. I think if you combine both of those elements, you get a future-proof data center. Again, it’s what the hyperscalers have mastered over the last 10-15 years.

Again, we pretend like AI is the first time we’re doing an infrastructure build-out, but it’s not. The hyperscalers have been doing this since the late 2000s, right? They have mastered the concept of a global reference design: once you lock in that design, you stay consistent generation to generation. You build a template and you just roll it out.
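One way to read the bounding-box idea in code: hold the pod's power envelope fixed and let the rack count vary with density across GPU generations, instead of redesigning the hall each time. The 6 MW pod size is an assumed example; the per-rack densities echo figures mentioned in the panel:

```python
# Bounding-box sketch: a fixed pod power envelope absorbs rising rack
# densities by simply holding fewer racks, so the upstream utility design
# never changes. The 6 MW pod is an assumed example size; the per-rack
# densities echo figures mentioned in the panel.

POD_KW = 6_000  # fixed power/cooling envelope for the whole bounding box

for rack_kw in (130, 250, 500, 1_000):  # successive GPU generations
    racks = POD_KW // rack_kw
    print(f"{rack_kw:>5} kW/rack -> {racks:>2} racks, same pod, same utility design")
```

The same 6 MW envelope goes from 46 racks down to 6 as density rises; only the white-space fit-out changes, which is exactly the retrofit cost the bounding-box mindset is trying to contain.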

Moderator

I would like to ask the same question to you, Sanjay. From your perspective, what are two things you would offer from a design and infrastructure standpoint to make a design future-proof for at least two or three generations, which Peter spoke about?

Sanjay Kumar Sainani

I think, whether we like it or not, the speed of change in the semiconductor, IT, and AI world is very different from the speed of change in the physical world of power and cooling, and the life cycles and depreciation cycles are very different too. Compute and storage in the IT world are depreciated every three to five years, because that’s the pace of evolution; generators, chillers, transformers, and UPS batteries are depreciated on a 10- to 15-year cycle. So you have to figure out how to run two to three cycles of IT within one cycle of infrastructure. This is a requirement. If you don’t do that, then, to the point Srikanth just made, you keep on investing, and that’s not good business at all.

So how do you do that? In the cloud world, again, we mastered this. In very simple terms, how are we doing it today in the cloud space? We have a 30-megawatt data center with, say, 4 to 5 megawatts per data hall. Then we don’t worry about what’s inside the data hall; how does it matter? I have 5 megawatts of power and 5 megawatts of cooling capacity: bring whatever you want, and as long as it’s within 5 megawatts, you’re good to run. The only thing you retrofit, if there is a generational change at all, is the final mile of cabling or connectors. Now, that becomes slightly more complicated in the AI world, because your densities are much higher, and while providing power is relatively easy,

pumping a lot of air, or now a lot of liquid, is not as simple; there’s much more piping involved. In fact, I joke that the future belongs to electricians and plumbers, believe me; there’s so much plumbing now in a data center that you will need plumbers in the data center. So the only way to do it, again, as was mentioned in the previous discussion, is to look at fixed-capacity pods: a 2.4-megawatt pod, a 6-megawatt pod. Now you have a pod. It fits a certain number of GPUs of today’s generation, it has a certain power capability and liquid capability, and it’s done. Everything upstream of that, transformers, generators, utility connections, is designed for 6.2, 6.4, whatever the case.

Now let’s say the generations change over the next three years. Well, all you have to do is reconfigure the cabinets; nothing else, everything else stays the same. That’s precisely what we did in the cloud world. It took us a couple of years to figure this out, because it was all being done for the first time, but now this will definitely be the way to go going forward.
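The cycle-alignment requirement above can be checked directly from the cycle lengths cited (IT: 3 to 5 years; generators, chillers, transformers, UPS: 10 to 15 years):

```python
# Quick check of the cycle-alignment point: how many IT refresh cycles fit
# inside one infrastructure depreciation cycle, using the ranges cited on
# stage (IT: 3-5 years; generators/chillers/transformers/UPS: 10-15 years).

it_fast, it_slow = 3, 5           # years per IT refresh
infra_short, infra_long = 10, 15  # years per infrastructure cycle

fewest = infra_short // it_slow   # slow refreshes inside a short infra cycle
most = infra_long // it_fast      # fast refreshes inside a long infra cycle
print(f"{fewest} to {most} IT refreshes per infrastructure cycle")
# -> 2 to 5 IT refreshes per infrastructure cycle
```

So the cited ranges give 2 to 5 refreshes per infrastructure cycle, bracketing the "two to three cycles" the pod design has to absorb without touching the upstream plant.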

Moderator

So, Sanjay, let me bring up a very different topic now: energy efficiency. When we are talking about gigawatt scale, energy conservation is the most important piece. Now, we are a tropical country, with temperatures ranging from 10 degrees to 48 degrees. In such a span, what do you think is the right approach to improving PUE, or water usage? What best practices would you suggest to the market when it comes to energy efficiency or improving PUE? Of course, with the adoption of liquid cooling, it has anyway come down compared to what it was before. But what would be the next stage of best practices you would suggest from your experience?

Sanjay Kumar Sainani

I think PUE is, I don't know if this is the right word, but probably a very abused term in the industry. It's used so commonly, thrown around so easily, that everyone believes, well, I have a lower PUE. Well, first of all, I can give you a better PUE without doing anything: I can increase the air temperature. Suddenly your PUE is much better. You think your PUE is better, but now your server fans will speed up. The temperature is higher, so they need to move more air, and the IT load increases. But because you increased the temperature, your cooling load reduces, and you suddenly have a better calculation.

But in reality, your total power increases, which you don't realize; the PUE just looks better. So PUE is a bit of a thrown-around word, but here is how I look at it. I think the PUE in the data hall, in the white space, is the same irrespective of where you build. Because I need liquid at a certain temperature, I need air at a certain temperature, and it needs to enter the rack. The rack is doing what it is doing; it doesn't matter whether you build in Mumbai, in Singapore, in Dubai where I live, or in Timbuktu, it's exactly the same. The question is, how do you throw the heat out? Because that depends on the environment outside.
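Sainani's warning can be put in numbers. This is a sketch with purely illustrative values: raising supply-air temperature cuts the cooling load, but server fans speed up, so IT power rises and total facility power can go up even while the computed PUE goes down.

```python
# PUE = total facility power / IT power.
# Illustrative numbers only: raising air temperature games the metric.

def pue(it_kw: float, cooling_kw: float, other_kw: float) -> float:
    total = it_kw + cooling_kw + other_kw
    return total / it_kw

# Before raising temperature: normal fans, full cooling load.
before = pue(it_kw=1000, cooling_kw=400, other_kw=100)   # 1500/1000 = 1.50
# After raising temperature: fans speed up (IT load +80 kW),
# cooling load drops (-70 kW); losses unchanged.
after = pue(it_kw=1080, cooling_kw=330, other_kw=100)    # 1510/1080 ~ 1.40

print(f"PUE before: {before:.2f}, after: {after:.2f}")
# PUE "improved" from 1.50 to ~1.40, yet total power ROSE from 1500 to 1510 kW.
```

The metric improves precisely because the extra fan power lands in the denominator (IT load) rather than the overhead.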

So are you in Singapore, where it rains all the time? Are you in Iceland, where it's never more than 20 degrees at any time of the year? Or are you in Dubai, where it reaches 52 degrees in summer? At least that's what we designed for, 52 degrees. And that's where the different technologies need to be adopted: air-cooled chillers, or in some markets water-cooled chillers. One of the unique solution sets we have started to see, especially in India, is that with the way our cities are located between the latitudes, our temperature variation during the year is different. We have very hot summers and reasonably good weather in the winter.

So there are some entitlements you can get in the winter. For example, we can use chiller technologies where, during the winter months, we are able to use a bit more free cooling. And in the summer months, or during demand months, we add a bit more chiller capacity: DX technologies, compressor elements that come in and help us add that extra cooling when required. So what we could do is optimize the way we cool across the thermal cycles of the year and bring down the annual PUE, because at the highest point of temperature you will need that cooling whether you like it or not. So it's this management of PUE through thermal cycles, and some optimization through load cycles as well, because load, especially in the AI world, may not be uniform like a cloud business throughout the year, throughout the day, through every month. And so again, certain optimizations in how you use your CDUs or fan wall units to bring that energy down will help us improve the PUE.
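The annualized effect of that seasonal strategy can be sketched as an energy-weighted average across thermal bins. All bin hours and per-bin PUE values below are hypothetical, chosen only to illustrate how winter free cooling pulls the annual figure below the summer design point.

```python
# Energy-weighted annual PUE across thermal cycles (hypothetical bins).
# (hours_in_bin, pue_in_bin) for an illustrative Indian climate profile:
seasonal_bins = [
    (3000, 1.25),  # winter months: mostly free cooling
    (3000, 1.40),  # shoulder months: mixed mode
    (2760, 1.55),  # summer/demand months: full mechanical cooling
]

it_kw = 5000  # assume a roughly flat IT load for simplicity

total_energy = sum(hours * it_kw * p for hours, p in seasonal_bins)
it_energy = sum(hours * it_kw for hours, p in seasonal_bins)
annual_pue = total_energy / it_energy

print(f"Annual PUE: {annual_pue:.3f}")
# Well below the 1.55 summer design point, even though peak-day
# cooling capacity still has to be sized for the hottest hours.
```

A variable AI load would make the weighting IT-energy-dependent per bin, which is exactly the "load cycle" optimization mentioned above.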

Srikanth Cherukuri

One thing I would say about that is the design is there, right? Whether it's the water temperatures or otherwise, we're all designing to the same targets. The design is there. Where it becomes extremely manual is that we're still in the traditional mode of operation in data centers, where we have a large control room and we are optimizing for uptime and safety, safety in the sense that there's no risk of downtime. We're very risk-averse. But even if we have to do what Sanjay just suggested, which is optimize that, there is no automated way of doing it, because the chip-level telemetry doesn't talk to the data center-level telemetry. And that's what NVIDIA's reference design is looking to change today. Again, if you were to retrofit a brownfield facility, this will be harder.

But if you were to build purpose-built, and of course this is an opportunity for India if you're building an AI factory today, there is no reason why you can't integrate telemetry from the chip to the data center. There's no reason why you cannot simulate how to optimize that: simulate a typical sample workload and see how you save energy. I'm sure that simulation will tell you that you'll save a ton of energy without any human intervention.
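What closing that chip-to-facility loop could look like can be sketched very simply. This is my own illustration, not NVIDIA's reference design: a proportional controller that drives CDU pump speed from GPU coolant inlet telemetry, with all setpoints and gains assumed.

```python
# Minimal sketch of chip telemetry driving facility cooling
# (hypothetical setpoint, gain, and limits; not a vendor design).

SETPOINT_C = 45.0   # assumed target GPU coolant inlet temperature
KP = 0.05           # proportional gain: pump-speed fraction per degree C

def pump_speed(gpu_inlet_temps_c: list, base_speed: float = 0.4) -> float:
    """Return a CDU pump speed in [0.2, 1.0] driven by the hottest chip."""
    error = max(gpu_inlet_temps_c) - SETPOINT_C
    speed = base_speed + KP * error
    return min(1.0, max(0.2, speed))  # clamp to a safe operating range

print(pump_speed([41.0, 43.5, 42.0]))  # below setpoint: ease toward base speed
print(pump_speed([48.0, 51.0, 49.5]))  # hot spot detected: ramp pumps up
```

The energy saving comes from running pumps and fan walls at the minimum speed the actual silicon needs, rather than a fixed worst-case setting chosen in the control room.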

Moderator

You spoke about retrofit. So normal cloud workloads have been running at, let's say, about 5, 10, 15 kilowatts of load. When it comes to AI augmentation, or GPU augmentation, on the same platform or in the same aisle, how easy or difficult will the retrofit be? What would be your one or two tips for doing that? Specifically, if we are talking about AI optimization through telemetry, there is already an existing workload running at small to medium densities, but in that row you want to put a GPU, whether liquid-cooled or air-cooled, which means you are retrofitting some amount of passive infrastructure. How difficult or easy would that actually be?

Srikanth Cherukuri

I think, again, if you go back to that journey, even the design and the retrofit were extremely custom. Even today, at the enterprise level, it is extremely difficult. If I were an enterprise CTO looking to deploy AI compute, and I look at our experience over the last one year, I might actually be a little wary. You're looking at something very cumbersome, everywhere from design, to following local regulations for the high power and liquid cooling, to having the secondary loop built out. That could be pretty scary at the end of the day. But I think what Vertiv is doing, for example, with SmartRun, a fully integrated mechanical and electrical system that can be purpose-built for any pod size and can track our most scalable reference designs, that would be the way to go. That's why Jigar also mentioned following our reference design as closely as possible. All these innovative designs and offerings will improve adaptability to future change, is what I can say.

Moderator

My last question to you. NVIDIA seems to have some NVIDIA-ready design offerings or certification offerings. Would you like to give some insight about that? Certification programs for NVIDIA-ready data centers, or NVIDIA-ready designs.

Srikanth Cherukuri

Yeah, I think whether it's a colo, or whether it's at cloud scale, at NCP scale, what we've been doing from the beginning, just like we've been enabling other partners, is enabling a lot of colo partners to build NVIDIA-ready data centers. And that optimizes for the water temperatures we're recommending, the pod sizes we're recommending, the redundancy we're recommending, the integration between telemetry that we're recommending. So for the partners that have followed that design, we have, you know, whether it's DGX-ready or NVIDIA-ready. Now, the only thing I would encourage these partners, and also those looking to enter this vertical, is to actually do that at the speed of light, in a sense.

A lot of the data center industry is still thinking more like real estate developers, waiting, for example, with these tranches of data centers that you're purpose-building for everyone. That is the traditional way of thinking: I'm giving this space, this cage to you, and I'm going to build it out the way you want it. But you can't wait. The way the industry is operating, no one can wait for that, right? So the partners who are building purpose-built AI factories are part of, or want to be part of, that future, building at large scale. Then, whether they hand over those tranches or not, they're built on NVIDIA design, so when the customer comes, you have already built according to the specs.

Moderator

That's really insightful. Many of our colo customers will take good insights from that. With this, I would open it up to the audience for any questions.

Audience

Hi, I'm Dal Bhanushali. Thanks for the talks, this one and the previous one. We have been talking about how we will scale India in the future, but we also need to scale the talent. I wanted to get some viewpoints from your experiences: as we double capacities, you also need people to run the data centers. We need DC ops specialists. We can run the NVIDIA-optimized containers on our laptops, but those water-cooled chillers, those skills are not common and cannot be easily taught in schools today. So what's the plan? How do you think we should proceed in the future? Especially since doubling every year is a huge challenge, right?

Moderator

So I'll just take this question for a moment. At Vati, we realized this challenge much ahead of time, and we started a lot of skill development programs. The first thing is the operation and management of the infrastructure. That's something we have started in collaboration with the Indian Institute of Technology, Chennai, where we train diploma and B.Tech graduate engineers in how to manage the operation and maintenance of data centers. That's an eight- to twelve-week extensive program, off-site as well as on-site. So this is one part. And there are many other programs on the cards to develop design, engineering, and many other things.

That's what I can tell. And these programs are already available on the web; anybody can have a look and enroll. Is there anything else anybody would like to say about skill development, or any other development activity NVIDIA would like to take up given our needs as we scale so high?

Srikanth Cherukuri

I think that's a question you'd also want to take. Could you repeat the last part of the question, if you don't mind?

Moderator

So he's asking about how, as the scale goes up, a lot of resources are required, and skill development is also a big challenge. While we are taking care of the operation and management piece, developing a lot of people through colleges and engineering institutions, what are the initiatives NVIDIA is also taking to develop the skills within the ecosystem?

Srikanth Cherukuri

Yeah. I think there are a couple of things I would say. One is, as you keep going up in scale, the prefab systems that Vertiv is developing are going to be absolutely critical, because the enterprise-level difficulties I was talking about can all be solved with them. A lot of times you're waiting for the data center, waiting for the data hall to get ready, before you can deploy the compute systems. And each of them has dependencies on the others, all centered around that space, right? When you're doing off-site prefab integration, prefab manufacturing, you can do it all in parallel.

You can do it all at scale, in parallel, and then bring it all into one place. And in the meantime, you could do the testing off-site at the factory. A lot of the testing is done today in the data hall. You could avoid all that, move it all to the left by taking it outside of the data hall, and then bring it in once the data hall is ready, once the shell is built up. You could really condense that build-out.
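The schedule compression being described can be sketched as simple critical-path arithmetic. All durations below are hypothetical: the point is that prefab manufacturing and factory testing run in parallel with the shell build instead of queueing behind the finished data hall.

```python
# Rough "shift left" schedule arithmetic (durations in weeks, illustrative):
# serial on-site fit-out vs. parallel off-site prefab.

shell_build      = 40   # building shell and services
mep_fitout       = 20   # on-site mechanical/electrical fit-out (serial path)
onsite_testing   = 8    # commissioning done in the data hall (serial path)
prefab_and_test  = 24   # factory-built, factory-tested pods (parallel path)
site_integration = 6    # landing and connecting prefab modules on-site

traditional = shell_build + mep_fitout + onsite_testing
prefab      = max(shell_build, prefab_and_test) + site_integration

print(f"traditional build-out: {traditional} weeks")
print(f"prefab build-out:      {prefab} weeks")
# Prefab wins because fit-out and testing move off the critical path.
```

With these assumed numbers the saving is roughly a third of the build-out time; the real gain depends on how much of the fit-out and testing genuinely moves off the critical path.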

Sanjay Kumar Sainani

Srikanth, as you rightly say, we are taking a lot of the activity that is supposed to happen on-site and moving it off-site, by pre-engineering it, building at scale, and then deploying at the site. So that's the way forward. Any more questions? Otherwise, we can hold it here.

Factual Notes: Claims verified against the Diplo knowledge base (5)
Confirmed (high confidence)

“The session framed a shift from traditional data‑centres to purpose‑built, gigawatt‑scale AI factories that centre on GPU‑centric design and the Vertiv‑NVIDIA partnership.”

The knowledge base notes that India is planning roughly 16 GW of data-centre capacity and that this represents a fundamental shift from conventional, lower-density facilities to gigawatt-scale installations, matching the report’s description of a new AI-factory paradigm [S84] and the broader infrastructure shift discussed in [S1].

Confirmed (high confidence)

“Google is expanding its Indian data‑centre footprint, including a new large‑scale facility in Vizag, to keep data residency and AI workloads inside national borders.”

S94 details a 1‑GW hyperscale data centre being built by Google in Visakhapatnam (Vizag) as part of the ‘AI City Vizag’ initiative, confirming the existence of a new large‑scale facility aimed at localising data and AI workloads.

Confirmed (medium confidence)

“Google announced a $15 billion AI investment in India and highlighted Vizag as a future global AI hub.”

S95 reports that Google’s CEO announced a $15 bn AI push in India and described Visakhapatnam’s transformation into a global AI hub, corroborating the claim about the scale of investment and the strategic importance of Vizag.

Additional Context (medium confidence)

“India’s AI sovereignty strategy rests on data, infrastructure, and talent pillars, and the country is mobilising more than 38,000 GPUs as public infrastructure.”

S88 outlines the three-pillar sovereignty framework (data, infrastructure, talent) and S101 adds that India is deploying over 38,000 GPUs as part of its public AI infrastructure, providing additional detail to the report's sovereignty ambition.

Additional Context (low confidence)

“AI‑factory architecture should start from the GPU chip and expand outward, creating modular GPU pods that can be replicated at scale.”

S1 discusses how AI workloads now demand dramatically higher power per rack and a redesign of data‑centre infrastructure around GPUs, offering technical context that supports the reported emphasis on GPU‑centric, modular design.

External Sources (103)
S1
From KW to GW Scaling the Infrastructure of the Global AI Economy — Speakers:Audience, Moderator, Srikanth Cherukuri Speakers:Peter Panfil, Sanjay Kumar Sainani, Srikanth Cherukuri Speak…
S2
From KW to GW Scaling the Infrastructure of the Global AI Economy — – Audience- Moderator- Srikanth Cherukuri – Peter Panfil- Sanjay Kumar Sainani- Srikanth Cherukuri – Srirang Deshpande…
S3
WS #280 the DNS Trust Horizon Safeguarding Digital Identity — – **Audience** – Individual from Senegal named Yuv (role/title not specified)
S4
Building the Workforce_ AI for Viksit Bharat 2047 — -Audience- Role/Title: Professor Charu from Indian Institute of Public Administration (one identified audience member), …
S5
Nri Collaborative Session Navigating Global Cyber Threats Via Local Practices — – **Audience** – Dr. Nazar (specific role/title not clearly mentioned)
S6
From KW to GW Scaling the Infrastructure of the Global AI Economy — – Ankush Sabharwal- Sudeesh VC Nambiar
S8
Building the Next Wave of AI_ Responsible Frameworks & Standards — This panel discussion at the Global AI Summit focused on reimagining responsible AI and balancing rapid innovation with …
S9
https://app.faicon.ai/ai-impact-summit-2026/from-kw-to-gw-scaling-the-infrastructure-of-the-global-ai-economy — To discuss this, I have two friends, two industry veterans from Vertiv and NVIDIA to discuss the Fireside Chat. So we ha…
S10
From KW to GW Scaling the Infrastructure of the Global AI Economy — – Nitin Gupta- Peter Panfil – Peter Panfil- Sanjay Kumar Sainani – Peter Panfil- Srikanth Cherukuri- Sanjay Kumar Sain…
S12
From KW to GW Scaling the Infrastructure of the Global AI Economy — – Ankush Sabharwal- Jigar Halani- Nitin Gupta – Peter Panfil- Jigar Halani- Sanjay Kumar Sainani
S13
Keynote-Olivier Blum — -Moderator: Role/Title: Conference Moderator; Area of Expertise: Not mentioned -Mr. Schneider: Role/Title: Not mentione…
S14
Day 0 Event #250 Building Trust and Combatting Fraud in the Internet Ecosystem — – **Frode Sørensen** – Role/Title: Online moderator, colleague of Johannes Vallesverd, Area of Expertise: Online session…
S15
Conversation: 02 — -Moderator: Role/Title: Event moderator; Area of expertise: Not specified
S16
From KW to GW Scaling the Infrastructure of the Global AI Economy — -Srirang Deshpande- Part of strategy for India, managing Vertiv strategy and market development
S17
From KW to GW Scaling the Infrastructure of the Global AI Economy — Speakers:Srirang Deshpande, Peter Panfil, Srikanth Cherukuri
S18
From KW to GW Scaling the Infrastructure of the Global AI Economy — -Sanjay Kumar Sainani- Senior Vice President, Technical Business Development at Vertiv, 35+ years experience in leadersh…
S19
https://dig.watch/event/india-ai-impact-summit-2026/from-kw-to-gw-scaling-the-infrastructure-of-the-global-ai-economy — He’s one of the guiding principles to implement a lot of large -scale data centers for Vertiv or all the entire ecosyste…
S20
From KW to GW Scaling the Infrastructure of the Global AI Economy — He’s one of the guiding principles to implement a lot of large -scale data centers for Vertiv or all the entire ecosyste…
S21
From KW to GW Scaling the Infrastructure of the Global AI Economy — – Ankush Sabharwal- Sudeesh VC Nambiar
S22
From KW to GW Scaling the Infrastructure of the Global AI Economy — Speakers:Ankush Sabharwal, Sudeesh VC Nambiar
S23
AI Transformation in Practice_ Insights from India’s Consulting Leaders — -Audience member 5- Sudhakar Gandhey, Former Senior Director at American Express Bank, built Access Cadets Technologies …
S24
From KW to GW Scaling the Infrastructure of the Global AI Economy — -Akanksha Swarup- Moderator/Host conducting interviews and panel discussions
S26
The reality of science fiction: Behind the scenes of race and technology — ‘Every desireis an endand every endis a desirethenthe end of the worldis a desire of the worldwhat type of end do you de…
S27
IGF Retrospective – Past, Present, and Future — – **Nitin Desai** – Role/Title: Former MAG chair (approximately 5 years), chaired the working group on Internet governan…
S28
From KW to GW Scaling the Infrastructure of the Global AI Economy — Google’s Nitin Gupta reinforced this collaborative approach to sovereignty, emphasising that “sovereignty and innovation…
S29
From KW to GW Scaling the Infrastructure of the Global AI Economy — – Ankush Sabharwal- Jigar Halani- Nitin Gupta
S30
AI Impact Summit 2026: Global Ministerial Discussions on Inclusive AI Development — Namaste, Your Excellencies. Thank you so much for organizing this great event. It’s a great honor for Austria to be here…
S31
Open Forum #82 Catalyzing Equitable AI Impact the Role of International Cooperation — An audience member emphasized that human diversity extends beyond geographic, language, and cultural differences to incl…
S32
IGF 2024 Global Youth Summit — Margaret Nyambura Ndung’u: Thank you, Madam Moderator. Good morning, good afternoon, and good evening to all of the di…
S33
The Foundation of AI Democratizing Compute Data Infrastructure — So as we come to the end of our panel, with everything that’s been said, even with all the money on the table, free mone…
S34
Dynamic Coalition Collaborative Session — The speaker emphasizes that developers should adopt a user-centric approach when implementing AI systems, considering a …
S35
Leaders TalkX: ICT application to unlock the full potential of digital – Part I — Development | Sociocultural Technology complexity for development solutions Technology must address real-world problem…
S36
Need and Impact of Full Stack Sovereign AI by CoRover BharatGPT — “An interesting fact is that most of the AI models in the world work in English”[41]. “But your AI model works in Indian…
S37
Need and Impact of Full Stack Sovereign AI by CoRover BharatGPT — This discussion between Amish Devagon and Ankush Sabharwal, co-founder of a company developing Bharat GPT, focuses on In…
S38
Day 0 Event #172 Major challenges and gaps in intelligent society governance — Sam Daws: Thank you very much. It’s a great pleasure to be here today with you all. I’ve only got 10 minutes, so I’m…
S39
Smaller Footprint Bigger Impact Building Sustainable AI for the Future — Africa is one of the most energy -constrained regions. It’s also a continent where adoption is becoming very frequent. W…
S40
The Foundation of AI Democratizing Compute Data Infrastructure — Garg advocates for developing smaller, domain-specific, and niche models rather than large language models. These specia…
S41
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Giordano Albertazzi — Evidence:He explains that ‘what used to be thought of as a server at a time is becoming really an AI pod, an AI unit at …
S42
Efforts to improve energy efficiency in high-performance computing for a Sustainable Future — The demand for high-performance computing (HPC) has surged due to technological advancements like machine learning, geno…
S43
AI and Data Driving India’s Energy Transformation for Climate Solutions — Crucially, the research revealed that cooling has become a private adaptation strategy, with over 40% of comfortable res…
S44
Greening digital companies: — Data centres consume significant amounts of electricity to power servers and keep them cool. Companies ope…
S45
Welfare for All Ensuring Equitable AI in the Worlds Democracies — Skills Gap and Workforce Development: Addressing the persistent AI skills shortage through public-private partnerships, …
S46
Empowering Workers in the Age of AI — Current AI models suffer from significant bias because they are trained primarily on data from developed countries and h…
S47
Ethical governance at centre of Africa AI talks — Ghana is set tohostthe Pan African AI and Innovation Summit 2026 in Accra, reinforcing its ambition to shape Africa’s di…
S48
Leading in the Digital Era: How can the Public Sector prepare for the AI age? — Shri Sushil Pal:Thank you, Professor Jalasi, and thank you, UNESCO, for inviting me here. I must commend UNESCO on the r…
S49
AI for Good Technology That Empowers People — Brijesh Lal argues that while foundation models that solve all global problems may not be easily achievable, context-spe…
S50
From KW to GW Scaling the Infrastructure of the Global AI Economy — Sainani’s criticism of PUE as an ‘abused word’ in the industry is unexpected, as PUE is typically considered a standard …
S51
HETEROGENEOUS COMPUTE FOR DEMOCRATIZING ACCESS TO AI — Infrastructure Constraints and Resource Management: Significant focus on three critical bottlenecks – power consumption …
S52
From KW to GW Scaling the Infrastructure of the Global AI Economy — However, the discussion revealed complexities in measuring and optimising energy efficiency in AI environments. Traditio…
S53
Open Forum #64 Local AI Policy Pathways for Sustainable Digital Economies — Instead of the Silicon Valley mantra of rapid deployment followed by fixing problems later, AI development should adopt …
S54
Driving Indias AI Future Growth Innovation and Impact — Summary:The main areas of disagreement center around regulatory approach (light-touch vs. balanced frameworks), implemen…
S55
Indias AI Leap Policy to Practice with AIP2 — Explanation:This unexpected disagreement emerges around the pace of AI deployment. Fred emphasizes the dual nature of AI…
S56
Shaping the Future AI Strategies for Jobs and Economic Development — The discussion maintained an optimistic yet pragmatic tone throughout. While acknowledging significant challenges around…
S57
From KW to GW Scaling the Infrastructure of the Global AI Economy — Sovereignty and innovation must run together, not as competing choices, with Google building data centers in India while…
S58
From KW to GW Scaling the Infrastructure of the Global AI Economy — Gupta argues that sovereignty and innovation should be complementary rather than competing priorities. Google addresses …
S59
HETEROGENEOUS COMPUTE FOR DEMOCRATIZING ACCESS TO AI — This comment provides crucial context about India’s position in the global AI ecosystem, distinguishing between applicat…
S60
International Cooperation for AI & Digital Governance | IGF 2023 Networking Session #109 — Matthew Liao:Thank you, Kyung. So hi, everybody. Sorry, I couldn’t be there in person, but I’m very honored and delighte…
S61
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Giordano Albertazzi — Evidence:He explains that ‘what used to be thought of as a server at a time is becoming really an AI pod, an AI unit at …
S62
Efforts to improve energy efficiency in high-performance computing for a Sustainable Future — The demand for high-performance computing (HPC) has surged due to technological advancements like machine learning, geno…
S63
Open Forum #33 Building an International AI Cooperation Ecosystem — Dai Wei: Distinguished guests, ladies and gentlemen, good day to you all. I’m delighted to join you in this United Natio…
S64
Keynote_ 2030 – The Rise of an AI Storytelling Civilization _ India AI Impact Summit — Overall Tone:The tone is consistently optimistic, visionary, and inspirational throughout. The speaker maintains an enth…
S65
Keynote_ 2030 – The Rise of an AI Storytelling Civilization _ India AI Impact Summit — The tone is consistently optimistic, visionary, and inspirational throughout. The speaker maintains an enthusiastic and …
S66
The Global Power Shift India’s Rise in AI & Semiconductors — The discussion maintained an optimistic and forward-looking tone throughout, with speakers expressing confidence in Indi…
S67
Powering AI _ Global Leaders Session _ AI Impact Summit India Part 2 — The discussion maintained a predominantly optimistic and forward-looking tone throughout, despite acknowledging signific…
S68
AI Collaboration Across Borders_ India–Israel Innovation Roundtable — The discussion maintained a consistently positive, collaborative, and forward-looking tone throughout. It began with cer…
S69
AI as critical infrastructure for continuity in public services — The discussion maintained a collaborative and constructive tone throughout, with participants building on each other’s p…
S70
Survival Tech Harnessing AI to Manage Global Climate Extremes — The discussion maintained an optimistic and collaborative tone throughout, with participants showing enthusiasm for AI’s…
S71
AI in Mobility_ Accelerating the Next Era of Intelligent Transport — The discussion maintained a serious, urgent tone throughout, driven by the gravity of India’s road safety crisis. While …
S72
Indias AI Leap Policy to Practice with AIP2 — The discussion maintained a constructive and collaborative tone throughout, with speakers building on each other’s point…
S73
AI for Safer Workplaces & Smarter Industries Transforming Risk into Real-Time Intelligence — The discussion maintained an optimistic and collaborative tone throughout, with speakers consistently emphasizing human …
S74
WS #460 Building Digital Policy for Sustainable E Waste Management — The discussion maintained a professional, collaborative, and solution-oriented tone throughout. Speakers were constructi…
S75
Panel Discussion Data Sovereignty India AI Impact Summit — The tone was collaborative and pragmatic throughout, with panelists sharing real-world experiences and solutions rather …
S76
Emerging Markets: Resilience, Innovation, and the Future of Global Development — The tone was notably optimistic and forward-looking throughout the conversation. Panelists consistently emphasized oppor…
S77
Panel 4 – Resilient Subsea Infrastructure for Underserved Regions  — The discussion maintained a professional, collaborative tone throughout, with panelists building on each other’s insight…
S78
Safe Smart Cities and Climate Frustration — The discussion maintained a collaborative and solution-oriented tone throughout. Speakers were optimistic about the pote…
S79
Scaling Innovation Building a Robust AI Startup Ecosystem — Overall Tone:The tone was consistently celebratory, appreciative, and inspirational throughout. It began formally with t…
S80
Building the AI-Ready Future From Infrastructure to Skills — The tone was consistently optimistic and collaborative throughout, with speakers expressing excitement about AI’s potent…
S81
AI: Lifting All Boats / DAVOS 2025 — The tone was largely optimistic and solution-oriented, with speakers acknowledging challenges but focusing on opportunit…
S82
The role of standards in shaping an AI-driven future — The tone is consistently formal, authoritative, and optimistic throughout. The speaker maintains a confident and promoti…
S83
Law, Tech, Humanity, and Trust — The discussion maintained a consistently professional, collaborative, and optimistic tone throughout. The speakers demon…
S84
Indias Roadmap to an AGI-Enabled Future — Shri Ghanshyam Prasad outlined the massive energy transformation required to support India’s AI infrastructure. The coun…
S85
Powering AI _ Global Leaders Session _ AI Impact Summit India Part 2 — Gaming and best actually because the large batteries requires the same amount of cooling so the way I see innovation hap…
S86
Welcome Address — Overall Tone:The tone is consistently optimistic, visionary, and confident throughout the speech. Modi maintains an insp…
S87
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Kiran Mazumdar-Shaw — Overall Tone:The tone is consistently visionary, authoritative, and optimistic throughout. The speaker maintains an insp…
S88
Keynote ‘I’ to the Power of AI An 8-Year-Old on Aspiring India Impacting the World — India’s approach, according to the speaker, centers on three pillars of sovereignty: data sovereignty, infrastructure so…
S89
Opening address of the co-chairs of the AI Governance Dialogue — While this transcript captures only the opening remarks of the AI Governance Dialogue, the key comments identified estab…
S90
Any other business /Adoption of the report/ Closure of the session — Despite the hard work acknowledged, the Dominican Republic’s representative noted that their delegation’s expectations w…
S91
WS #232 Innovative Approaches to Teaching AI Fairness & Governance — It set the tone for discussing AI fairness as a dynamic, evolving issue rather than a static technical problem. This fra…
S92
(Day 1) General Debate – General Assembly, 79th session: morning session — These key comments shaped the discussion by highlighting critical challenges to the current international order, particu…
S93
Day 0 Event #260 Securing Basic Internet Infrastructure — National sovereignty or autonomy and international cooperation are not mutually exclusive but must operate in parallel. …
S94
‘AI City Vizag’ moves ahead with ₹80,000-crore Google hyperscale campus in India — Andhra Pradesh will sign an agreement with Google on Tuesday for a 1-gigawatt hyperscale data centre in Visakhapatnam. Off…
S95
Google plans $15bn AI push in India — Google CEO Sundar Pichai said at the India AI Impact Summit 2026 in New Delhi that he never imagined Visakhapatnam would b…
S96
Private AI Compute by Google blends cloud power with on-device privacy — Google introduced Private AI Compute, a cloud platform that combines the power of Gemini with on-device privacy. It delive…
S97
Google boosts AI coding and video skills with Gemini 2.5 Pro — Google has unveiled Gemini 2.5 Pro Preview (I/O edition), its latest AI model update, ahead of the annual I/O developer co…
S98
Google opens Gemini Nano AI to Android developers — Google’s Gemini Nano, a powerful on-device AI model, is now available for developers to integrate into their apps through …
S99
Gemini Robotics On-Device: Google’s AI model for offline robotic tasks — On Tuesday, 24 June, Google’s DeepMind division announced the release of a new large language model named Gemini Robotics…
S100
https://dig.watch/event/india-ai-impact-summit-2026/regulating-open-data_-principles-challenges-and-opportunities — Digital ecosystems simply do not function in silos. However, enabling data to move across borders should not mean that c…
S101
https://app.faicon.ai/ai-impact-summit-2026/building-public-interest-ai-catalytic-funding-for-equitable-compute-access — And here, India is not waiting for permission. India is not waiting for permission. India is showing that it can be done…
S102
Meta and Ray-Ban launch smart glasses in the UAE — Meta Platforms, Inc. and EssilorLuxottica have officially launched the Ray-Ban Meta smart glasses in the United Arab Emira…
S103
Ad Hoc Consultation: Friday 2nd February, Morning session — There is an implication that no inherent right should exist to possess or disseminate such materials, signalling the nee…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
S
Srikanth Cherukuri
3 arguments · 171 words per minute · 2202 words · 770 seconds
Argument 1
Future‑proof AI‑factory design must start from the GPU cluster and consider row‑ or data‑hall‑level density rather than individual rack density.
EXPLANATION
Srikanth argues that designing AI data centers around the GPU from the outset avoids costly retrofits and ensures scalability as power and compute demands grow.
EVIDENCE
He explains that the industry should stop thinking about rack density and instead think about row and data-hall level density, using a bounding-box approach and reference designs that can be simulated with digital twins to avoid redesigns for each new generation of GPUs [633-658].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The shift from rack-level to pod and data-hall level density planning and the inside-out, GPU-first design approach are described in [S1] and [S2].
MAJOR DISCUSSION POINT
Design from GPU upwards
DISAGREED WITH
Peter Panfil
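The row-level (rather than rack-level) density planning described here can be made concrete with a toy calculation. The rack counts and power figures below are illustrative assumptions, not values from the panel:

```python
# Toy sketch of bounding-box density planning: budget power at the
# row level so individual racks can vary as GPU generations change.
# All figures are illustrative assumptions, not values from the panel.

def row_power_budget_kw(racks_per_row: int, avg_rack_kw: float) -> float:
    """Power bounding box for one row, independent of per-rack mix."""
    return racks_per_row * avg_rack_kw

def fits_in_row(rack_loads_kw: list[float], budget_kw: float) -> bool:
    """A new GPU generation fits if total row draw stays in the box."""
    return sum(rack_loads_kw) <= budget_kw

budget = row_power_budget_kw(racks_per_row=10, avg_rack_kw=60)  # 600 kW row
# Denser racks, fewer of them: still inside the same bounding box.
print(fits_in_row([120] * 5, budget))  # True
```

The point of the bounding box is that the row budget stays fixed while the per-rack mix underneath it changes with each GPU generation, avoiding retrofits of the surrounding power and cooling plant.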
Argument 2
Integrating chip‑level telemetry with data‑center telemetry is essential for automated energy‑efficiency optimisation.
EXPLANATION
Srikanth notes that current data‑center operations lack automated optimisation because chip telemetry does not communicate with facility telemetry, and NVIDIA’s reference designs aim to bridge this gap.
EVIDENCE
He points out the absence of automated optimisation due to missing chip-to-data-center telemetry integration and describes NVIDIA’s reference design that will enable such integration [739-748].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Integrated chip-to-data-center telemetry for thermal and load-cycle management is highlighted in [S2].
MAJOR DISCUSSION POINT
Telemetry integration for efficiency
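One minimal sketch of the chip-to-facility telemetry bridge described above: feed GPU-level temperature readings into the facility cooling setpoint decision. The field names and thresholds here are hypothetical, not from NVIDIA’s reference design:

```python
# Minimal sketch of chip-to-data-center telemetry integration: use
# GPU-level readings to drive a facility cooling setpoint, instead of
# the facility loop running blind. Thresholds are hypothetical.

def cooling_setpoint_c(gpu_temps_c: list[float],
                       current_setpoint_c: float,
                       target_max_c: float = 80.0) -> float:
    """Nudge the facility water setpoint from chip telemetry.

    If the hottest GPU runs well below its target, raise the setpoint
    (cheaper cooling); if it approaches the target, lower it.
    """
    hottest = max(gpu_temps_c)
    if hottest < target_max_c - 10:
        return current_setpoint_c + 1.0   # headroom: relax cooling
    if hottest > target_max_c - 2:
        return current_setpoint_c - 1.0   # near limit: tighten cooling
    return current_setpoint_c

print(cooling_setpoint_c([62.0, 65.5, 64.0], current_setpoint_c=18.0))  # 19.0
```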
Argument 3
Prefabricated, modular systems are critical to achieve speed‑at‑scale for AI factory deployment.
EXPLANATION
Srikanth stresses that off‑site prefabrication of mechanical and electrical systems allows parallel manufacturing and testing, dramatically reducing build‑out time for AI data centers.
EVIDENCE
He describes how prefabricated systems enable parallel off-site manufacturing, testing, and rapid on-site integration, thereby condensing the build-out timeline [811-818].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Prefabricated off-site manufacturing that accelerates build-out and helps address skill gaps is discussed in [S1] and [S2].
MAJOR DISCUSSION POINT
Prefabrication for rapid deployment
DISAGREED WITH
Peter Panfil
A
Audience
2 arguments · 128 words per minute · 418 words · 195 seconds
Argument 1
AI should be inclusive and not replace human agency, especially for vulnerable and rural populations.
EXPLANATION
Audience members raise concerns that AI might become autonomous and marginalise humans, and they stress the need for AI to serve under‑represented groups and bridge the digital divide.
EVIDENCE
One audience member asks whether AI and humans can coexist in the same niche and worries about AI developing its own subconsciousness, while another stresses the importance of AI reaching disadvantaged and rural users [420-425][436-437].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Inclusive AI development, the need to bridge digital divides, and user-centric design for diverse groups are emphasized in [S30], [S31], [S34] and [S35].
MAJOR DISCUSSION POINT
Inclusivity and human‑AI coexistence
Argument 2
Scaling AI infrastructure requires parallel development of skilled talent to operate and maintain data centers.
EXPLANATION
Audience highlights the challenge of rapidly expanding AI capacity without sufficient trained personnel, calling for systematic skill‑development initiatives.
EVIDENCE
Comments note the need to double capacity each year, the shortage of DC-ops specialists, and the difficulty of teaching such skills in schools, urging a plan for talent development [767-777].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The shortage of specialised DC-ops talent and the importance of systematic skill-development programmes are noted in [S1], [S2] and [S33].
MAJOR DISCUSSION POINT
Talent and skill development for AI scaling
A
Ankush Sabharwal
2 arguments · 172 words per minute · 287 words · 99 seconds
Argument 1
AI development should be purpose‑driven, beginning with a clear use‑case and problem definition.
EXPLANATION
Ankush stresses that AI projects must start by identifying the specific problem to solve, then selecting an appropriately sized model and data source, ensuring relevance and trust.
EVIDENCE
He outlines the “AI with purpose and trust” tagline, describes beginning with the end in mind, defining the use-case, choosing model size, and sourcing data from partners to solve enterprise problems [28-34][36-44].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
A purpose-first, problem-driven approach and user-centric design are advocated in [S35], [S34] and [S8].
MAJOR DISCUSSION POINT
Purpose‑first AI approach
Argument 2
Bharat GPT is a family of domain‑specific models tailored for Indian enterprises rather than a consumer‑facing large language model.
EXPLANATION
Ankush explains that Bharat GPT is designed to be trained on partner data for specific sectors like railways, allowing enterprises to leverage AI without building generic consumer models.
EVIDENCE
He states that Bharat GPT is not a large consumer LLM, works with partners to train on domain data, and is aimed at solving enterprise problems such as those faced by IRCTC [36-43][44-46].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
BharatGPT’s focus on Indian languages and domain-specific enterprise partners is described in [S36] and reiterated in [S1].
MAJOR DISCUSSION POINT
Enterprise‑focused Indian AI models
DISAGREED WITH
Sudeesh VC Nambiar
P
Peter Panfil
4 arguments · 139 words per minute · 2977 words · 1275 seconds
Argument 1
AI factories must achieve speed at scale through rapid deployment of GPU pods and compute resources.
EXPLANATION
Peter argues that the competitive advantage lies in quickly building GPU infrastructure and scaling it, rather than merely increasing raw compute power.
EVIDENCE
He highlights “speed at scale,” the need for fast GPU structure deployment, pod designs that can be replicated, and the importance of accelerating both build-out and compute deployment [107-121][124-130][254-259].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The need for fast, repeatable GPU pod deployment and “speed at scale” is highlighted in [S1] and the pod-design supporting multiple GPU generations is detailed in [S2].
MAJOR DISCUSSION POINT
Speed‑at‑scale deployment
DISAGREED WITH
Srikanth Cherukuri
Argument 2
Design of AI infrastructure should start from the chip (GPU) and work outward to define power, cooling and layout requirements.
EXPLANATION
Peter proposes a chip‑first design philosophy, where the GPU dictates the most efficient and economical architecture for the data center.
EVIDENCE
He states “start at the GPU… start at the chip… define the most economical, most efficient, fastest from a compute perspective, then deploy as a pod” [118-124][119-124].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
A chip-first, inside-out design philosophy that drives power, cooling and layout decisions is presented in [S1] and [S2].
MAJOR DISCUSSION POINT
Chip‑first design philosophy
DISAGREED WITH
Srikanth Cherukuri
Argument 3
Sustainability in AI factories is achieved by maximising token generation per watt and minimising wasted energy.
EXPLANATION
Peter emphasizes that every saved watt translates into more AI output (tokens), advocating for highly efficient designs that reduce power waste.
EVIDENCE
He discusses saving watts, delivering power efficiently to GPUs, and measuring success as tokens per watt per dollar, linking energy efficiency directly to AI productivity [259-267].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Measuring efficiency as tokens per watt per dollar and maximising power delivery to GPUs are discussed in [S1]; broader AI-for-climate benefits are noted in [S38].
MAJOR DISCUSSION POINT
Energy‑efficient AI output
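The tokens-per-watt-per-dollar success measure can be put into a back-of-the-envelope calculation; every number below is invented for illustration, not summit data:

```python
# Back-of-the-envelope for the "tokens per watt per dollar" measure.
# Every figure here is an illustrative assumption, not summit data.

def tokens_per_watt_per_dollar(tokens_per_s: float,
                               power_w: float,
                               cost_per_hour_usd: float) -> float:
    """Tokens generated per watt of draw per dollar of hourly cost."""
    return tokens_per_s / power_w / cost_per_hour_usd

baseline = tokens_per_watt_per_dollar(50_000, 10_000, 40.0)
# Saving watts (same tokens, same cost) directly lifts the metric:
efficient = tokens_per_watt_per_dollar(50_000, 9_000, 40.0)
print(efficient > baseline)  # True
```

This captures the argument that every saved watt translates directly into more AI output per unit of energy and money.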
Argument 4
AI can automate mundane, low‑level tasks, freeing human cognition for higher‑value activities.
EXPLANATION
Peter uses the analogy of breathing and blinking to illustrate how AI should handle routine functions, allowing people to focus on creative and productive work.
EVIDENCE
He compares AI to breathing and blinking, describing it as an autonomous background function that frees humans to use their brain for more important tasks [445-458].
MAJOR DISCUSSION POINT
AI as autonomous handling of routine tasks
J
Jigar Halani
3 arguments · 170 words per minute · 3536 words · 1246 seconds
Argument 1
AI can generate substantial economic benefits in agriculture and fraud detection by processing massive, multilingual data at scale.
EXPLANATION
Jigar illustrates how AI‑driven bots handling millions of daily calls in local languages can prevent financial fraud and improve agricultural services, delivering multi‑million‑dollar savings.
EVIDENCE
He cites a bot handling 50,000 calls per day for subsidy verification, saving millions of dollars in fraud, and AI-based fraud detection in UPI transactions, as well as language services handling 100 million requests per hour across 22 official languages [298-312][312-319].
MAJOR DISCUSSION POINT
AI for economic impact in agriculture and finance
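The throughput figures cited here (100 million language-service requests per hour) imply non-trivial capacity planning. A quick sizing sketch, where the per-instance rate is an invented assumption:

```python
# Quick capacity sizing for the cited volume of 100 million
# language-service requests per hour. The per-instance throughput
# below is an invented assumption for illustration.
import math

def instances_needed(requests_per_hour: int, per_instance_rps: float) -> int:
    """Minimum serving instances, ignoring burstiness and headroom."""
    rps = requests_per_hour / 3600
    return math.ceil(rps / per_instance_rps)

print(instances_needed(100_000_000, per_instance_rps=200))  # 139
```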
Argument 2
Robust reference designs and reliable hardware are essential to avoid costly downtime in AI factories.
EXPLANATION
Jigar warns that GPU node failures can lead to massive compute loss, emphasizing the need for well‑engineered designs that maintain SLAs and minimise financial impact.
EVIDENCE
He explains that a GPU node failure can cost thousands of dollars per hour, with checkpoint losses amounting to hundreds of thousands of dollars, underscoring the importance of reliable design and SLAs [272-281][285-291].
MAJOR DISCUSSION POINT
Minimising downtime through design
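The downtime figures can be sketched as a simple cost model; the hourly rate, outage length, and checkpoint interval below are illustrative assumptions, not numbers from the panel:

```python
# Simple cost model behind the reliability point: a failed GPU node
# costs its hourly rate for the outage, plus recomputation of all
# work since the last checkpoint. Figures are illustrative assumptions.

def failure_cost_usd(node_cost_per_hour: float,
                     outage_hours: float,
                     checkpoint_interval_hours: float,
                     nodes_in_job: int) -> float:
    """Outage cost plus expected lost work since the last checkpoint."""
    outage = node_cost_per_hour * outage_hours
    # On average half a checkpoint interval of work is lost, and the
    # whole job (all nodes) must replay it.
    lost_work = nodes_in_job * node_cost_per_hour * checkpoint_interval_hours / 2
    return outage + lost_work

# One node down for 2 h at $3/h, in a 512-node job checkpointing every 4 h:
print(round(failure_cost_usd(3.0, 2.0, 4.0, 512)))  # 3078
```

The replay term dominating the outage term is why checkpoint losses can reach hundreds of thousands of dollars even when the failed node itself is cheap.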
Argument 3
India should democratise AI access, positioning itself as a major consumer of foundation models while ensuring data sovereignty.
EXPLANATION
Jigar notes India’s large consumption of chat‑GPT‑like services, the upcoming DPDP law, and the need for sovereign processing of sensitive verticals, advocating for a consumer‑centric yet sovereign AI ecosystem.
EVIDENCE
He states that India is the largest consumer of AI models, mentions the DPDP law driving local processing of regulated data, and stresses that sovereign processing will boost India’s position in AI consumption [190-210][326-334].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Calls for inclusive, sovereign AI ecosystems and the importance of local language models for national consumption are made in [S30], [S31] and [S36].
MAJOR DISCUSSION POINT
AI democratization and data sovereignty
M
Moderator
2 arguments · 176 words per minute · 1315 words · 447 seconds
Argument 1
Future‑proof AI data‑center design requires modular reference designs that can accommodate future GPU generations without major retrofits.
EXPLANATION
The moderator highlights the need for designs that are adaptable to evolving technology, urging speakers to suggest ingredients for such future‑proofing.
EVIDENCE
He asks the panel to identify two important ingredients for future-proof design and emphasizes the importance of bounding-box and reference-design approaches [662-664][739-748].
MAJOR DISCUSSION POINT
Future‑proof design principles
Argument 2
Skill‑development programmes in partnership with academic institutions are vital to build the workforce needed for AI infrastructure deployment.
EXPLANATION
The moderator points to existing collaborations with IIT Chennai and online training modules as examples of how to address the talent gap for AI data‑center operations.
EVIDENCE
He mentions a collaboration with IIT Chennai offering 8-12-week on-site and off-site programs, as well as other online courses available to anyone interested [779-788].
MAJOR DISCUSSION POINT
Talent development for AI scaling
S
Srirang Deshpande
1 argument · 124 words per minute · 274 words · 131 seconds
Argument 1
Collaboration between Vertiv and NVIDIA is essential to meet India’s gigawatt‑scale AI infrastructure challenges.
EXPLANATION
Srirang introduces the partnership, noting that combining Vertiv’s data‑center expertise with NVIDIA’s AI hardware can address the rapid scaling needs of AI factories in India.
EVIDENCE
He explains that the two companies are planning together to tackle gigawatt-scale infrastructure challenges and introduces NVIDIA’s Jigar and Vertiv’s Peter as the panel to discuss AI factories [68-84][75-84].
MAJOR DISCUSSION POINT
Vertiv‑NVIDIA partnership for AI infrastructure
S
Sanjay Kumar Sainani
3 arguments · 170 words per minute · 2078 words · 733 seconds
Argument 1
Infrastructure must be built to span multiple IT refresh cycles, using modular pod designs that can be reconfigured for new GPU generations while keeping power and cooling systems constant.
EXPLANATION
Sanjay argues that because semiconductor cycles are faster than physical infrastructure cycles, data‑center designs should allow 2‑3 IT upgrades within a single infrastructure lifespan.
EVIDENCE
He explains the mismatch between 3-5-year IT cycles and 10-15-year infrastructure cycles, and describes modular pods that can be re-configured for new GPUs without changing the surrounding power and cooling plant [672-699].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Modular pod designs that support three successive GPU generations without changing power/cooling infrastructure are described in [S1] and [S2].
MAJOR DISCUSSION POINT
Modular pod design for multi‑cycle upgrades
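The cycle mismatch can be expressed as a one-line calculation, using the 3-5-year IT and 10-15-year infrastructure ranges cited above:

```python
# Arithmetic behind the multi-cycle point: how many IT refreshes fit
# inside one infrastructure lifespan, using the ranges cited (3-5-year
# IT cycles versus 10-15-year infrastructure cycles).

def refreshes_per_infra_life(infra_years: int, it_cycle_years: int) -> int:
    """Whole IT refresh cycles that fit in one infrastructure lifespan."""
    return infra_years // it_cycle_years

print(refreshes_per_infra_life(15, 5), refreshes_per_infra_life(10, 3))  # 3 3
```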
Argument 2
Energy efficiency can be enhanced by adapting cooling strategies to local climate conditions and optimising thermal cycles throughout the year.
EXPLANATION
Sanjay highlights that PUE can be improved by raising ambient temperature in cooler months, using free cooling in winter, and adding chillers in summer, tailoring solutions to regional climates.
EVIDENCE
He discusses raising temperature to improve PUE, using free cooling in winter, adding DX chillers in summer, and managing thermal cycles to lower annual PUE [710-738].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Thermal-cycle management, free-cooling in winter and adaptive chillers for summer are outlined in [S2].
MAJOR DISCUSSION POINT
Climate‑adapted cooling for better PUE
DISAGREED WITH
Srikanth Cherukuri
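The climate-adapted strategy above can be sketched as a cooling-mode selector over hourly ambient temperature; the thresholds and the toy temperature profile are illustrative assumptions:

```python
# Sketch of climate-adapted cooling: pick the cooling mode per hour
# from ambient temperature. Thresholds and the toy year below are
# illustrative assumptions, not figures from the panel.

def cooling_mode(ambient_c: float) -> str:
    if ambient_c <= 18:
        return "free-cooling"          # winter: economizer only
    if ambient_c <= 30:
        return "partial free-cooling"  # shoulder season
    return "chiller"                   # summer: DX/chiller assist

def annual_free_cooling_fraction(hourly_ambient_c: list[float]) -> float:
    """Share of hours served entirely by free cooling."""
    free = sum(1 for t in hourly_ambient_c if cooling_mode(t) == "free-cooling")
    return free / len(hourly_ambient_c)

# Toy year: 4000 cool hours at 12 C, 4760 warm hours at 34 C.
year = [12.0] * 4000 + [34.0] * 4760
print(round(annual_free_cooling_fraction(year), 3))  # 0.457
```

A climate with more hours below the economizer threshold yields a larger free-cooling share and hence a lower annualised PUE, which is the regional-tailoring argument in a nutshell.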
Argument 3
Comprehensive skill‑development programmes, including on‑site and off‑site training with engineering institutes, are required to create the talent pool for AI data‑center operations.
EXPLANATION
Sanjay stresses the need for structured training initiatives to develop operational, design, and engineering expertise for AI‑focused data centres.
EVIDENCE
He mentions 8-12-week programmes in collaboration with Indian institutes, both on-site and off-site, and notes that these resources are publicly available online for anyone to enroll [796-804].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Structured training collaborations with IITs and the broader need for talent in AI data-centres are highlighted in [S33], [S1] and [S2].
MAJOR DISCUSSION POINT
Training programmes for AI data‑center workforce
S
Sudeesh VC Nambiar
2 arguments · 136 words per minute · 205 words · 90 seconds
Argument 1
AI is deployed to detect and mitigate automated ticket‑booking bots on the IRCTC platform, addressing severe demand‑supply mismatches.
EXPLANATION
Sudeesh explains that peak booking times experience massive bot traffic, and AI models are used to identify and block such automated tools.
EVIDENCE
He describes the mismatch of demand and supply during peak ticket-booking windows, the misuse of automated tools, and the ongoing cat-and-mouse game where AI is employed to counter bots [16-18].
MAJOR DISCUSSION POINT
AI for ticket‑booking bot mitigation
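Purely as illustration of one classic bot signal such systems can use — attempts per account per time window — here is a minimal sketch. This is not IRCTC’s actual model; the threshold and account names are invented:

```python
# Purely illustrative sketch of one classic bot signal: booking
# attempts per account in a short window. This is NOT IRCTC's actual
# model; the threshold and account names are invented.
from collections import Counter

def flag_bots(attempts: list[str], max_per_window: int = 5) -> set[str]:
    """Flag accounts whose attempts in one window exceed a human rate."""
    counts = Counter(attempts)
    return {acct for acct, n in counts.items() if n > max_per_window}

window = ["u1"] * 2 + ["bot7"] * 40 + ["u2"] * 3
print(flag_bots(window))  # {'bot7'}
```

Real deployments layer many such signals (device fingerprints, timing patterns, CAPTCHA outcomes), which is why the text describes it as an ongoing cat-and-mouse game.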
Argument 2
An indigenous AI layer, developed in collaboration with Indian startups, is integrated with global technology to enhance data analysis and monitoring for IRCTC.
EXPLANATION
Sudeesh notes that a home‑grown AI component works alongside a global solution, with startups providing continuous social‑media monitoring and model training.
EVIDENCE
He mentions a layer of indigenous AI, a startup performing data analysis and social-media monitoring, and the overall collaboration between Indian startups and a global technology provider [21-25].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The partnership between Indian startups and global AI providers to deliver indigenous models for high-traffic services is discussed in [S36].
MAJOR DISCUSSION POINT
Indigenous‑global AI collaboration
DISAGREED WITH
Ankush Sabharwal
A
Akanksha Swarup
2 arguments · 166 words per minute · 321 words · 115 seconds
Argument 1
AI initiatives must be inclusive, ensuring that under‑privileged and rural populations have access to AI‑enabled services.
EXPLANATION
Akanksha raises concerns about digital inclusion, referencing the Prime Minister’s emphasis on inclusivity and asking how Google plans to bridge the divide.
EVIDENCE
She asks Nitin how Google is addressing inclusivity for the under-privileged and rural areas, citing the Prime Minister’s concern about digital divide [50-53][47-55].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Inclusive AI policies, the need to bridge the digital divide, and user-centric design for underserved communities are emphasized in [S30], [S31], [S34] and [S35].
MAJOR DISCUSSION POINT
Inclusivity for underserved communities
Argument 2
The use of indigenous AI models in critical Indian services like IRCTC should be clarified and expanded.
EXPLANATION
Akanksha queries whether locally developed AI models are being employed for high‑traffic platforms such as IRCTC, emphasizing the importance of home‑grown technology.
EVIDENCE
She asks directly if any indigenous models are being used for IRCTC’s peak-period AI handling and follows up on the broader question of indigenous model usage [15][20].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The importance of sovereign, locally-developed AI models for national services such as IRCTC is highlighted in [S36].
MAJOR DISCUSSION POINT
Indigenous AI for national services
N
Nitin Gupta
2 arguments · 140 words per minute · 366 words · 156 seconds
Argument 1
AI sovereignty and innovation must be pursued together; Google’s on‑premise “Data Box” enables Indian customers to keep data and AI services within national borders.
EXPLANATION
Nitin explains that Google’s indigenous Data Box provides full AI capabilities inside the customer’s premises, ensuring data residency while delivering Google Gemini services.
EVIDENCE
He describes Google’s Indian data centres, the announcement of Vizag data centres, the creation of an indigenous Data Box that stays on-premise, runs Google Gemini AI, and gives customers full hardware control [8-14].
MAJOR DISCUSSION POINT
On‑premise AI for data sovereignty
Argument 2
Google promotes digital inclusivity by offering free AI‑powered educational tools, such as Gemini mock exams for JEE preparation, to students across India.
EXPLANATION
Nitin cites Sundar Pichai’s announcement that JEE mock exams will be available on Gemini at no cost, illustrating Google’s commitment to making AI accessible to learners.
EVIDENCE
He mentions Sundar Pichai’s announcement that JEE mock exams are free on Gemini for any student, framing it as an example of inclusive AI provision [60-62].
MAJOR DISCUSSION POINT
Free AI education for inclusivity
Agreements
Agreement Points
Similar Viewpoints
Unexpected Consensus
Differences
Different Viewpoints
Emphasis on rapid deployment versus careful future‑proof design of AI factories
Speakers: Peter Panfil, Srikanth Cherukuri
AI factories must achieve speed at scale through rapid deployment of GPU pods and compute resources. Design of AI infrastructure should start from the chip (GPU) and work outward to define power, cooling and layout requirements. Future‑proof AI‑factory design must start from the GPU cluster and consider row‑ or data‑hall‑level density rather than individual rack density. Prefabricated, modular systems are critical to achieve speed‑at‑scale for AI factory deployment.
Peter stresses that the competitive edge lies in building GPU pods as fast as possible and scaling quickly, even if technology may become outdated soon [107-121][124-130]. Srikanth argues that designs should begin with the GPU cluster but must be future-proof, using row-level density planning, bounding-box approaches and reference designs to avoid costly retrofits [633-658][665-669]. Both aim to scale AI infrastructure, but differ on whether speed or long-term adaptability should be prioritized.
POLICY CONTEXT (KNOWLEDGE BASE)
The tension mirrors policy debates that invoke the precautionary principle for AI systems, as advocated in recent forums such as the Open Forum on sustainable digital economies [S53] and is reflected in national AI strategies that balance rapid rollout with safeguards, e.g., India’s AI policy discussions on light-touch versus balanced frameworks [S54] and the pace-vs-risk dialogue highlighted by experts [S55].
Methodology for improving data‑center energy efficiency (PUE) and the relevance of the metric
Speakers: Sanjay Kumar Sainani, Srikanth Cherukuri
Energy efficiency can be enhanced by adapting cooling strategies to local climate conditions and optimising thermal cycles throughout the year. PUE is a bit of an abused word; it can be gamed by raising ambient temperature, which may hide higher IT power consumption.
Sanjay claims that PUE can be artificially improved by increasing inlet temperature, but warns that this may increase IT power use, suggesting the metric is often misleading [710-719]. Srikanth points out the lack of automated optimisation because chip-level telemetry does not communicate with data-center telemetry, and promotes integrated telemetry to achieve genuine energy savings [739-748]. The disagreement centers on whether PUE is a useful metric versus the need for deeper telemetry-driven optimisation.
POLICY CONTEXT (KNOWLEDGE BASE)
Energy-efficiency guidelines for data centres often rely on PUE, yet recent analyses question its adequacy; critiques label PUE as potentially misleading when simple temperature tweaks improve scores without reducing total power use [S52], and industry voices describe it as an abused metric [S50]. Broader discussions on power-consumption bottlenecks in AI infrastructure also highlight the need for robust methodologies [S51].
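The PUE-gaming point reduces to two lines of arithmetic, since PUE is total facility power divided by IT power; the figures below are invented for illustration:

```python
# Why PUE can mislead: raising inlet temperature can cut cooling
# overhead while pushing IT fan/leakage power up, improving the ratio
# even as total power rises. Figures are invented for illustration.

def pue(it_kw: float, overhead_kw: float) -> float:
    """Power Usage Effectiveness: total facility power / IT power."""
    return (it_kw + overhead_kw) / it_kw

before = pue(it_kw=1000, overhead_kw=400)        # 1.40
after = pue(it_kw=1080, overhead_kw=350)         # IT draw rose with temperature
print(round(after, 3), 1000 + 400 < 1080 + 350)  # 1.324 True
```

The score improves (1.40 to about 1.32) while total consumption grows from 1400 kW to 1430 kW, which is exactly the gaming behaviour the metric’s critics describe.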
Extent of reliance on indigenous AI models versus global solutions for critical services
Speakers: Sudeesh VC Nambiar, Ankush Sabharwal
An indigenous AI layer, developed in collaboration with Indian startups, is integrated with global technology to enhance data analysis and monitoring for IRCTC. Bharat GPT is a family of domain‑specific models tailored for Indian enterprises rather than a consumer‑facing large language model.
Sudeesh describes a hybrid approach where a home-grown AI layer works alongside a global solution, with startups providing data analysis and social-media monitoring [21-25]. Ankush emphasizes that Bharat GPT is built on partner data for specific domains and is not a generic large LLM, suggesting a more partner-centric, possibly less globally dependent model [36-43]. The speakers differ on how much indigenous development should be emphasized versus leveraging global AI capabilities.
POLICY CONTEXT (KNOWLEDGE BASE)
Policy briefs emphasize building AI on local data to avoid systemic bias inherent in models trained on developed-world datasets [S46] and to meet ethical governance goals outlined in African AI summits that stress indigenous solutions [S47]. Technical arguments also support context-specific edge models as more effective for regional tasks [S49].
Unexpected Differences
Validity and usefulness of the PUE metric
Speakers: Sanjay Kumar Sainani, Srikanth Cherukuri
Energy efficiency can be enhanced by adapting cooling strategies to local climate conditions and optimising thermal cycles throughout the year. PUE is a bit of an abused word; it can be gamed by raising ambient temperature, which may hide higher IT power consumption.
Both experts discuss energy efficiency, yet Sanjay treats PUE as a manipulable metric that can be superficially improved, whereas Srikanth dismisses PUE’s reliability and advocates for telemetry-driven optimisation. This clash of perspectives on a core performance indicator was not anticipated given their overlapping technical domains [710-719][739-748].
POLICY CONTEXT (KNOWLEDGE BASE)
Critiques from industry experts argue that PUE can be gamed, with improvements sometimes increasing overall consumption, labeling it an ‘abused word’ [S50] and highlighting its potential to mislead when temperature adjustments inflate scores without real efficiency gains [S52].
Overall Assessment

The panel largely converged on the importance of AI‑driven infrastructure, inclusivity, and skill development. The most notable divergences concerned the balance between rapid deployment and future‑proof design, the proper methodology for energy‑efficiency measurement, and the degree to which indigenous AI models should be used versus global solutions.

Moderate – while participants share common objectives (scaling AI factories, fostering inclusivity, building talent), they differ on implementation priorities and technical metrics. These moderate disagreements suggest that consensus on strategic direction exists, but detailed policy and engineering choices will require further negotiation to align speed, sustainability, and sovereignty goals.

Partial Agreements
Both speakers agree that AI infrastructure should be built around the GPU and that modular, prefabricated designs are essential for scaling. However, Peter prioritises speed of deployment above all, while Srikanth stresses the need for designs that remain adaptable to future GPU generations and avoid retrofits. Their shared goal is rapid, large‑scale AI deployment, but they diverge on the balance between immediacy and long‑term flexibility [107-121][124-130][633-658][665-669].
Speakers: Peter Panfil, Srikanth Cherukuri
AI factories must achieve speed at scale through rapid deployment of GPU pods and compute resources. Design of AI infrastructure should start from the chip (GPU) and work outward to define power, cooling and layout requirements. Future‑proof AI‑factory design must start from the GPU cluster and consider row‑ or data‑hall‑level density rather than individual rack density. Prefabricated, modular systems are critical to achieve speed‑at‑scale for AI factory deployment.
Takeaways
Key takeaways
India aims to become a global AI hub with sovereign, purpose‑driven models such as Bharat GPT, emphasizing trust and local relevance.
Sovereignty and innovation are not mutually exclusive; Google is building large data centres in India and offering on‑premise AI boxes (Data Box) that keep hardware and data under Indian control.
Inclusive AI initiatives are being rolled out, e.g., free Gemini‑powered JEE mock exams to reach under‑privileged students.
IRCTC uses advanced, partly indigenous AI/ML models together with Indian startups to detect and mitigate automated ticket‑booking bots during peak Tatkal periods.
The industry’s strategic focus is “speed at scale”: AI factories built on GPU pods with reference designs, moving from a traditional “outside‑in” to an “inside‑out” data‑center approach.
Design methodology must start from the chip (chip‑to‑grid) and use modular, future‑proof pods, bounding‑box/row‑level density, and integrated chip‑level telemetry to optimise power and cooling.
Gigawatt‑scale AI infrastructure requires new cooling densities (liquid cooling, megawatt‑per‑rack designs) and climate‑aware PUE optimisation (free‑cooling in winter, adaptive chillers).
Energy efficiency is tied to tighter integration of telemetry, reducing manual PUE adjustments, and leveraging India’s renewable‑energy mix.
Talent and skill development are critical; programs with IIT Chennai, online courses, and prefabricated system kits are being launched to create DC‑ops, design, and engineering expertise.
Collaboration between global vendors (Google, NVIDIA, Vertiv) and Indian startups/government is essential to accelerate deployment while maintaining data sovereignty.
Resolutions and action items
Google will continue expanding Indian data‑centre capacity (e.g., Vizag) and roll out the on‑premise Data Box with full Gemini AI services for customers needing stricter data residency.
IRCTC will keep collaborating with Indian AI startups to enhance its AI/ML layer for ticket‑booking fraud detection and will integrate indigenous models where feasible.
NVIDIA and Vertiv will provide reference pod designs (DSX, 2.4 MW, 6 MW pods) and promote NVIDIA‑ready certification to accelerate AI‑factory builds in India.
Vertiv will invest in people, processes, and production capacity, including partnerships with Indian institutes for skill‑upskilling (e.g., 8‑12 week DC‑ops programs with IIT Chennai).
All participants agreed to share best‑practice learnings from US/European deployments to avoid reinventing solutions in India.
Commit to make AI services (e.g., Gemini‑powered JEE mock exams) freely available to under‑privileged users as part of inclusive AI outreach.
Adopt a modular pod‑first design philosophy (chip‑to‑grid) for new data‑centre projects to ensure future‑proofing across multiple GPU generations.
Unresolved issues
Exact timeline and milestones for reaching gigawatt‑scale AI factory capacity in India remain unspecified.
No concrete roadmap was presented for developing an indigenous Indian AI chip to complete the sovereignty stack.
Details on how large‑scale data‑cleaning and model‑training pipelines will be funded, staffed, and governed were not fully addressed.
The strategy for retrofitting existing low‑density data centres with high‑density GPU pods was discussed but no definitive plan was agreed upon.
Scalable talent pipeline: while programs are announced, the scale needed to match exponential AI growth is still an open question.
Regulatory handling for sector‑specific data (fintech, health, defence) and how sovereign AI infrastructure will satisfy those rules was not resolved.
Suggested compromises
Google’s Data Box offers a hybrid solution: global AI capabilities with hardware and data kept on‑premise, balancing sovereignty with access to world‑class innovation. IRCTC’s AI stack combines global technology strength with Indian startup analytics, providing a collaborative model rather than a purely domestic or foreign solution. Adopting reference pod designs allows customers to use proven global architectures while customizing for local power, cooling, and regulatory constraints. The “chip‑to‑grid” design approach respects existing grid limitations but still drives infrastructure decisions from the compute requirements, a compromise between infrastructure‑first and compute‑first mindsets. Using modular, future‑proof pods enables incremental upgrades (different GPU generations) without full‑scale rebuilds, reconciling fast AI evolution with slower physical‑infrastructure cycles.
Thought Provoking Comments
Sovereignty and innovation have to run together; they are not a choice between one or the other.
Challenges the common narrative that data sovereignty limits technological progress, positioning them as complementary rather than mutually exclusive.
Set the tone for the discussion on Indian AI strategy, prompting other speakers to frame their solutions (e.g., Google’s data‑centers, indigenous data boxes) as both sovereign and innovative, and opened the floor for concrete examples of how to achieve both.
Speaker: Nitin Gupta
Google’s ‘indigenous data box’ lets customers run full Gemini AI services inside their own premises, keeping hardware and data fully under their control.
Introduces a tangible product that embodies the sovereignty‑innovation blend, moving the conversation from abstract policy to concrete technology.
Triggered follow‑up questions about indigenous models and sparked discussion on how Indian enterprises can leverage such on‑premise AI, leading to deeper talks about local startups and custom model training.
Speaker: Nitin Gupta
Our tagline is ‘AI with purpose and trust’. We start with the end‑in‑mind, pick the right use‑case, then decide the model size and data source – we don’t chase a generic LLM for every problem.
Shifts focus from large, consumer‑oriented language models to a problem‑centric, enterprise‑driven approach, emphasizing practical value over hype.
Guided the conversation toward sector‑specific deployments (e.g., IRCTC, railways) and reinforced the idea that sovereign AI can be built by partnering with domain experts, influencing later remarks about data ownership and model training.
Speaker: Ankush Sabharwal
India is a consumer country – we are the world’s largest ChatGPT consumer base and will soon cross 10‑12 GW of AI compute, especially after the DPDP law brings more workloads back home.
Provides a striking quantitative perspective on India’s AI consumption and the regulatory push for domestic compute, highlighting both opportunity and urgency.
Created a turning point where the discussion moved from strategic vision to concrete capacity targets, prompting Peter and others to talk about scaling infrastructure rapidly and the need for gigawatt‑scale data centres.
Speaker: Jigar Halani
Speed at scale means we must start at the chip, not from the grid to the chip. Design AI factories as pods that can be replicated and upgraded across generations.
Reframes infrastructure planning by putting the GPU at the centre of design, introducing the ‘AI factory’ concept and the importance of reference pod designs.
Shifted the dialogue toward practical engineering tactics, leading to detailed exchanges on pod reference designs, modularity, and how to achieve rapid deployment (e.g., 6‑8 month build‑outs).
Speaker: Peter Panfil
Stop thinking about rack density; think about row and data‑hall level density – use bounding‑box, pod‑level designs that are future‑proof across multiple generations.
Challenges the traditional metric of rack‑level density and proposes a higher‑level architectural view, which is crucial for scaling to gigawatt facilities.
Prompted a deeper technical debate on design methodology, influencing Sanjay’s comments on lifecycle mismatches and the need for modular pods that can accommodate 2‑3 IT cycles within one infrastructure cycle.
Speaker: Srikanth Cherukuri
IT hardware depreciates every 3‑5 years while power‑and‑cooling infrastructure lasts 10‑15 years. We must run 2‑3 IT cycles inside one infrastructure cycle, using modular pods that can be re‑configured without rebuilding the whole plant.
Highlights a critical economic and operational tension between fast‑moving compute and slow‑moving physical plant, offering a concrete solution (modular pods) to bridge the gap.
Added a financial and operational dimension to the earlier technical discussions, reinforcing the need for modular, future‑proof designs and influencing later remarks about prefabrication and rapid deployment.
Speaker: Sanjay Kumar Sainani
AI will become like breathing and blinking – an autonomous background function that frees humans to focus on higher‑order tasks.
Provides a philosophical framing that demystifies AI fears and positions AI as an invisible productivity enhancer, moving the conversation beyond hardware to societal impact.
Broadened the scope of the panel, linking technical deployment to human experience, and set the stage for audience‑driven questions about AI’s role in daily life and ethical considerations.
Speaker: Peter Panfil (answer to audience)
Overall Assessment

The discussion was steered by a handful of pivotal remarks that moved it from abstract policy talk to concrete technical strategy and societal vision. Nitin Gupta’s sovereignty‑innovation framing and the introduction of Google’s on‑premise data box anchored the policy debate in tangible technology. Ankush Sabharwal and Jigar Halani shifted focus to purpose‑driven, consumer‑centric AI deployment and the massive scale India must achieve, which in turn motivated Peter Panfil and Srikanth Cherukuri to propose the ‘AI factory’ pod architecture and a row‑level design mindset. Sanjay Kumar Sainani’s insight on mismatched depreciation cycles added an economic urgency, while Peter’s analogy of AI as breathing linked the technical roadmap to everyday human experience. Collectively, these comments redirected the conversation toward actionable infrastructure designs, rapid scaling, and inclusive, trustworthy AI, shaping the panel into a forward‑looking roadmap rather than a purely promotional dialogue.

Follow-up Questions
What quick actions can India take to complete AI infrastructure projects within 3‑8 months, beyond project planning and BOQs?
Accelerated deployment is needed to meet the massive, time‑critical demand for AI compute capacity in India.
Speaker: Jigar Halani (to Peter Panfil)
How can chip‑level telemetry be integrated with data‑center‑level telemetry to enable automated energy optimization?
Current lack of integration prevents real‑time, AI‑driven efficiency improvements and requires research into unified telemetry frameworks.
Speaker: Srikanth Cherukuri
Which layer(s) of the AI stack can India excel in or match the US/China in upcoming years?
Identifying strategic layers (energy, infrastructure, compute, models, applications) will guide national investment and policy for competitive advantage.
Speaker: Audience member (Shlom)
What are the best practices for improving PUE across India’s diverse climate conditions and thermal cycles?
Optimizing cooling for temperatures ranging from 10‑48 °C is crucial for energy‑efficient, sustainable operation of gigawatt‑scale AI data centers.
Speaker: Sanjay Kumar Sainani
How should India scale talent and skill development for AI‑factory operations given the rapid capacity doubling?
A skilled workforce is essential to operate, maintain, and innovate on AI‑optimized infrastructure; current training pipelines are insufficient.
Speaker: Audience member (Dal Bhanushali)
What is the current percentage of progress in India’s AI infrastructure deployment (e.g., 3 %, 5 %, 10 %)?
Accurate benchmarking is needed for policy makers and investors to assess gaps and set realistic targets.
Speaker: Peter Panfil (to Jigar Halani)
What is the realistic timeline and feasibility for building gigawatt‑scale data centers in India (e.g., 5 GW vs 10 GW)?
Understanding construction timelines informs national AI strategy, financing, and supply‑chain planning.
Speaker: Audience members (multiple)
What are the challenges and recommended approaches for retrofitting existing data centers with AI workloads (GPU, liquid cooling)?
Retrofitting is a cost‑effective path for many operators, but requires guidance on power, cooling, and structural modifications.
Speaker: Srikanth Cherukuri
How can AI‑factory designs be future‑proofed across multiple generations of GPU technology?
Ensuring designs accommodate evolving GPU densities and power needs protects capital investment and accelerates upgrades.
Speaker: Srikanth Cherukuri; Sanjay Kumar Sainani
What is the relative impact of data‑cleaning time versus model‑training time on overall AI project timelines?
Quantifying this split helps allocate resources and prioritize tooling for data preparation versus compute.
Speaker: Jigar Halani
What specific initiatives is NVIDIA planning for skill development in the Indian ecosystem?
Corporate training programs aligned with industry needs will help close the talent gap for AI‑factory deployment.
Speaker: Audience (Dal Bhanushali) and Moderator
What steps are needed for India to develop its own AI chips to complete the sovereign AI stack?
Domestic chip development would reduce dependence on foreign suppliers and strengthen AI sovereignty.
Speaker: Jigar Halani
How can speed at scale be achieved in AI infrastructure deployment while maintaining sustainability?
Balancing rapid capacity growth with energy efficiency and carbon‑footprint considerations is essential for long‑term viability.
Speaker: Peter Panfil; Sanjay Kumar Sainani

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Global Perspectives on Openness and Trust in AI

Global Perspectives on Openness and Trust in AI

Session at a glanceSummary, keypoints, and speakers overview

Summary

The panel co-hosted by the AI Now Institute and AAPTI examined how “openness” is being framed in AI governance, arguing that the term now carries political and democratic weight beyond its technical meaning [1][2][12-16]. Alondra Nelson explained that U.S. policy often treats openness as a binary state, whereas a broader socio-technical view sees it as a gradient that shifts power, ensures accountability and supports community-driven use of AI [27-33][40-42][45-49]. She noted that the current administration is steering AI through industrial, trade and immigration levers rather than traditional regulation, creating a heavy-handed and anti-democratic approach that bypasses public rulemaking [53-58][61-66][67-68]. Anne Bouverot highlighted how open-source models have become a geopolitical lever for challengers like China and argued that middle-power coalitions can use openness to build competitive, collaborative ecosystems [75-89][90-92]. Astha Kapoor warned that for Global South countries, openness framed solely as adoption risks deepening dependence and that agency and control over the full AI stack are essential [111-119][124-126]. Ravneet Kaur described anti-competitive practices such as self-preferencing, tying and exclusive agreements, stressing the need for transparency, data and compute access, and for contestable markets to protect sovereignty [128-138][144-152]. She further argued that competition can safeguard against entry barriers and ensure accountability throughout the AI lifecycle [158-166][168-172]. Karen Hao illustrated broader openness with two projects: a “big-science” open-source LLM that shares data governance with cultural institutions, and the Te Hiku Media speech-recognition model built through community consent and participatory design [179-202]. She reframed “scale” as a distributed, community-driven process rather than a monopoly-driven rollout [207-216].
Across the discussion, participants emphasized the importance of genuine community involvement and democratic input, critiquing corporate “inclusion” language that masks closed platforms [232-236][254-258]. Concerns about labor exploitation in data collection and calls for third-party labeling of AI products underscored the need for ethical supply-chain oversight [277-283][381-382]. The panel concluded that achieving a future AI ecosystem requires a broader, democratic, multi-stakeholder conception of openness that integrates competition policy, transparent governance and inclusive community practices [260-262].


Keypoints

Major discussion points


Re-defining “openness” in AI – The panel argued that “open” is used as a proxy for broader democratic values, not just the sharing of model weights or code. Alondra emphasized moving from a binary view of openness to a socio-technical gradient that shifts power, ensures accountability and lets communities modify technology for their own needs [33-44][46-49]. Anne highlighted how open-source can be a strategic lever for middle-power countries while noting its limits [82-89]. Karen illustrated concrete projects (the open-source LLM consortium and the Te Hiku Media Māori speech-recognition model) that embed community consent, data governance and value-sharing into openness [179-202].


U.S. AI governance: heavy-handed industrial policy vs. democratic rule-making – Although the current administration is portrayed as “pro-open-source,” Alondra pointed out that AI policy is being steered through tariffs, export controls, immigration fees and publicly funded research rather than transparent rule-making, which reduces public input and makes the approach “anti-democratic” [55-68]. Amba’s follow-up note about the shift from traditional regulation to trade and immigration levers reinforced this critique [50-52].


Middle-power coalitions and the Global South’s dependence – Anne described a new organizing principle where “middle powers” (e.g., Canada, France, India, Japan) form ad-hoc coalitions to leverage open-source as a competitive tool [89-92]. Astha warned that for Global-South nations openness can become a “dangerous frame” that masks structural needs and risks turning them into cheap test-beds for external AI firms [111-126]. Ravneet added that competition policy can help protect sovereignty by preventing ecosystem lock-in and ensuring access to data, compute and skills [128-158][161-170].


Competition as a governance lever for AI sovereignty – The Competition Commission of India identified anti-competitive practices (self-preferencing, tying, exclusive agreements) across digital markets and warned that similar risks exist in AI value chains, potentially leading to concentration, price discrimination and opaque systems [134-152]. Ravneet argued that robust competition guarantees contestable markets, transparency of deployment and governance, and is essential for safeguarding national autonomy [161-170][166-172].


Inclusion, gender balance and community participation – The panel highlighted that this was the only all-female panel at the summit, underscoring the need for greater gender representation [4-5]. Alondra reflected on the rarity of genuine community involvement in AI conferences and called for more inclusive formats [219-236]. Karen critiqued “corporate speak” that co-opts inclusion language while locking platforms, urging deeper community agency [254-258]. Audience questions further probed who is truly included in the “all-inclusive” vision and how to empower individuals against pervasive AI adoption [292-306][272-276].


Overall purpose / goal of the discussion


The conversation was convened to interrogate the political economy of AI by expanding the notion of openness beyond technical artifacts, scrutinizing how current governance (especially in the U.S.) shapes power relations, and exploring how middle-power and Global-South actors can use competition and collaborative coalitions to retain sovereignty and promote democratic accountability. Throughout, the panel sought concrete pathways (through policy, competition law, community-driven projects, and inclusive representation) to align AI development with the public interest.


Tone of the discussion


Opening segment (0:00-12:00) – Formal and optimistic, with moderators framing the panel as “formidable” and emphasizing shared values of openness and public interest.


Mid-section (12:00-28:00) – Becomes more critical and analytical: Alondra critiques the binary view of openness and the anti-democratic nature of U.S. policy; Anne and Astha expose geopolitical tensions and the risks of “open” being used as a veneer for dependence.


Later segment (28:00-41:00) – Shifts toward constructive solutions and hopeful reflections, highlighting community-driven projects, competition as a sovereignty tool, and the promise of coalition-building.


Q&A (41:00-57:00) – Tone turns pragmatic and urgent, with audience members raising concerns about gender inclusion, labor exploitation, and “open-washing,” while panelists respond with calls for transparency, third-party labeling, and deeper community agency.


Overall, the tone moves from introductory enthusiasm to critical examination, then to solution-oriented optimism, ending on a note of cautious hopefulness about shaping an inclusive, democratic AI future.


Speakers

Amba Kak – moderator of the panel; affiliated with AI Now Institute / AAPTI (moderator)


Alondra Nelson – former Deputy Director, White House Office of Science and Technology Policy (U.S. administration) [S20]


Anne Bouverot – France’s Special Envoy for the AI Action Summit; former Director General of GSMA [S23]


Astha Kapoor – policy researcher focusing on data stewardship and governance [S25]


Ravneet Kaur – Chairperson, Competition Commission of India [S14]


Karen Hao – journalist and author of Empire of AI (journalist)


Audience member 1 – Founder, Corral Inc [S1]


Audience member 2 – participant from Germany (member of a German delegation)


Audience member 3 – student [S13]


Audience member 4 – intellectual property and business lawyer [S10]


Audience member 5 – (no specific role mentioned)


Audience member 6 – participant (role not specified) [S4]


Additional speakers:


Amlan Mohanty – partner in conceptualizing and helping to bring the panel to light (partner)


Sanjana Mishra – member of the summit organizing team (organizer)


Iksho Virat – member of the summit organizing team (organizer)


Full session reportComprehensive analysis and detailed insights

Amba Kak opened the session by noting that the AI Now Institute and the AAPTI Institute were co-hosting a panel at the close of a “stimulating” week, united by a shared focus on the political economy of AI and the conviction that technology questions are fundamentally questions of power [1-3]. She highlighted that the discussion was the only all-female panel at the symposium, acknowledging both the symbolic importance of this fact and the need for future iterations to move beyond tokenism [4-5]. After thanking the organising team and wishing participants a good night’s sleep, she took her seat as moderator and signalled the start of the conversation [6-12].


The moderator framed “openness” as a term that does far more work than its technical definition, acting as a stand-in for values such as democratisation, participation, agency and even sovereignty [12-16]. She announced that the panel would broaden the understanding of openness beyond open-source code or model weights, preparing to explore its political and democratic dimensions [17-18].


Alondra Nelson, former deputy director of the White House Office of Science and Technology Policy, argued that the Biden administration treated openness as a gradient, in keeping with the open-source movement’s socio-technical understanding of openness as something that shifts power, creates accountability and enables communities to adapt technology for their own purposes, whereas the current administration treats it as a binary, a condition that is either achieved or not [27-33][34-44][45-49]. She illustrated the gap between “open weights” and full openness with the examples of Llama 2 and Llama 3, noting that publishing model weights alone does not guarantee the ability to modify or govern the underlying stack [70-72]. Nelson also warned that U.S. AI infrastructure is often built behind non-disclosure agreements, with contracts allowing data centres and cloud layers to be erected “in the dark of night,” leaving local communities unaware of the installations [73-75]. She critiqued the current reliance on heavy-handed industrial-policy tools (tariffs, export controls, trade policy and costly immigration visas) that bypass formal rule-making and public notice-and-comment procedures, and therefore reduce democratic input [55-58][61-68].


Amba followed up, underscoring that U.S. policy now operates less through conventional regulation and more through trade, immigration and other levers that are relatively insulated from public accountability [50-52].


Anne Bouverot, France’s special envoy for the AI Action Summit, placed the discussion in a geopolitical context. She recalled the U.S. “Stargate” announcement at the Munich Security Conference and Vice-President Vance’s call for global customers, noting that China’s rapid uptake of open-source AI (e.g., DeepSeek) gave it a foothold in the international AI race [75-82]. While recognising the risks of open-source technology, she argued that it remains a powerful lever for “middle-power” coalitions (ad hoc groups of the willing such as Canada, France, Germany, Switzerland, India, Japan and Australia) to develop competitive, collaborative AI ecosystems and to share resources without relying on a single dominant stack [83-86]. She emphasized that the state still has a role in shaping the coalitions and publicly funded infrastructure that underpin them [93-95][98-104].


Astha Kapoor, representing the Global South, warned that framing openness merely as a driver of adoption can trap countries in a dependence on external AI providers, diverting resources from structural challenges in health, education and broader development [111-119]. She stressed that without control over the full AI stack, Global-South populations risk becoming unpaid labour for testing and fine-tuning models, and that openness must be coupled with agency and value-distribution rather than serving as a veneer for exploitation [124-126]. In her closing remarks she cited the Amul cooperative as a concrete illustration of “one member, one vote” governance that could be applied to AI projects, demonstrating how collective decision-making can align technology with community interests [115-118].


Ravneet Kaur, Chair of the Competition Commission of India, outlined anti-competitive practices she has observed across digital markets (self-preferencing, tying, bundling, exclusive agreements, and parity arrangements) and warned that similar risks are emerging in the AI value chain, potentially leading to ecosystem lock-in, price discrimination and opaque systems [128-138][144-152]. She argued that competition policy is essential for digital sovereignty: it must keep markets contestable, ensure transparent access to data, compute and skill sets, and embed three pillars of AI-lifecycle transparency (technical transparency, governance transparency, and self-audit mechanisms) throughout development and deployment [158-166][168-172][173-176]. When asked whether competition could serve as a lever in the AI sovereignty toolkit, she affirmed that without contestable markets, dominant players could foreclose competition and lock consumers into particular platforms; robust competition safeguards are therefore at the heart of preserving national autonomy in AI deployment [161-166][167-172].


Karen Hao illustrated concrete ways to operationalise a broader notion of openness. She described a “big-science” open-source LLM consortium that involved over a thousand researchers from 70 countries and 250 institutions, partnering with cultural organisations to share data governance, curate datasets and return value to contributors [179-182][183-202][184-186]. She also recounted the Te Hiku Media Māori speech-recognition project, which built an AI model through community consent, public education, and participatory design, using the Mozilla DeepSpeech open-source stack and ensuring that the community retained a vote over how the model would be used [184-202]. Hao defined “open-washing” as corporate speak that uses inclusion language to sell closed platforms, warning that this rhetoric masks monopolistic practices [254-258]. She argued that true “scale” should not be understood as a monopoly-driven distribution to everyone, but rather as a distributed ecosystem where many communities develop and deploy models tailored to their own contexts [207-216]. In response to a question about labour impacts, she emphasized that exploitation occurs throughout the data-annotation pipeline and that a fundamental redesign of the AI supply chain is required [381-382].


Alondra’s follow-up reflections reinforced the need for community-centric inclusion, noting again the opacity of data-centre contracts and urging greater transparency. Astha reiterated the value of cooperative governance models such as the Amul example to avoid dependence on external providers. Ravneet highlighted the importance of digital public infrastructure, small-language models and trust-by-transparency as mechanisms for sovereign AI development.


Audience Q&A


1. Individual agency & exploitation – An audience member asked how individuals could make informed choices given the exploitation embedded in many AI tools. Karen proposed third-party labeling schemes, similar to those used in fashion or food, that could disclose values, resource use and openness levels, thereby restoring agency [277-282].


2. Gender & Chinese representation – A participant noted the rarity of an all-female panel and the low Chinese presence in AI governance discussions. Amba acknowledged the observation (“Thank you, that was many important and provocative questions you just asked”) but no panelist offered a substantive answer beyond the acknowledgment [4-5][295-298][308-309][81-86].


3. Chinese open-source models – The audience asked whether Chinese open-source models, which may embed CCP perspectives, could be leveraged. Alondra responded that such models can be tuned to remove bias, though the process requires careful oversight [87-89].


4. IP & competition – A question about intellectual-property concerns prompted Ravneet to explain that the Competition Commission intervenes only when anti-competitive abuse is identified, aiming to protect innovation while safeguarding consumer welfare [314-327].


5. Labour & data-protection-by-design – The audience raised a two-part question about protecting publicly available data and its relation to openness. Karen affirmed that “protection-by-design” is a promising research direction but must be balanced against openness; she reiterated that labour exploitation is systemic across the data-annotation pipeline and calls for a fundamental redesign [381-382].


6. Open-washing & competition tools – A final query asked whether competition authorities need new analytical tools to detect “open-washing.” Ravneet described the Commission’s case-by-case rigorous analysis of public data to assess competition harm and to identify anti-competitive practices hidden behind open-washing rhetoric [254-258].


In closing, Amba thanked the panelists, emphasised the importance of having enforcement agencies at future AI forums to ensure accountability, and invited the audience to pose further questions [260-262][253-254]. Karen offered a final reflection that genuine empowerment through AI will not come from corporate-crafted “inclusion” narratives but from a collective, problem-oriented re-thinking of whether AI is the right solution and how it can be designed from the ground up to truly benefit communities [254-259]. The session ended on a note of cautious optimism, urging participants to build the inclusive, democratic AI futures they envision [261-262].


Session transcriptComplete transcript of the session
Amba Kak

The AI Now Institute and the AAPTI Institute, we are honored and delighted to be co-hosting this panel at the close of what has been an extremely stimulating, some would say over-stimulating week. What brings AAPTI and AI Now together, despite the many kinds of distance between New York and Bangalore, is our focus on the political economy of AI and our insistence that questions of technology are always questions of power. So we have a formidable panel by every standard, leaders in their field advocating for AI in the public interest, traversing several fields of government service, academia, and journalism, sometimes in the same person, as you will know if you read their bios, which I’m going to skip for reasons of expediency, but I’m going to talk through some of their specific advantages in the conversation.

You know, it always pains me a little bit to even bring it up, but I’m going to do it anyway, which is it is exceptional that this is also the only female-only panel at this symposium. Hopefully that’s not something we have to say a lot or something that we have to wear as a badge of honor, but more something to work on for future iterations. So before we begin, I don’t think he’s in the room, but I want to also thank Amlan Mohanty, who’s been a partner in conceptualizing and helping to bring this panel to light, and to our wonderful summit organizing team, Sanjana Mishra and Iksho Virat, for their tireless efforts. I hope you all get good sleep tonight after a very long week.

Okay, so let’s get into it. I’m going to moderate this panel, so I’ll take a seat. Thank you. So let’s get into it. Okay, let’s get into it. There have been many discussions about openness at this summit. You’ve probably been in at least one of them. For the most part, these discussions have focused on the kind of technical affordances of open source, open-weighted models, open hardware. But what’s clear is that the word open is doing a lot of work in these conversations. It’s a stand-in for many much broader values of democratization, of participation, agency, even sovereignty. So in today’s panel, we’re going to kind of widen our understanding of what openness could mean in this conversation about AI.

And I’m going to start with Alondra. Alondra has been the deputy director of the White House Office of Science and Technology Policy under President Biden. And at the time, there was a very heated debate about the geopolitical but also safety implications of open source and what U.S. government policy would be on these issues. And it seems like under this current administration, we’ve landed on a pro-open source overall orientation. But at the same time, it feels as if in many senses, AI governance in the United States is more closed than it has ever been. So I guess I wanted to ask, what do you see as the broader challenges to openness in AI governance today?

Alondra Nelson

Thank you for organizing this, colleagues. And good to be here and good to close out this exciting summit with you all. So a couple of things. I mean, I would say the Biden administration, I think, took the question of open weight models as a gradient, right? So it was a spectrum. So that open was not a binary. It’s either open or not open. And I think the new administration, the current administration, takes it much more as a binary, that open is a thing that you sort of have achieved and it is now open as opposed to being closed. I think the difference is that, to your point from the opening, Amba, is that I think part of what we were trying to do in the Biden administration was really go back to a kind of foundational sense of openness that comes out of an open source movement that really thinks about openness as a kind of socio-technical characteristic and not just a technical characteristic.

So certainly the questions around open AI models are often around technical things like model weights. Are the model weights shared? Are only the model weights shared? Is the training data also shared? Is the API open to a certain extent or closed to a certain extent? So the technical things are certainly there. But I think if we go back to a broader understanding of openness that comes out of open source software, it was about shifting power. It was about forms of accountability. It was about openness as a kind of practice, openness as shared infrastructure, openness as resources that could be used by lots of different communities, things that you could modify, that you could use for the purposes of your community or the purposes that you had.

And so that meant that that older, broader definition of open was much more about democracy and transparency and accountability, in a way that even a so-called open source model like Llama 2 or Llama 3 isn’t really open source, and we’re being asked to be content with model weights as open. So why we want to really push back on that is because we are often, I think, using geopolitical stakes as a justification for not doing the socio part of the socio-technical, for not doing the accountability and the transparency and the democracy part, because it’s too dangerous, because in the UNESCO context, China, you know, these things just sort of stand in as explanations for why things can’t be different.

And I think being reminded of a broader sense of open reminds us that it’s not this binary. There obviously may be places where you don’t want open source. Like, do you want open-source nuclear-deployed AI? Probably not, right? But the debate gets carried forward as if every open source or open weight use is that use, as opposed to the gradient of uses that are much safer and, moreover, are beneficial to communities, to helping people achieve their goals, and certainly much better for public transparency and accountability about what these systems do in the world.

Amba Kak

Can I ask a quick follow-up, and then I want to move to Anne. The other defining feature of U.S. government policy today is that it’s happening less through the traditional forms of regulation that we’re used to and much more through industrial policy, through trade policy, through immigration. But these are also spheres that have been relatively even more immunized from public accountability, or harder for the broader public to weigh in on. So just wanted your thoughts on how we…

Alondra Nelson

Yes, I’ve been writing and thinking about this. Thank you for that question. So, you know, we’ve spoken a lot about the new administration, and it gets talked about as being deregulatory in regards to AI, as being very, quote, unquote, light touch. And I think if we actually pose that as a question, as opposed to accepting it as a statement, and look at what the current administration in the U.S. is doing around AI, it’s actually taking quite a heavy hand to steer AI. So you mentioned some of the levers that they’re using: tariffs, trade policy, export controls of semiconductor chips, in the U.S. context even immigration. I think companies are getting out of it and around it depending on their relationship to Washington, but we’re told that an H-1B visa for a high-tech worker is $100,000 per worker, right?

And so that’s, you know, 10x, 20x or whatever times what it was; that’s quite a lot of money for a company. And also just the way that science is being funded, to the extent that the federal government plays a large role in driving the research ecosystem for technology. So all of those things are being very heavily shaped in the current administration in the U.S. So it may not be regulatory in the sense of formal rulemaking as it happens in the United States context, but it is certainly hyper-regulatory, I think, in a lot of other ways. And I’ll go back to my keyword of the day, the democracy piece. The upside of formal rulemaking, even though it can be clunky, it can take a long time, sometimes the pace is too slow for the pace of the technology, all of those things can be true, is that it has democratic input.

So if you’re doing a rulemaking in the context of the U.S. federal government, there will be a public notice that you’re doing the rulemaking, and there will be a public call for input. So even if you don’t agree with the outcome, there are moments of democratic input. When we are doing AI policy by fiat and through executive authority only, even those limited inputs are gone. So it’s not only, I think, quite heavy-handed. It’s unfortunately, I think, anti-democratic relative to the status quo.

Amba Kak

Yeah, exactly. Anne, I want to move to you. As the French president’s special envoy for the AI Action Summit, you’ve been at the heart of a lot of global coordination on AI governance. And there was a time, I would say the last 10 years, characterized by open versus closed as a kind of binary, a way of organizing the world into particular camps when it comes to AI: the democratic open world and the rest of the world. But it’s interesting how much the ground beneath us has shifted in the last few years. And it has been particularly interesting to note at this summit that it is middle powers as a frame that is coming through as a kind of new organizing principle.

So I guess I want to ask: do you see that openness still has value in forging multilateral solidarities, especially in this brave new world we’re in?

Anne Bouverot

Yes, absolutely. I mean, clearly the geopolitical landscape has really shifted. The AI Action Summit in Paris was exactly a year ago in February, just after the inauguration in the U.S. It was the first international trip for Vice President Vance, and what a speech that was, just before the Munich Security Conference. It was the moment when the U.S. announced the Stargate project at the White House. So it was a very strong and loud message from the U.S. saying: we’re here, we’re investing, we’re the world leaders. And at the summit, J.D. Vance said very clearly, we want all of you to be customers of our technology. And at the same time, this is the moment when DeepSeek emerged on the world map, and everybody realized that China, using open source, which is why I want to come to that, was really saying: we have a seat at the table and we’re actually playing that game.

And China using open source is actually very interesting, because open source has a number of benefits and also risks. I don’t think it’s the answer to everything, but clearly it’s a way for challengers to catch up. This is how Android came to the world of smartphones. There are many examples, and this is the lever China has taken to be in that race. But then, what does it mean for countries other than the U.S. and China? It means that this is a tool that can be used by other countries, which is why in France and in Europe we’re very much in favor of open source as a competitive tool and as a way to leverage the knowledge and the findings of others, to stand on their shoulders and continue to develop technology.

It doesn’t mean that everything should be open source. There are cases where you do want to be careful, depending on the use case. But as a way to develop and stimulate competition, it is very powerful. It’s not the only tool. You mentioned middle economies, middle powers. There was this fantastic speech by Mark Carney at Davos, and there was a speech by Macron as well that maybe I’ll conclude with. But the idea is that middle economies have some resources, not the resources to build their own stack top to bottom and to fund frontier-level AI, but together, by building coalitions of the willing, these middle economies can do a lot of things. I believe that Canada, France, Germany, Switzerland, India, Japan, Australia, I can name a few of them.

And it doesn’t have to be one big block of these middle powers, but ad hoc coalitions of the willing. So I believe this is really something that can be useful in the evolution of governance.

Amba Kak

That was a fascinating account, and I think what it also highlights is that, whether you’re China or the U.S. or the middle powers or France, there’s a level at which everyone, as we discussed, can in some limited way be pro-open source. So do you think, then, that the differentiation will be at the layer of governance, in our approaches to how we govern these technologies?

Anne Bouverot

I don’t know, is really the answer. Governance is such a broad word. For example, open source is really being taken as a tool by startups and scale-ups in Europe and in other countries, by Mistral, by Cohere, by Sakana AI in Japan, by a number of them. Is that governance? I don’t know. But clearly, governance and countries and institutions have a role to play in saying: how do we shape those coalitions of the willing? How do we put public funding, or access to publicly funded compute, or access to data sets that countries can help put together, to use, and in which ways? So what are the governance tools that we use to strengthen digital sovereignty and resilience?

Amba Kak

Precisely, yeah, that’s sort of what I was getting at. Okay, Astha, I’ll quickly move to you. Middle powers, as we just discussed, is a very broad term, and what it conceals is that there are many different economic and political aspirations among the countries bundled in that mix. Especially for countries like India, or other countries in the global south, what are the unique forms of both leverage and dependence in this current environment?

Astha Kapoor

Yeah, thanks so much, Amba. I think what we’ve been tussling with over the last few days is that we went from global south to middle powers very quickly, in a matter of days, which changes our form a little bit, and our aspirations. And I think that is what we have to grapple with: as the global south, our needs are very different. We have structural issues around health, around education that need to be addressed. We also have things that we need to do in terms of moving the country forward beyond what is just technologically mediated progress. And I think what we’ve been hearing over the last five days is that things like open data or multilingual data sets are what is going to be that push.

So, you know, our languages will now be online. But at the same time, we also have to realize that without openness or control or agency or frictions across that entire AI stack, we are basically risking our populations in the Global South doing the labor to bring people online. So openness as a driver of adoption is actually quite a dangerous frame for Global South countries, because it moves attention from where we might need to invest our resources to thinking that the only answer to our historical problems is via adoption. And we’ve also seen that in the absence of governance. India is not new to the openness discourse, right? We have had a history over the last 12 or 15 years of digital public infrastructure, but we’ve also seen the limits: once adoption occurs and you have innovation, people with the deepest pockets come to innovate there, because this is an enormous market.

So I think, you mentioned Carney: if we are a middle power, we’re definitely on the menu as a market. If we are a global south country, I think there’s value in thinking about what that solidarity is, because you’re right, there’s no homogeneity. And I think we’ve missed some of those questions around what we, as large markets, can diversify. We’re not here to do the labor to, you know, test-bed models that are built elsewhere. So I think openness as dialogue, as distribution of value, is what we need to think about.

Amba Kak

So many soundbites that I want to clip out of what you just said. That was incredible, thank you. Chairperson Kaur, firstly, thank you so much for being here. I think what Astha said actually leads in well to the question I wanted to ask you, which is: how does one combat this dependence? As the Chair of the Competition Commission of India, you’re a regulator that has been kind of ahead of the curve in looking at anti-competitive trends in this market. So from your perspective, can you say a little bit both about the key implications of competition in the AI market, and also whether you see competition as a lever in the so-called sovereignty toolkit?

Ravneet Kaur

Thank you, Amba. So for us at the Competition Commission of India, we’ve been looking at a lot of developments happening in the internet economy, and these developments have changed the way businesses work, how consumers interact with the markets, and how value is being created. So things are moving very rapidly on the digital front. And as the commission, we have looked at which practices can be anti-competitive. Apart from the benefits which are coming from a digital economy, we have numerous benefits when it comes to economies of scale, the network effects, the efficiencies which are coming from that. But then there are also these risks, and some of these have already been observed by the commission.

So the key ones which we found in the case of digital markets are the self-preferencing which is happening; tying and bundling in numerous cases; leveraging; exclusive agreements where unfair terms are being sought; and parity arrangements being put in place. So at the Competition Commission, we have looked at this conduct when it comes to search engines. We’ve looked at it in mobile ecosystems, in online intermediation services, whether it is hotel bookings, food ordering, e-commerce, or social media platforms. So across the entire spectrum, the commission has been looking at it. And very interestingly, we then started looking at AI: what could be the impact of AI?

So we did a market study on AI and competition, and the report was released recently, in October 2025. It’s available on our website. And we found a lot of similarities in the way AI can function as well. AI can bring a lot of benefits. We are seeing a lot of benefits when it comes to healthcare, education, logistics, supply chain management, and agriculture. And I’m seeing a lot of good things happening on that front. But there are also these potential risks: you could see concentration in the entire AI value chain. There could be ecosystem lock-in. There could be targeted price discrimination against people based on location, economic means, et cetera.

And then exclusive partnerships, and the systems being opaque. So those were the things identified in the market study. And as a first step, we thought we need to make everybody aware, because the important issue is one of access. Whoever has access will determine what happens in the future. So it is access to data. It’s access to compute infrastructure. It is access even to skill sets: whether we are able to build up the required skill sets within the country to be able to compete effectively. So those issues have brought us to work towards a framework where we are asking, across the entire life cycle of the AI system: how can we bring in transparency? How can we bring in accountability?

Amba Kak

I think that’s so important, too, because we focus a lot on big tech control over infrastructure and inputs, which people are familiar with. But I think what you’re pointing to is that it’s access to the consumer; the pathways to monetization are happening at the distribution layer. So really paying close attention to making sure that we have free and open competition in that layer, and that firms can’t take dominance from one market into another, seems really important. My second, maybe more provocative, question was: do you see competition as a tool for particularly global majority countries to retain and exercise sovereignty in the AI age?

Ravneet Kaur

When we look at AI, we are looking at how far we can develop, deploy, and monitor the AI systems that we are putting in place, and how much we can do to make the most of the market. And that’s where the issue comes up: we need to have the autonomy to deploy the systems as per our economic, strategic, and societal priorities. And that’s where we see the very critical question of how we can ensure that AI does that. And competition is a very important aspect of it. We just can’t forget about it, because competition is what is going to ensure that there are no entry barriers; that players who are already there are not using their dominance to foreclose competition, to foreclose the market; and also that consumers are not left locked in to a particular system because they can’t move their data, and the various benefits that they are deriving from the AI systems, to some other applications.

So really, competition is at the heart of it, and I don’t see any way we can forget about markets. Markets would need to be contestable, fair, competitive. And for that, that is where I would like to point to our study, which has clearly brought out that people who are deploying the technology have to have technical transparency. The stakeholders have to be able to understand what’s happening, what this technology or this application is being used for. And then there has to be governance transparency: how you are governing that system also needs to be transparent. So once we are able to ensure that the people who are deploying these systems are looking at all these aspects, and the self-audit is happening, then maybe we would be able to safeguard competition, because at the crux of it all is maintaining competition.


Amba Kak

Thank you so much. Karen, I’m going to move to you. Just from the fact that there was a line of people trying to take a selfie with you before we started, I’m going to assume that many people in the audience are familiar with Karen’s incredible book, Empire of AI. Her work has really delved into the global inequities that are embedded in the AI global supply chain. I want to ask you: your book is full of rich examples, but where do you see that open approaches to developing AI in some ways pose a challenge to this empire model of AI?

Karen Hao

One project that I really loved is the BigScience project. It was this project that brought together over a thousand researchers from 70 countries and 250 institutions to try and create an open source large language model that would not only allow many different researchers to interrogate what is actually happening beneath the surface of a large language model, but also completely rethink what it would take to develop these technologies in a fundamentally more beneficial way: where, for example, there are better data governance practices, where you’re actually curating and cleaning the data, making it transparent for people, being able to track which data owners are contributing to what aspect of value generation within the model. And this kind of goes back to Alondra’s point as well, where you were saying…

that we really need to understand openness with a much broader conception of what openness means. It’s not just technical openness. And this project really embodied that: they were working together with lots of different cultural institutions, with libraries, with historical institutions, to try and figure out better ways of capturing the rich data they had, but with respect for each institution, and with a way to deliver value back to it, so the value chain wasn’t going just to the model creators themselves. Another project that I really loved is one that I highlighted in the epilogue of my book, which is the Te Hiku Media speech recognition model. Te Hiku Media is a nonprofit radio station in New Zealand, and they broadcast in te reo Māori, the Māori language, the language of the Indigenous peoples of New Zealand.

A couple of years ago, there was this big movement within New Zealand to try and revitalize the Māori language, because it had almost been lost through the process of colonization. And Te Hiku Media thought they had a very unique opportunity with this rich archival audio of te reo Māori to open it up to the community and help facilitate more language learning. They wanted to make it more accessible than simply allowing people to listen to it, though. They wanted to create an application where you listen to the audio while you see a transcription of the audio. You can click on the transcription to get automatic translation. You can figure out how the language actually works.

But they realized they didn’t have enough capacity to transcribe this, because there simply were not enough proficient te reo Māori speakers. So this was the perfect use case where they could build an AI speech recognition tool to do that work for them. But they went about this project in a totally different way. They made it extremely open and participatory to the community, not just in a technical way, but in a social way. They engaged immediately with the community to ask them: do you want this AI tool? And once the community said yes, they ran a public education campaign where they taught everyone what AI is in the first place, what we actually need: we need a model, we need data, this is the kind of data that we need, this is the data that we would need from you.

And once they engaged in that process and developed so much trust with the community, they were able to collect enough data from the community, with full consent, in just a few days to train a speech recognition model. And then they continued to go back to the community and said: now that we have this model, what kinds of applications do you actually want us to develop with it? What kinds of new AI models do you want to develop with it? And all of this was built on another open source project, the Mozilla Foundation’s DeepSpeech model, which was similarly developed with that kind of broader definition of openness: a model developed purely with consentful data donations.

And so the entire stack was built in the spirit of collaboration, with participation from everyone in the community, with an equal exchange of value, where the people who are giving the data have a vote, have a say in how the model ultimately can help support their journey in language learning. So both of those examples I always hold in my head when I’m thinking of what visions of AI we actually want to support, what visions of open-source AI we actually want to support.

Amba Kak

So as you were speaking, I was just thinking: apart from being open and participatory in all the ways you said, these examples also provide a contrast to the idea that there is one model to rule them all, this very large-language, single-bet-on-a-single-technology type of approach. But similarly, one of the, I guess, common retorts to these experiments is that we can’t do that at scale. And so I’m just curious: what do you see as the tension between these kinds of governance structures and scale, and is there a trade-off?

Karen Hao

So I would reframe what we mean by scale, because what we are taught by Silicon Valley is that scale means they distribute to everyone, but they are the sole distributor. And to me, that’s not scale. That’s a monopoly. What we would really want from scale is different communities all around the world, different industries, different companies, each developing models by and for them, at scale. That’s, to me, a much more appropriate way of thinking about scale. And in fact, what’s so interesting is that because of the data imperative for large language models, and the compute imperative for large language models as they’re currently being trained by the main companies, they’re not going to be able to do that.

There isn’t a good ability to diffuse this technology across many different industries or many different communities. Most industries are data-poor industries. They’re not like the Internet industries; they don’t sit on vast amounts of data. And so if we actually want to diffuse AI to more people around the world, and for more use cases around the world, we in fact need to think of scale from a small-AI perspective, a community-driven perspective, an application-specific perspective, and that’s how we’re going to get scale.

Amba Kak

Okay, we’ve heard a range of rich perspectives, and I’m going to take it as a good sign that all our panelists seem to be actively taking notes and engaging with what the others have been saying. So I was going to propose, as a sort of round two, that I ask, just based on the conversation we’ve just had: Alondra, what is something that’s sticking with you, or that you’re working through in response?

Alondra Nelson

Yeah, I think community. So Karen teed that up for me, and the note that I was just writing here was about that. I was thinking about how the stack that we are building now is explicitly closed to community, and I was thinking in particular about the data center and cloud layer. So in the U.S. context, there’s growing contestation in communities about data centers. What folks might not know is that part of the contestation is because elected officials are asked to sign NDAs, and contracts are being signed to stand up data centers in the dark of night, and communities don’t even know. So the lack of openness around that infrastructural piece of the AI stack is actually quite profound.

And then I was thinking the opposite. So, my reflection on the time here, which I’m still going to be processing for quite a long time: it’s my first time in New Delhi, my first time in India. It’s been an incredible experience. But I’ve been to a lot of AI conferences, you know, like NeurIPS and everything, professional ones, not professional ones. A lot. This is the first one I’ve ever been to that has included the community in any considerable way. And I think it’s a revolutionary thing. If we’re really serious about having democracy and community and voice, AI conferences need to look much more like this one than the ones that we spend a lot of our time going to.

So, you know, who knows what will be the outcome of this week together. But it has been extraordinary and distinctive in the inclusion of lots of, you know, uncles, aunties, college students, and lots in between.

Amba Kak

Astha, closing reflections.

Astha Kapoor

Yeah. First of all, thank you for that reframe. As somebody who was here on the 16th, I was feeling so overwhelmed, and my instinct was, there are too many people. But I do appreciate that reframe: this is the community that is going to build and question and do the work that we all keep talking about. And from that, my word is also community, but also friction: how do we enable some of that, both the coalescing, but also the dialogue, the questions, the where-is-the-value-for-me part of it. An example that was presented yesterday was the Amul Co-op. We’ve been doing a lot of work with cooperatives, which to me is a nice space because of the governance question: one member, one vote, and you can pool things.

So how do they become not just recipients but co-designers of some of the things that we’ve heard over the last few days. So, yeah.

Amba Kak

Just closing reflections, and maybe even a takeaway that you’re sitting with after this week.

Ravneet Kaur

Yeah, sure. So for me, the very important thing which came out of this AI Impact Summit is that governments need to be very active about how they are ensuring that the deployment of AI is happening. And for that, I am very happy with the way we are going: we did a great job when it came to digital identity and digital payments, and now we are looking at digital public infrastructure, at how you’re going to be able to provide compute platforms for startups, for people who don’t have the resources, to make data available, and then the focus on small language models. Everything doesn’t need to be large, especially when we look at things which are very language-specific, very related to our country and to our solutions.

So that’s one of the key takeaways that I have. And the other, of course, is that all of us at the Competition Commission will be going back with this: one needs to be very alert as to what kinds of systems are being put in place and whether they are flexible. Is there transparency? Is there accountability? Those are the key things, because at the end of the day it is trust. If you can build up trust, if your systems are not opaque, then you will be able to get people on board onto your applications and your systems, and that’s where success lies. That’s where value is.

Amba Kak

I’ll say, ma’am, that one of my key takeaways, and hopefully someone from the Swiss government is listening for next year, is that we also need to hear many more voices from the enforcers, those who are going to make sure that the players in this space are accountable to the public and not above the law. So I’m very grateful that you’re here, and I hope that future summits see more enforcers at the table. Okay, Karen, you get the last word, and then I’m going to open up for questions, so start thinking of them.

Karen Hao

I think my biggest reflection from the summit, which I also shared at an event last night, is that it’s so interesting to observe corporate speak in these spaces. And the thing that struck me the most about this summit is that this corporate speak has gotten very sophisticated: they have adopted the language of inclusion, diversity, empowering marginalized communities to talk about ultimately selling their technology and making sure that you buy into helping them lock in their closed platforms. And I hope that, because we have more community engagement and there’s more openness in a lot of the discussions happening alongside this very sophisticated corporate speak, all of you will take away from the summit this broader idea of what it really means to ultimately build a future where AI can empower people.

It does not actually mean the democracy that the companies offer us. It in fact means that we should all be thinking very deeply about what problems we really need to solve, as individuals, within our families, our communities, our companies, our contexts; then whether or not AI is even the right solution for that problem; and then how to design and develop, from the ground up, AI solutions that truly are empowering and enabling, and that help tackle those problems and bring everyone along together.

Amba Kak

That was, yeah, what a great note to end on. And honestly, a note of optimism and a note to build towards the futures we want to see. Okay, so does anyone have any questions? Okay, I saw you first. Go ahead.

Audience member 1

Hi, everyone. And, yeah, I was one of the people in line looking for a signature on the book. So I’ve read it, Karen. It’s a reference book. And my question is addressed to you. All of this makes sense, but it makes sense in a more macro way. From a micro perspective, where an individual is exposed to AI at their workplace and is expected to use it, and there’s no getting away from it, how do we reconcile the fact that there is probably a whole lot of exploitation behind the models we’re using? But at the same time, you can’t not use it, because it’s just everywhere, every day.

I don’t use it. Yeah. So I’d like to know a little bit more about that. How?

Karen Hao

No, I actually think it’s totally possible to not use these tools. But also, I would say that oftentimes our conversations around adopting AI are posed as a binary: either you go completely all in, or you don’t use it at all. And there are actually a million possibilities in between. There are so many different ways that you could refrain from using AI in certain contexts while finding other ways that it helps you, being more intentional about what kinds of AI tools you adopt and from which kinds of companies. We’ve been talking a lot about openness, so maybe you choose to use more open AI technologies rather than closed ones. One of the things that I feel is missing right now within the AI ecosystem, and that makes the burden very, very high on consumers, is that we don’t really have third-party organizations doing analysis to create clear and easy labels for consumers to determine what values and what degree of resources are being used to develop different types of AI models, so that they can actually make informed decisions. But we have lots of precedent for this happening in other industries, like the fashion supply chain, food, and coffee. So I hope that someone out there listening will start working on this: develop some kind of third-party labeling system so that consumers can actually start making more informed choices.

The other thing that I would say is that we aren’t just consumers. That’s not the only way that individuals can push against the inevitability narratives of AI. We’ve seen amazing protests break out all around the world against data centers. We’ve seen protests from parents who feel that their children are being harmed and that this rapid escalation of AI advancement is getting out of control. We’ve seen artists and writers use the tools of litigation to counter these companies when they infringe on intellectual property in ways the creators don’t stand for. There are many different ways. AI is everywhere in your life, and that also means you as an individual, and within your community, have a thousand different touch points for how you can interact with the AI supply chain. At each of those touch points you can choose whether to resist, adopt, or be neutral. So, yeah, I hope that people actually feel significantly more agency than I think people generally feel today.

Amba Kak

Thank you. Okay, I think we should do a couple of questions. So you, you, and you. Okay, let’s go in that order. So we’ll take those three questions and then…

Audience member 2

Hello, thank you so much. This was, I think, my favorite panel of the whole summit, and also an all-female panel, which I think is nice. It’s also connected to a reflection. My question is this: I feel like in this space there are not nearly as many women as men. And, as you said, this is the only all-female panel. We’re here with a group of 15 people from Germany, about half of us male and half female, and often just our male counterparts get addressed, with somebody speaking only to them, whether it’s asking for money or pitching a business idea, whatever.

But I’ve also noticed other things. The theme is “AI all-inclusive”, right? But I’m wondering: who does this include, in this specific context? From this summit, who do you understand to be included in this vision of “all-inclusive”? And also, I don’t know if anybody else has noticed, but I feel like China is quite an important power in the AI governance space, yet the number of Chinese people I’ve seen here is very low. It’s just something I noticed, so it’s still just a reflection, and I wonder how you see this: what does this notion of “all-inclusive” mean for you, or how have you perceived it here?

Amba Kak

Thank you. Those were many important and provocative questions you just asked.

Audience member 3

I was curious, as a follow-up to our colleague here, about your view on the open-source Chinese models, which are clearly the most intelligent in the open-source space but also clearly carry a deep CCP perspective. So I’m curious: how does that come together in this ecosystem, and how can we leverage it appropriately?

Audience member 4

Hello. Thank you, panel, for the wonderful discussion. I’m an intellectual property and business lawyer, so my question is related to intellectual property, specifically for Ravneet. I wanted to know how you see the openness of AI in the context of intellectual property, as intellectual property in some ways places restrictions on openness.

Amba Kak

Why don’t we start with that question?

Ravneet Kaur

Okay, sure. So when you look at intellectual property, a lot of research, development, and innovation has gone into the development of that technology, and there are copyright and patent acts protecting it. When it comes to the Competition Commission, we come into the picture only if we find that there is an abuse: where whatever innovation has been done is being used to ensure that no other players can come into the same market, or to enforce conditions which are unfair. That is the only space where we come in.

Otherwise, the purpose of the Commission is not to stifle innovation. We are, in fact, there to protect innovation, because that’s the way to grow. That’s the way markets will grow further, competition will increase, and new players will keep coming in, with better technologies and better value for the customer. So consumer welfare is one of the very critical things we look at. That’s how we address these issues.

Amba Kak

I wonder if, Aastha, you can talk to the gender and that broader question on inclusion.

Astha Kapoor

Yeah. Thank you so much for that question. I think it’s what we’ve all been feeling as well. My very early, overwhelmed sense is that inclusion, as Karen was saying, is being used almost as a word for adoption, and I think that that is the primary framing that I’m taking away from this. Democratization is about market access; the working group also says so. And I think the gender perspective will follow what we’ve seen in previous iterations of the “tech will save us” story, the financial inclusion and digital financial inclusion variety, which is: get people online. And then what ends up happening is that when you realize you’re not able to make money off the bottom 80%, you start to get drop-offs. So we are at that moment of the hype cycle of getting everybody online, and then whether we’re able

Amba Kak

I don’t know, maybe you could take the question on Chinese open source AI and how we feel about it.

Alondra Nelson

I’ll try. One thing I would say is that there’s been some news reporting about the fact that this week took place during the Lunar New Year, and Ramadan as well, and that probably had some impact on participation. That shouldn’t be lost on any of us for this question of inclusion. I haven’t worked with the Chinese models, so I don’t know, but if they’re open-source models, you should be able to tune them so that they don’t have, at least, as much CCP ideological control. I don’t know if you do that in the training data or at the inference level or where you do it. And there are a lot of companies building on the Chinese models, even in the enterprise space, so that is clearly not a hurdle to some of the enterprise uses and applications that people want to build on them.

Amba Kak

I think we can take two more questions. Okay, so your hand, and I just want to take someone from the middle. You can go. Okay, the alarm just went off, so if you could also make sure it’s a crisp question, that would allow there to also be answers. Yeah.

Audience member 5

So I am really interested in how AI is going to impact labor. And one of the biggest concerns in this area is the fact that AI can train on the intellectual labor of so many people without giving credit and without giving compensation. There are obviously regulatory approaches to this, but I’m more interested in new research that’s happening on protecting publicly available data, be it images, websites, or written content, in such a way that if the data is used directly by AI, it’s either useless to it or harmful to it. I think there’s some research happening at the University of Chicago around that, and at some other places. So my question here is twofold.

First, is this a good approach to protect intellectual property or data, by creating protection by design? And two, how does it square with the idea of openness? Because on the one hand, it’s…

Amba Kak

Thank you for the question. I just want to make sure we have time for the others; they’re going to kick us out of this room. That’s the final question, and then maybe, Karen, you can address the labour question.

Audience member 6

Hi, I wanted to ask about open washing. We’ve been hearing the term in previous discussions about openness and competition. In terms of enforcement, how should competition authorities assess whether this openness is genuinely lowering entry barriers, or whether underlying dependencies still exist? Do we need new analytical tools? Does there need to be a reworking of the frameworks around competition? That’s essentially the question I wanted to ask. Thank you.

Amba Kak

Karen, and then Chairperson Kaur, you will have the last word.

Karen Hao

Sorry, can you remind me of the very last part of your question? You were talking about… the labour one. Yes. I agree with everything that you said, basically: yes, this is a huge problem. Labor exploitation is absolutely happening, both in the exploitation of the labor that is being used to produce the data and in the exploitation of the data workers who are cleaning the data. And given that the labor exploitation is happening all through the supply chain, I think that shows it is inherent in the logic of how these models are being created, and we need to fundamentally rethink that from the ground up.

Ravneet Kaur

So when we do a competition assessment, we look at numerous economic factors; it is not based only on what has been submitted to us. A very detailed analysis is done to understand whether there is any competition harm. The other aspect looked into is what effects there are: is there an appreciable adverse effect? We have to establish both of these things, and this is done on a case-to-case basis after a very rigorous analysis of both the data available in the public domain and the analysis done by our internal teams. Only then are we able to determine whether there is harm to competition.

Amba Kak

Okay, thank you all so much for being here. This is such a rich conversation and thank you all for being part of it. Thank you.

Related Resources: Knowledge base sources related to the discussion topics (23)
Factual Notes: Claims verified against the Diplo knowledge base (3)
Additional Context (confidence: high)

“The discussion was the only all‑female panel at the symposium, highlighting the symbolic importance and the need to move beyond tokenism.”

The knowledge base notes that diversity and inclusion efforts must go beyond tokenism to create meaningful change, reinforcing the report’s point about moving past tokenism [S89].

Additional Context (confidence: medium)

“The moderator described “openness” as a term that does more work than its technical definition, standing in for values such as democratisation, participation, agency and sovereignty.”

Discussion in the knowledge base about openness being taken as a tool for governance and its broader implications for startups and policy provides additional nuance to this framing of openness [S3].

Additional Context (confidence: medium)

“Nelson warned that U.S. AI infrastructure is often built behind non‑disclosure agreements, with contracts allowing data‑centres and cloud layers to be erected “in the dark of night,” leaving local communities unaware of the installations.”

The knowledge base emphasizes that trust in AI infrastructure must be built locally through community-level engagement, underscoring the importance of local awareness that the report highlights as lacking [S96] and [S99].

External Sources (99)
S1
AI Transformation in Practice_ Insights from India’s Consulting Leaders — -Audience member 1- Founder of Corral Inc -Audience member 6- Role/title not mentioned
S2
Day 0 Event #82 Inclusive multistakeholderism: tackling Internet shutdowns — – Nikki Muscati: Audience member who asked questions (role/affiliation not specified)
S3
Global Perspectives on Openness and Trust in AI — – Karen Hao- Audience member 1- Audience member 5
S5
S6
Global Perspectives on Openness and Trust in AI — -Audience member 2- Part of a group from Germany
S7
Day 0 Event #82 Inclusive multistakeholderism: tackling Internet shutdowns — – Nikki Muscati: Audience member who asked questions (role/affiliation not specified)
S8
The Arc of Progress in the 21st Century / DAVOS 2025 — – Paula Escobar Chavez: Audience member asking a question (specific role/title not mentioned)
S9
AI Transformation in Practice_ Insights from India’s Consulting Leaders — – Romal Shetty- Sanjeev Krishan- Audience member 3- Audience member 4
S10
Global Perspectives on Openness and Trust in AI — -Audience member 4- Intellectual property and business lawyer
S11
Global Perspectives on Openness and Trust in AI — – Alondra Nelson- Audience member 3
S12
Global Perspectives on Openness and Trust in AI — Speakers:Alondra Nelson, Audience member 3 Speakers:Anne Bouverot, Alondra Nelson, Audience member 3
S13
AI Transformation in Practice_ Insights from India’s Consulting Leaders — -Audience member 3- Student -Audience member 6- Role/title not mentioned
S14
Global Perspectives on Openness and Trust in AI — -Ravneet Kaur- Chairperson of the Competition Commission of India
S15
Global Perspectives on Openness and Trust in AI — This panel discussion at an AI summit explored the concept of “openness” in artificial intelligence development and gove…
S16
Global Perspectives on Openness and Trust in AI — – Alondra Nelson- Karen Hao- Amba Kak – Ravneet Kaur- Amba Kak
S17
Global Perspectives on Openness and Trust in AI — Speakers:Alondra Nelson, Amba Kak Speakers:Alondra Nelson, Karen Hao, Amba Kak Speakers:Ravneet Kaur, Amba Kak
S18
Digital Technologies and the Environment: a Synergy for the Future — 17. Sengupta, Rajid, 2021. World needs to rethink internet use post-COVID-19 . Retrieved 30 November 2021 from: https://…
S19
Global Perspectives on Openness and Trust in AI — – Alondra Nelson- Karen Hao – Ravneet Kaur- Karen Hao
S20
Global Perspectives on Openness and Trust in AI — Alondra Nelson, former deputy director of the White House Office of Science and Technology Policy, provided the panel’s …
S21
AI Safety at the Global Level Insights from Digital Ministers Of — -Alondra Nelson: Professor who holds the Harold F. Linder Chair and leads science, technology, and social values lab at …
S22
Global Perspectives on Openness and Trust in AI — -Alondra Nelson- Former deputy director of the White House Office of Science and Technology under President Biden
S23
Building Trusted AI at Scale – Keynote Anne Bouverot — -Anne Bouverot: Special Envoy for Artificial Intelligence, France; Diplomat and technologist; Former Director General of…
S24
How to make AI governance fit for purpose? — – Anne Bouverot- Chuen Hong Lew – Jennifer Bachus- Anne Bouverot
S25
Dare to Share: Rebuilding Trust Through Data Stewardship | IGF 2023 Town Hall #91 — Astha Kapoor:Yeah. Thank you for this and thank you for the audience too for coming at this very early hour. I guess to …
S26
Global Perspectives on Openness and Trust in AI — These key comments fundamentally transformed what could have been a technical discussion about open-source AI into a sop…
S27
Global Perspectives on Openness and Trust in AI — – Karen Hao- Audience member 1- Audience member 5
S28
Global Perspectives on Openness and Trust in AI — Speakers:Karen Hao, Audience member 1, Audience member 5
S30
Study highlights inaccuracy of AI chatbots in providing election information — A recentstudyby the AI Democracy Projects, acollaborationbetween Proof News and the Science, Technology and Social Value…
S31
[WebDebate #26 summary] AI on the international agenda – where do we go from here? — Another challenge identified by Nelson is the justification of decisions made by AI – policy makers assume that a decisi…
S32
Impact & the Role of AI How Artificial Intelligence Is Changing Everything — We make systems and making decisions about who receives public services, who qualifies for a loan, or who is flagged for…
S33
https://app.faicon.ai/ai-impact-summit-2026/global-perspectives-on-openness-and-trust-in-ai — Okay, so let’s get into it. I’m going to moderate this panel, so I’ll take a seat. Thank you. So let’s get into it. Okay…
S34
AI governance debated at IGF 2025: Global cooperation meets local needs — At theInternet Governance Forum (IGF) 2025 in Norway, an expert panel convened to examine the growing complexity of arti…
S35
How the EU’s GPAI Code Shapes Safe and Trustworthy AI Governance India AI Impact Summit 2026 — Summary:While both speakers oppose excessive regulation, Jay focuses on the balance between alignment and innovation, wh…
S36
Main Session 2: The governance of artificial intelligence — Different stakeholders sitting at different parts of the ecosystem bring forth different perspectives Kakkar emphasizes…
S37
Building Trusted AI at Scale – Keynote Anne Bouverot — Namaste. Bonjour. Excellencies, distinguished guests, dear guests. Dear friends. Thank you so much for welcoming me here…
S38
Competition law and regulations for digital markets: What are the best policy options for developing countries? (UNCTAD) — Competition policy and advocacy play an important role, especially in developing countries, where competition authoritie…
S39
Digital Public Infrastructure, Policy Harmonisation, and Digital Cooperation – AI, Data Governance,and Innovation for Development — The discussion was moderated by Chris Odu and featured panellists including Binty Mansaray (digital security auditor), A…
S40
Leveraging AI to Support Gender Inclusivity | IGF 2023 WS #235 — Another important point emphasized in the analysis is the significance of involving users and technical experts in the p…
S41
Global Perspectives on Openness and Trust in AI — Alondra Nelson, former deputy director of the White House Office of Science and Technology Policy, provided the panel’s …
S42
Global Perspectives on Openness and Trust in AI — This comment reframes the entire discussion by distinguishing between technical openness (sharing model weights) and tru…
S43
Policy Network on Internet Fragmentation (PNIF) — Marilia Maciel: Thank you Bruna. I can take a couple of questions. Let me just say a few words about digital sovereignty…
S44
Advancing Scientific AI with Safety Ethics and Responsibility — “And I think, I think, So, just in terms of paradigm change that we are seeing and that you mentioned, is that there nee…
S45
Panel Discussion: Europe’s AI Governance Strategy in the Face of Global Competition — – Alexander E. Brunner Decentralization as a key organizing principle Geller believes the current situation resembles …
S46
AI: Lifting All Boats / DAVOS 2025 — Dowidar points out that geopolitical restrictions on AI technology may lead to the development of alternative AI ecosyst…
S47
The rise of large language models and the question of ownership — What are large language models? Large language models (LLMs) are advanced AI systems that can understand and generate va…
S48
The strategic imperative of open source AI — This AI shift has been counterintuitive. Chinese companies historically favoured proprietary software, and Republicans w…
S49
The open-source gambit: How America plans to outpace AI rivals by democratising tech — The AI openness approach will spark a heated debate around the dual nature of open-source AI. The benefits are evident i…
S50
Host Country Open Stage — Nordhaug argues that digital public goods provide governments and organizations with greater control and sovereignty com…
S51
Artificial intelligence (AI) – UN Security Council — The discussion highlighted that open-source models enable a wide range of entities, from startups to larger corporations…
S52
Laying the foundations for AI governance — Artemis Seaford: That is a great question. So there is a misconception that companies do not want regulation. And maybe …
S53
WS #288 An AI Policy Research Roadmap for Evidence-Based AI Policy — Tatjana Titareva: Thank you, Alex. I would like to say indeed that part of the roadmap is the need for capacity building…
S54
Open Forum #27 Make Your AI Greener a Workshop on Sustainable AI Solutions — Example of energy efficiency passes for houses in Germany and EU that are obligatory, making energy consumption transpar…
S55
[Parliamentary Session 3] Researching at the frontier: Insights from the private sector in developing large-scale AI systems — Audience: Thank you very much for organizing this discussion. My name is Silvia Dinica. I’m a Romanian senator, but also…
S56
Scaling Trusted AI_ How France and India Are Building Industrial & Innovation Bridges — And when we trust things, scale is possible. So usually when people talk about topics such as scale or, sorry, so trust …
S57
Scaling AI Beyond Pilots: A World Economic Forum Panel Discussion — Key barriers to scaling include the need for high-quality data foundations, reimagined business processes, and comprehen…
S58
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Panel Discussion — But you’re particularly fascinating on the African continent because you were way ahead of the AI curve in a sense. You …
S59
The Future of Public Safety AI-Powered Citizen-Centric Policing in India — Explanation:Both speakers view India’s massive scale as an advantage for AI implementation rather than a challenge, sugg…
S60
Competition law and regulations for digital markets: What are the best policy options for developing countries? (UNCTAD) — In terms of problem-solving, accurate identification of issues is crucial before determining effective measures. This pr…
S61
Deep divide in Silicon Valley exposed over AI risks and potential. — The events at OpenAI over the weekend of 17 November have highlighted a deep divide in Silicon Valley regarding the risk…
S62
China’s AI industry is transforming with open-source models, challenging the OpenAI proprietary approach — China’s AI landscape iswitnessinga profound transformation as it embraces open-source large language models (LLMs), larg…
S63
Global Perspectives on Openness and Trust in AI — Alondra Nelson, former deputy director of the White House Office of Science and Technology Policy, provided the panel’s …
S64
Global Perspectives on Openness and Trust in AI — Nelson advocates for returning to the foundational understanding of openness from the open source movement, which was ab…
S65
Building Trusted AI at Scale – Keynote Anne Bouverot — This comment shifts the discussion from acknowledging competition to actively proposing strategic alliances. It introduc…
S66
Competition law and regulations for digital markets: What are the best policy options for developing countries? (UNCTAD) — Competition policy and advocacy play an important role, especially in developing countries, where competition authoritie…
S67
What is it about AI that we need to regulate? — The conflict between algorithmic transparency and trade secret protection emerged as a significant challenge across mult…
S68
Digital Public Infrastructure, Policy Harmonisation, and Digital Cooperation – AI, Data Governance,and Innovation for Development — The discussion was moderated by Chris Odu and featured panellists including Binty Mansaray (digital security auditor), A…
S69
Leveraging AI to Support Gender Inclusivity | IGF 2023 WS #235 — Lucia Russo:Maybe I’ll go first. And here, I’m not going to talk about the technical tools. I would go more broadly, I t…
S70
Opening — The overall tone was formal yet optimistic. Speakers acknowledged the serious challenges posed by rapid technological ch…
S71
WAIGF Opening Ceremony & Keynote — The overall tone was formal yet optimistic. Speakers expressed enthusiasm about the potential of digital technologies wh…
S72
Summit Opening Session — The tone throughout is consistently formal, diplomatic, and collaborative. Speakers maintain an optimistic and forward-l…
S73
Opening Ceremony — The tone is consistently formal, diplomatic, and optimistic yet cautionary. Speakers maintain a celebratory atmosphere a…
S74
[Opening] IGF Parliamentary Track: Welcome and Introduction — The tone is consistently formal, welcoming, and optimistic throughout. It maintains a diplomatic and collaborative atmos…
S75
https://dig.watch/event/india-ai-impact-summit-2026/global-perspectives-on-openness-and-trust-in-ai — And I think it’s the case that to go, you know, to be reminded of a kind of broader sense of open reminds us that, you k…
S76
Open Forum #15 Building Bridges for WSIS Plus a Multistakeholder Dialogue — Participant: Thank you for the invitation to join this panel. I’m glad to be here. I think you know that the internet in…
S77
Networking Session #237 Enhancing Investor Advocacy a Multistakeholder Approach — The tone was professional yet conversational, with the facilitator (Audrey Mocle) deliberately shifting from a more form…
S78
Dynamic Coalition Collaborative Session — Avri Doria: Okay. So, I think when I look at the problem on the large scale is when I find myself getting pessimistic be…
S79
Charting the Course: Discussing the Impact and Future of the Internet Governance Forum — Carolina Caeiro:Yeah, absolutely. Yeah, perhaps I didn’t emphasize the impact of the intersessional work enough in my re…
S80
IGF Intersessional Work Session: DC — Pari Esfandiari: Thank you very much. Let me begin by saying a few words about the dynamic coalitions that I’m represent…
S81
Strengthening Corporate Accountability on Inclusive, Trustworthy, and Rights-based Approach to Ethical Digital Transformation — The discussion maintained a professional, collaborative tone throughout, with speakers demonstrating expertise while ack…
S82
Open Forum #37 Her Data,Her Policies:Towards a Gender Inclusive Data Future — Bonnita Nyamwire: Thank you. I would like to add on what Suzanne was explaining. So raising awareness on risks, benefits…
S83
What policy levers can bridge the AI divide? — The discussion maintained a collaborative and optimistic tone throughout, with participants sharing experiences construc…
S84
WS #225 Bridging the Connectivity Gap for Excluded Communities — The discussion maintained a professional but increasingly urgent tone throughout. It began optimistically with solution-…
S85
Keynote-Vinod Khosla — Overall Tone:The tone is consistently optimistic, urgent, and pragmatic. Khosla maintains an enthusiastic and confident …
S86
ICF 2023: Digital Commons for Digital Sovereignty | IGF 2023 Day 0 Event #82 — Audience:know who you are, and then we can proceed. Yes, definitely. I am Alexandre Costa Barboza. I’m a fellow at the W…
S87
Opening of the session — Singapore: Thank you Mr. Chair on behalf of my delegation I’d like to express our thanks to you and your team for the p…
S88
Global South Solidarities for Global Digital Governance | IGF 2023 Networking Session #110 — Aastha Kapoor:Great, can we get the screen on? Hi, welcome to the earliest possible session of this conference, day one,…
S89
WS #166 Breaking Barriers: Empowering Women in Internet Network — Diversity and inclusion efforts need to go beyond tokenism to create meaningful change
S90
Keynotes — At the European Dialogue on Internet Governance (EuroDIG) 2024, the imperative of multistakeholder collaboration in shap…
S91
Open Forum #60 Cooperating for Digital Resilience and Prosperity — The discussion maintained a consistently collaborative and constructive tone throughout. It was professional yet engagin…
S92
De-briefing and Next steps — An Egyptian delegate expressed appreciation to Sarah, the moderator, and to the organisers for hosting what was describe…
S93
Financing Broadband Networks of the Future to bridge digital — The dialogue initiates with the probe into Miss Adriana Labadini’s auditory capabilities, to which she eagerly responds …
S94
To share or not to share: the dilemma of open source vs. proprietary Large Language Models — AlphaFold’s achievements in particular have had profound impacts on areas such as cancer treatment research and the crea…
S95
WS #208 Democratising Access to AI with Open Source LLMs — Audience: Is it working now? Yes, perfect. Hi. Thank you very much for your panel and the interesting discussion th…
S96
AI as critical infrastructure for continuity in public services — Trust is built locally and requires community-level engagement
S97
Sovereign AI for India – Building Indigenous Capabilities for National and Global Impact — Data platforms, vector databases, and edge computing capabilities are critical infrastructure layers often overlooked be…
S98
The Foundation of AI Democratizing Compute Data Infrastructure — Community participation and meeting people where they are builds trust in data infrastructure Local data ownership and …
S99
AI as critical infrastructure for continuity in public services — Trust building must occur at multiple levels simultaneously. While global frameworks provide necessary foundations, trus…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Alondra Nelson
4 arguments · 175 words per minute · 1527 words · 520 seconds
Argument 1
Openness as socio‑technical, not merely technical
EXPLANATION
Nelson argues that openness should be understood beyond the sharing of model weights or code; it includes the socio‑technical dimensions such as shared infrastructure, community control, and accountability.
EVIDENCE
She explains that while technical aspects like model weights, training data, and APIs are often highlighted, the broader notion of openness from the open-source movement is about shifting power, providing accountability, and enabling communities to modify and use technology for their own purposes [34-42].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Nelson stresses that openness must be understood beyond code sharing, encompassing accountability, transparency and democratic participation as a socio-technical construct [S3][S12].
MAJOR DISCUSSION POINT
Broadening openness to include socio‑technical factors
AGREED WITH
Amba Kak, Karen Hao
DISAGREED WITH
Karen Hao
Argument 2
Openness framed as democratic practice versus binary policy
EXPLANATION
Nelson critiques the binary view of openness adopted by the current administration, emphasizing that openness is a gradient and a democratic practice that should involve public participation and transparency.
EVIDENCE
She notes that the Biden administration originally treated openness as a spectrum rather than a binary choice (open vs closed) and that the current stance treats openness as a finished state, ignoring the need for democratic accountability and community involvement [28-33][40-42].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
She argues that openness is a spectrum and a democratic practice, not a binary open/closed choice, emphasizing public participation and transparency [S3][S12].
MAJOR DISCUSSION POINT
Opposing a binary conception of openness
Argument 3
Use of tariffs, trade, visas as heavy‑handed AI steering tools
EXPLANATION
Nelson points out that the U.S. is steering AI development through industrial levers such as tariffs, export controls, and costly immigration visas rather than through traditional regulation.
EVIDENCE
She cites specific levers including tariffs, trade policy, export controls on semiconductors, and the high cost of H-1B visas for high-tech workers as mechanisms the administration uses to shape the AI market [57-60].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Nelson notes the U.S. relies on industrial levers such as tariffs, export controls and costly H-1B visas to shape AI development rather than traditional regulation [S3].
MAJOR DISCUSSION POINT
Industrial policy as a tool for AI governance
AGREED WITH
Ravneet Kaur, Amba Kak
Argument 4
Executive‑only AI policy reduces democratic input compared with rulemaking
EXPLANATION
Nelson argues that AI policies made by executive authority bypass the public notice and comment process that formal rulemaking provides, thereby limiting democratic participation.
EVIDENCE
She contrasts formal rulemaking, which includes public notices and calls for input, with AI policy enacted solely by executive authority that lacks these democratic opportunities [63-66].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
She points out that executive-only AI actions bypass formal rulemaking’s public notice and comment processes, limiting democratic input [S3][S12].
MAJOR DISCUSSION POINT
Democratic deficits of executive‑only AI policy
DISAGREED WITH
Amba Kak
Amba Kak
2 arguments, 131 words per minute, 1825 words, 833 seconds
Argument 1
Moderator emphasizes openness as democratization and power shift
EXPLANATION
Kak frames the concept of ‘open’ as a stand‑in for broader democratic values such as participation, agency, and sovereignty, linking openness to the political economy of AI.
EVIDENCE
She observes that the word ‘open’ is doing a lot of work in the discussions, serving as a proxy for democratization, participation, agency, and even sovereignty [15-17].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Kak frames ‘open’ as a proxy for democratic values such as participation, agency and sovereignty, linking openness to a broader political economy of AI [S12].
MAJOR DISCUSSION POINT
Linking openness to democratic power redistribution
AGREED WITH
Alondra Nelson, Karen Hao
Argument 2
Moderator stresses need for more enforcement voices in AI governance
EXPLANATION
Kak calls for greater representation of enforcement agencies, such as competition authorities, at future AI governance forums to ensure accountability.
EVIDENCE
She remarks that the summit should include more enforcers so that players in the AI space are held accountable to the public and not above the law [253-254].
MAJOR DISCUSSION POINT
Increasing enforcement participation in AI governance
AGREED WITH
Alondra Nelson, Ravneet Kaur
DISAGREED WITH
Alondra Nelson
Anne Bouverot
2 arguments, 140 words per minute, 645 words, 275 seconds
Argument 1
Open source enables China and other nations to catch up and compete
EXPLANATION
Bouverot explains that open‑source software provides a strategic lever for countries like China to accelerate AI development and compete with the United States.
EVIDENCE
She references China’s use of open-source models such as DeepSeek and draws a parallel to Android’s role in enabling China to gain a seat at the AI table, noting that open source offers both benefits and risks while allowing challengers to close the gap [81-86].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Bouverot highlights how open-source models like DeepSeek allow China and other challengers to accelerate AI development and narrow the gap with the U.S. [S12][S3].
MAJOR DISCUSSION POINT
Open source as a competitive equalizer for emerging AI powers
DISAGREED WITH
Astha Kapoor
Argument 2
“Coalitions of the willing” among middle economies foster collective AI governance
EXPLANATION
Bouverot proposes that middle‑power countries can collaborate in ad‑hoc coalitions to collectively shape AI governance and compete with larger powers.
EVIDENCE
She lists countries such as Canada, France, Germany, Switzerland, India, Japan, and Australia, emphasizing that these middle economies can form flexible coalitions to influence AI governance rather than a single monolithic bloc [89-92].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
She proposes flexible coalitions of middle-power countries (e.g., Canada, France, Germany, India, Japan, Australia) to jointly shape AI governance rather than relying on a single bloc [S12][S23].
MAJOR DISCUSSION POINT
Cooperative governance among middle powers
DISAGREED WITH
Astha Kapoor
Astha Kapoor
1 argument, 185 words per minute, 852 words, 275 seconds
Argument 1
Global South risks of dependence on openness and need for agency
EXPLANATION
Kapoor warns that framing openness merely as a driver of adoption can trap Global South countries in dependency, diverting resources from structural challenges and turning them into test‑beds for external AI models.
EVIDENCE
She highlights structural issues in health and education, the allure of open data and multilingual datasets, and the danger that openness may shift focus away from needed investments, leaving Global South populations to do the labor of bringing people online without gaining control [111-119].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Kapoor warns that portraying openness merely as a driver of adoption can trap Global South nations in dependency, diverting resources from structural challenges and turning them into test-beds for external models [S12].
MAJOR DISCUSSION POINT
Openness as a potential source of dependency for the Global South
DISAGREED WITH
Anne Bouverot
Audience member 3
1 argument, 193 words per minute, 59 words, 18 seconds
Argument 1
Concerns about Chinese open‑source models carrying CCP influence
EXPLANATION
The audience member asks how to engage with Chinese open‑source AI models that may embed Chinese Communist Party perspectives, questioning whether they can be safely used.
EVIDENCE
The question explicitly raises the issue of Chinese open-source models potentially reflecting CCP ideology and seeks guidance on mitigating that influence [308-309].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Nelson acknowledges that Chinese open-source AI may embed CCP perspectives, but argues true open-source models can be tuned to remove such ideological influences [S3][S12].
MAJOR DISCUSSION POINT
Political implications of Chinese open‑source AI
Ravneet Kaur
2 arguments, 168 words per minute, 1442 words, 512 seconds
Argument 1
Identification of anti‑competitive practices in AI markets
EXPLANATION
Kaur outlines several anti‑competitive behaviors observed in digital and AI markets, such as self‑preferencing, tying, bundling, exclusive agreements, and parity arrangements.
EVIDENCE
She enumerates specific practices including self-preferencing, tying and bundling, leveraging, exclusive agreements, and parity arrangements across sectors like search engines, mobile ecosystems, e-commerce, and social media platforms [134-138].
MAJOR DISCUSSION POINT
Cataloguing AI market anti‑competitive conduct
Argument 2
Competition as a lever for digital sovereignty and market contestability
EXPLANATION
Kaur argues that robust competition is essential for maintaining digital sovereignty, preventing entry barriers, and ensuring that consumers can move data and benefits across platforms.
EVIDENCE
She stresses that competition guarantees autonomy to deploy AI according to national priorities, prevents dominance and lock-in, and requires transparency and accountability from AI system providers [162-166].
MAJOR DISCUSSION POINT
Using competition to safeguard digital sovereignty
AGREED WITH
Alondra Nelson, Amba Kak
DISAGREED WITH
Audience member 6
Audience member 6
1 argument, 128 words per minute, 78 words, 36 seconds
Argument 1
Call for new tools to detect “open‑washing” and assess entry‑barrier effects
EXPLANATION
The audience member asks whether competition authorities need new analytical tools and frameworks to determine if claimed openness genuinely lowers barriers to entry or masks underlying dependencies.
EVIDENCE
The question explicitly requests guidance on assessing open-washing and whether competition frameworks must be reworked to capture hidden dependencies [369-374].
MAJOR DISCUSSION POINT
Need for analytical tools to evaluate open‑washing
DISAGREED WITH
Ravneet Kaur
Karen Hao
4 arguments, 171 words per minute, 1765 words, 618 seconds
Argument 1
Open, consent‑based Māori language speech model as participatory AI
EXPLANATION
Hao describes a project in New Zealand where a Māori‑language radio station collaborated with its community to build an open‑source speech‑recognition model, emphasizing consent, community education, and shared value.
EVIDENCE
She details how Te Hiku Media engaged the Māori community, obtained consent, educated participants about AI, collected data, and iteratively co-designed applications, all built on the open-source Mozilla DeepSpeech model [184-202].
MAJOR DISCUSSION POINT
Community‑driven, consent‑based AI development
AGREED WITH
Alondra Nelson, Amba Kak
Argument 2
Redefining “scale” as distributed, community‑specific model development
EXPLANATION
Hao argues that true scale should mean many communities and industries developing their own models, not a single monopoly distributing a single model to everyone.
EVIDENCE
She contrasts the Silicon Valley notion of scale (single distributor) with a vision where multiple communities and sectors each develop models at scale, noting data-poor industries lack the resources for current large-scale models [207-211].
MAJOR DISCUSSION POINT
Reconceptualizing scale for inclusive AI diffusion
DISAGREED WITH
Alondra Nelson
Argument 3
Large‑scale open‑source LLM project illustrates collaborative data governance
EXPLANATION
Hao highlights a “big science” initiative that gathered thousands of researchers across many countries to create an open‑source LLM with transparent data governance and shared value for cultural institutions.
EVIDENCE
She mentions a project involving over a thousand researchers from 70 countries and 250 institutions that aimed to develop an open-source LLM while ensuring responsible data curation, transparency, and value return to contributing cultural institutions [179-182].
MAJOR DISCUSSION POINT
Collaborative, transparent development of large language models
Argument 4
Recognition of labor exploitation throughout AI data pipelines
EXPLANATION
Hao points out that labor exploitation occurs at multiple stages of AI development, from data collection to data‑cleaning, and calls for a fundamental redesign of the AI supply chain.
EVIDENCE
She notes that both the labor used to produce training data and the data-workers who clean that data are exploited, indicating that exploitation is embedded in the current model creation logic and requires a ground-up rethink [380-381].
MAJOR DISCUSSION POINT
Labor exploitation in AI data pipelines
Audience member 2
3 arguments, 190 words per minute, 256 words, 80 seconds
Argument 1
Questioning who is truly included in the “all‑inclusive” AI vision
EXPLANATION
The audience member asks which groups are actually encompassed by the summit’s claim of an all‑inclusive AI vision, highlighting perceived gaps.
EVIDENCE
She asks who is included in the all-inclusive vision, noting the limited presence of Chinese participants and questioning the breadth of inclusion [298-306].
MAJOR DISCUSSION POINT
Clarifying the scope of inclusivity in AI governance
Argument 2
Highlighting gender imbalance and the significance of an all‑female panel
EXPLANATION
The audience member points out that the panel is the only all‑female one at the summit, underscoring the broader gender imbalance in AI fields.
EVIDENCE
She remarks that the panel is the sole female-only panel at the summit, emphasizing its symbolic importance amid general gender disparities [295-298].
MAJOR DISCUSSION POINT
Gender representation in AI forums
Argument 3
Noting limited Chinese representation despite geopolitical relevance
EXPLANATION
The audience member observes the scarcity of Chinese participants, questioning whether the summit truly reflects global AI governance dynamics.
EVIDENCE
She notes the low visibility of Chinese participants despite China’s significant role in AI geopolitics [298-306].
MAJOR DISCUSSION POINT
Geopolitical representation gaps
Audience member 1
1 argument, 138 words per minute, 141 words, 61 seconds
Argument 1
Need for consumer‑facing labels to enable informed AI tool choices
EXPLANATION
The audience member suggests creating third‑party labeling systems that inform consumers about the values, resources, and ethical implications of AI tools, similar to labeling in other industries.
EVIDENCE
She proposes that third-party organizations develop clear, easy-to-understand labels indicating the values and resource usage of AI models, drawing parallels to labeling in fashion, food, and coffee sectors [277-282].
MAJOR DISCUSSION POINT
Consumer labeling for AI transparency
AGREED WITH
Karen Hao
Audience member 5
1 argument, 183 words per minute, 167 words, 54 seconds
Argument 1
Proposal for “protection‑by‑design” to safeguard data and IP rights
EXPLANATION
The audience member asks whether designing AI systems to protect intellectual property and data at the source is a viable approach and how it aligns with openness.
EVIDENCE
She describes research aimed at protecting publicly available data (images, websites, text) by making it unusable or harmful to AI models, framing it as a “protection-by-design” strategy and questioning its compatibility with openness [355-363].
MAJOR DISCUSSION POINT
Designing AI to protect data and IP
Audience member 4
1 argument, 140 words per minute, 57 words, 24 seconds
Argument 1
Discussion of IP limits and competition commission’s role in abuse cases
EXPLANATION
The audience member asks how openness interacts with intellectual property protections and what role the competition commission plays when IP is abused.
EVIDENCE
She explains that while copyrights and patents protect innovation, the competition commission intervenes only when there is an abuse, ensuring that innovation is not stifled and consumer welfare is protected [315-322].
MAJOR DISCUSSION POINT
Intersection of IP law and competition enforcement
Agreements
Agreement Points
Openness must be understood as a socio‑technical, democratic practice rather than merely sharing code or model weights.
Speakers: Alondra Nelson, Amba Kak, Karen Hao
Openness as socio‑technical, not merely technical; Moderator emphasizes openness as democratization and power shift; Open, consent‑based Māori language speech model as participatory AI
All three stress that openness involves community control, accountability, and democratic participation beyond technical sharing of model weights or code [34-42][15-17][184-202].
POLICY CONTEXT (KNOWLEDGE BASE)
This view aligns with Alondra Nelson’s framework that treats openness as a spectrum rooted in democratic values rather than simple code release, emphasizing socio-technical dimensions over binary sharing [S41][S42].
Open‑source approaches are a strategic lever to shift AI power and enable emerging actors to compete with dominant players.
Speakers: Alondra Nelson, Anne Bouverot, Karen Hao
Openness as socio‑technical, shifting power; Open source enables China and other nations to catch up; Large‑scale open‑source LLM project illustrates collaborative data governance
They agree that open source can democratize AI development, allowing challengers to narrow gaps with leading AI powers [40-42][81-86][179-182].
POLICY CONTEXT (KNOWLEDGE BASE)
Policy discussions highlight open-source as a geopolitical lever to create alternative AI ecosystems and level the playing field, as noted in UN Security Council remarks and analyses of strategic shifts in AI competition [S46][S48][S51].
Governments are steering AI primarily through non‑regulatory levers such as industrial policy and competition enforcement rather than formal rulemaking.
Speakers: Alondra Nelson, Ravneet Kaur, Amba Kak
Use of tariffs, trade, visas as heavy‑handed AI steering tools; Competition as a lever for digital sovereignty and market contestability; Moderator stresses need for more enforcement voices in AI governance
All point to the importance of policy tools outside traditional regulation, such as tariffs, export controls, costly visas, and competition policy, to shape AI markets and protect sovereignty [57-60][162-166][253-254].
There is a need for clear, third‑party labeling of AI models to enable informed consumer choices.
Speakers: Audience member 1, Karen Hao
Need for consumer‑facing labels to enable informed AI tool choices; Third‑party labeling system for AI transparency
Both call for easy-to-understand labels that disclose values, resource use and ethical implications of AI tools [277-282].
POLICY CONTEXT (KNOWLEDGE BASE)
Calls for transparent, third-party labeling echo regulatory proposals for AI disclosures and consumer-facing transparency mechanisms similar to energy-efficiency labeling systems [S54][S53].
Similar Viewpoints
Open source is viewed as a strategic lever that can democratize AI power and allow emerging nations to compete with dominant AI producers [40-42][81-86].
Speakers: Alondra Nelson, Anne Bouverot
Openness as socio‑technical, shifting power; Open source enables China and other nations to catch up
Both emphasize that policy instruments beyond formal rulemaking—industrial policy and competition enforcement—are central to AI governance [57-60][162-166].
Speakers: Alondra Nelson, Ravneet Kaur
Use of tariffs, trade, visas as heavy‑handed AI steering tools; Competition as a lever for digital sovereignty and market contestability
Community‑driven AI development must be paired with safeguards to avoid dependency and exploitation of vulnerable groups [184-202][111-119].
Speakers: Karen Hao, Astha Kapoor
Open, consent‑based Māori language speech model as participatory AI; Global South risks of dependence on openness and need for agency
Gender representation remains a critical concern; all‑female panels are rare and symbolically important [4-5][295-298].
Speakers: Amba Kak, Audience member 2
Female‑only panel highlighted; Highlighting gender imbalance and significance of an all‑female panel
Unexpected Consensus
Decentralised, contestable AI ecosystems as an alternative to monopoly‑driven scale.
Speakers: Karen Hao, Ravneet Kaur
Redefining scale as distributed, community‑specific model development; Competition as a lever for digital sovereignty and market contestability
A journalist focused on participatory AI and a competition regulator converge on the need for many independent actors and contestable markets, an alignment not anticipated given their different professional domains [207-211][162-166].
POLICY CONTEXT (KNOWLEDGE BASE)
The push for decentralized AI governance is reflected in panels advocating decentralized oversight, digital sovereignty, and public-good AI models to counter monopoly concentration [S44][S45][S46][S51].
Overall Assessment

The panel shows strong convergence on three themes: (1) openness must be socio‑technical and democratic; (2) open‑source is a lever for power redistribution and geopolitical competition; (3) governments are using industrial and competition tools rather than traditional regulation to steer AI. Additional cross‑cutting agreements include the need for consumer‑facing transparency and heightened attention to gender and inclusion.

High consensus across speakers on the nature of openness and the role of non‑regulatory policy levers, indicating a shared understanding that AI governance requires multi‑dimensional, community‑centric approaches and active enforcement mechanisms. This consensus suggests that future policy work can build on these common foundations to design inclusive, competitive, and transparent AI ecosystems.

Differences
Different Viewpoints
Definition of openness – binary vs. socio‑technical gradient
Speakers: Alondra Nelson, Anne Bouverot
Openness as socio‑technical, not merely technical; Open source enables China and other nations to catch up and compete
Alondra argues that openness must be understood as a socio-technical, democratic gradient rather than a binary state, criticizing the current U.S. stance that treats openness as a finished condition [28-33][34-42]. Anne, by contrast, emphasizes open-source software primarily as a strategic lever that allows countries such as China to accelerate AI development and compete globally [81-86]. The two positions diverge on whether openness is a value-driven democratic practice or a pragmatic geopolitical tool.
POLICY CONTEXT (KNOWLEDGE BASE)
Scholars argue that openness should be seen as a gradient rather than a binary choice, a framing championed by Nelson and others in recent policy debates [S41][S42].
Openness for the Global South – empowerment vs. dependency risk
Speakers: Astha Kapoor, Anne Bouverot
Global South risks of dependence on openness and need for agency; Open source enables China and other nations to catch up and compete; “Coalitions of the willing” among middle economies foster collective AI governance
Astha warns that framing openness merely as a driver of adoption can trap Global-South countries in dependence, diverting resources from structural challenges and turning them into test-beds for external AI models [111-119]. Anne promotes open-source as a competitive tool for middle-power and emerging economies and suggests ad-hoc coalitions to harness openness for collective governance [88-92]. The disagreement centers on whether openness primarily empowers Global-South actors or creates new forms of reliance.
POLICY CONTEXT (KNOWLEDGE BASE)
Debates on digital sovereignty and public-good AI highlight both empowerment opportunities and risks of new dependencies for the Global South, as discussed in digital sovereignty forums and African AI ecosystem panels [S43][S50][S58].
What constitutes “scale” in AI development
Speakers: Karen Hao, Alondra Nelson
Redefining “scale” as distributed, community‑specific model development; Openness as socio‑technical, not merely technical
Karen proposes that true scale means many communities and industries each developing their own models, rejecting the Silicon-Valley notion of a single monopoly distributing a model to everyone [207-211]. Alondra points out that many so-called open-source models (e.g., Llama 2/3) are only partially open and remain dominated by large players, suggesting that current scaling practices do not deliver democratic openness [43-45]. The two speakers disagree on how scale should be achieved and who should control it.
POLICY CONTEXT (KNOWLEDGE BASE)
The notion of “scale” has been examined in parliamentary testimonies and World Economic Forum panels, emphasizing data foundations, talent, trust, and governance as key dimensions of scaling AI beyond pilots [S55][S56][S57][S59].
Governance mechanism – executive‑only AI policy vs. formal rulemaking with democratic input
Speakers: Alondra Nelson, Amba Kak
Executive‑only AI policy reduces democratic input compared with rulemaking; Moderator stresses need for more enforcement voices in AI governance
Alondra critiques the U.S. approach of steering AI through executive authority, which bypasses the public notice and comment processes that provide democratic participation [63-66]. Amba calls for a greater presence of enforcement agencies (e.g., competition authorities) at AI governance forums to ensure accountability and public oversight [253-254]. The disagreement lies in the preferred governance pathway: executive levers versus rule-based, multi-stakeholder processes.
POLICY CONTEXT (KNOWLEDGE BASE)
The tension between executive-driven AI strategies and inclusive rulemaking is reflected in calls for broader stakeholder participation and capacity-building in AI regulation [S52][S44][S45].
Adequacy of existing competition tools to detect “open‑washing”
Speakers: Ravneet Kaur, Audience member 6
Competition as a lever for digital sovereignty and market contestability; Call for new tools to detect “open‑washing” and assess entry‑barrier effects
Ravneet explains that competition assessments are conducted case-by-case, using detailed analysis of market data and public information to identify anti-competitive harms [382-388]. The audience member asks whether new analytical tools and possibly revised competition frameworks are needed to determine if claimed openness genuinely lowers entry barriers or merely masks hidden dependencies [369-374]. This reflects a disagreement on whether current competition methodologies are sufficient.
POLICY CONTEXT (KNOWLEDGE BASE)
UNCTAD’s review questions whether current competition law instruments can effectively identify open-washing, suggesting a need for updated antitrust tools [S60].
Unexpected Differences
Positive view of Chinese open‑source models vs. concerns about CCP influence
Speakers: Anne Bouverot, Audience member 3, Alondra Nelson
Open source enables China and other nations to catch up and compete; Concerns about Chinese open‑source models carrying CCP influence; Openness framed as democratic practice versus binary policy
Anne praises Chinese use of open-source AI as a legitimate way for China to gain a seat at the table and compete globally [81-86]. An audience member, however, raises the unexpected worry that these models may embed Chinese Communist Party perspectives, questioning their suitability for broader adoption [308-309]. Alondra adds that geopolitical justifications are sometimes used to avoid the socio-technical aspects of openness, hinting at the tension between technical openness and political control [46-48]. This clash between a panelist’s optimistic framing of Chinese open-source and the audience’s security-political concerns was not anticipated in the panel’s discourse.
POLICY CONTEXT (KNOWLEDGE BASE)
Analysts note the strategic shift of Chinese firms toward permissive open-source licenses while warning of potential state influence, illustrating the dual perception of openness versus geopolitical risk [S48][S62].
Overall Assessment

The panel displayed substantial disagreement on how openness should be defined and operationalised, on the role of openness for Global‑South and middle‑power countries, on the appropriate notion of scale for AI models, and on the governance mechanisms best suited for democratic AI policy. While all participants agreed on the importance of inclusive, accountable AI, they diverged sharply on whether openness is primarily a democratic value, a strategic geopolitical tool, or a community‑driven practice, and on the adequacy of existing competition frameworks to police “open‑washing”.

High – The disagreements cut across core themes (openness, governance, competition, and geopolitical inclusion), indicating that consensus on policy direction will require bridging divergent framings of openness and reconciling competing priorities of democratic participation, national competitiveness, and protection against hidden dependencies.

Partial Agreements
All four speakers share the overarching goal of making AI more inclusive, democratic and beneficial for broader societies. However, they diverge on the pathways: Alondra stresses a socio‑technical, democratic framing of openness [34-42]; Anne focuses on open‑source as a strategic lever for national competitiveness and coalition‑building [81-86][88-92]; Karen highlights community‑driven, consent‑based projects as the model for inclusive AI [184-202]; Astha cautions that unchecked openness can create dependency for Global‑South nations and calls for agency‑centred approaches [111-119].
Speakers: Alondra Nelson, Anne Bouverot, Karen Hao, Astha Kapoor
Openness as socio‑technical, not merely technical; Open source enables China and other nations to catch up and compete; Open, consent‑based Māori language speech model as participatory AI; Global South risks of dependence on openness and need for agency
Takeaways
Key takeaways
Openness in AI should be understood as a socio‑technical, democratic practice rather than a simple binary technical choice.
U.S. AI governance is increasingly driven by industrial policy tools (tariffs, export controls, visa costs) which, while heavy‑handed, reduce formal democratic rule‑making input.
Open‑source AI serves as a strategic lever for both China and middle‑power coalitions, enabling catch‑up but also raising concerns about dependence and geopolitical influence.
Middle‑power “coalitions of the willing” can collectively shape AI governance, share resources, and bolster digital sovereignty.
Community‑driven, consent‑based AI projects (e.g., the Māori language model) demonstrate alternative, inclusive pathways to scale that prioritize local value and participation.
Competition policy is a critical tool for safeguarding digital sovereignty, preventing ecosystem lock‑in, and ensuring contestable markets.
There is a need for stronger enforcement voices (competition authorities, regulators) in AI governance forums.
Gender and geographic representation remain uneven; the all‑female panel highlighted the importance of inclusive participation.
Labor exploitation and data‑worker rights are pervasive concerns across the AI supply chain and require systemic redesign.
Consumers lack clear, third‑party labeling of AI products, limiting informed choice; a labeling framework is suggested.
Resolutions and action items
Develop and publish a third‑party AI‑product labeling system to convey openness, data provenance, and ethical attributes.
Competition Commission of India to continue its AI market study, incorporate transparency and accountability checkpoints across the AI lifecycle, and share findings publicly.
Encourage the creation of digital public infrastructure (compute platforms, open datasets) to support startups and small‑language models, especially in the Global South.
Facilitate the formation of ad‑hoc “coalitions of the willing” among middle‑power nations to co‑fund and co‑govern open‑source AI initiatives.
Integrate community participation mechanisms into AI conferences and policy processes, moving beyond traditional academic‑only formats.
Explore regulatory tools to detect and curb “open‑washing” practices in AI markets.
Unresolved issues
How to balance openness with security concerns for high‑risk AI applications (e.g., nuclear‑related AI).

Concrete mechanisms to restore democratic input in U.S. AI policy when decisions are made via executive or industrial‑policy levers.

Methods for mitigating the risk that Chinese open‑source models embed CCP ideological biases while remaining technically open.

Design and governance of an effective, globally recognized AI labeling standard; who would oversee it and how it would be enforced.

Specific strategies to protect data‑workers and address labor exploitation throughout the AI data pipeline.

How to operationalize competition as a sovereignty tool without stifling innovation, especially in emerging markets.

Ways to ensure broader gender and geographic inclusion in future AI governance summits and decision‑making bodies.
Suggested compromises
Treat openness as a gradient rather than a binary, allowing selective restriction for high‑risk domains while keeping broader socio‑technical openness.

Combine industrial‑policy tools with formal rule‑making processes to retain democratic participation while achieving strategic objectives.

Allow middle powers to adopt open‑source tools collaboratively while establishing safeguards against dependence on any single nation’s ecosystem.

Promote small‑scale, community‑centric AI development as a complement to large‑scale models, thereby redefining “scale” to include distributed, localized solutions.

Implement transparency requirements for AI providers (technical and governance) as a middle ground between full openness and proprietary secrecy.
Follow-up Questions
How can democratic input be ensured in AI policy when decisions are made through executive authority, industrial policy, and trade policy rather than formal rulemaking?
Alondra highlighted that current U.S. AI governance relies on non‑regulatory levers, which reduces public participation and accountability, indicating a need to explore mechanisms for democratic oversight.
Speaker: Alondra Nelson
What are the unique forms of leverage and dependence for Global South countries like India in the current AI environment, and how can they combat this dependence?
Amba asked Astha about the specific challenges faced by Global South nations, and Astha noted the risk of being merely test‑beds, suggesting further investigation into strategies for reducing reliance on external AI providers.
Speaker: Amba Kak, Astha Kapoor
Can competition be used as a lever in the sovereignty toolkit for global‑majority countries to retain control over AI development and deployment?
Amba queried Ravneet on whether competition policy can support AI sovereignty, prompting discussion of competition’s role in preventing market lock‑in and ensuring fair access.
Speaker: Amba Kak, Ravneet Kaur
Who is included in the vision of an all‑inclusive AI future, and why is Chinese representation low in AI governance discussions?
The participant questioned the inclusivity of current AI governance frameworks and noted the under‑representation of Chinese voices, indicating a need to examine broader stakeholder inclusion.
Speaker: Audience member 2
How should open‑source Chinese AI models, which may embed CCP perspectives, be leveraged appropriately in the global AI ecosystem?
The audience member raised concerns about ideological biases in Chinese open‑source models and asked how they can be responsibly used, pointing to a gap in understanding governance of such models.
Speaker: Audience member 3
Is ‘protection by design’—making publicly available data unusable or harmful to AI models—a viable approach to safeguard intellectual property, and how does it align with openness principles?
The question seeks research on technical methods to protect data while balancing openness, highlighting tension between data protection and open AI development.
Speaker: Audience member 5
How should competition authorities assess whether ‘open‑washing’ genuinely lowers entry barriers, and do they need new analytical tools or revised frameworks?
The participant called for methodological innovation in competition law to detect superficial openness, suggesting a research agenda for regulatory tools.
Speaker: Audience member 6
What mechanisms can create third‑party labeling systems to inform consumers about the values, resource use, and openness of AI models?
Karen suggested the need for independent labeling akin to those in fashion or food industries, indicating a research direction for consumer‑focused transparency tools.
Speaker: Karen Hao
How can transparency around data‑center and cloud infrastructure be improved to address community concerns and enhance openness of the AI stack?
Alondra pointed out that data‑center siting often occurs behind NDAs, revealing a gap in public knowledge that warrants further investigation.
Speaker: Alondra Nelson
What are effective ways to build and sustain ‘coalitions of the willing’ among middle‑power countries to promote open‑source AI and shared governance?
Anne emphasized the potential of ad‑hoc alliances but left open how to operationalize such coalitions, suggesting a need for strategic research.
Speaker: Anne Bouverot
How can digital public infrastructure—such as compute platforms and multilingual datasets—be developed to support small‑scale, language‑specific AI models in countries like India?
Ravneet highlighted the importance of public compute and data resources for local AI development, indicating a research area on building national AI infrastructure.
Speaker: Ravneet Kaur
What governance frameworks are needed to ensure transparency and accountability throughout the AI system lifecycle, especially regarding data access, compute, and skill‑set development?
Ravneet called for comprehensive governance across the AI lifecycle, pointing to a need for policy research on transparency mechanisms.
Speaker: Ravneet Kaur
How can AI conferences be redesigned to meaningfully include community members, students, and non‑technical stakeholders, moving beyond traditional academic formats?
Alondra reflected on the unique community‑centric nature of this summit and suggested rethinking conference formats, indicating a research direction on inclusive event design.
Speaker: Alondra Nelson
How can cooperatives serve as co‑designers and recipients in AI development to ensure one‑member‑one‑vote governance and equitable value distribution?
Astha mentioned cooperatives as a promising governance model but did not detail implementation, highlighting a gap for further study.
Speaker: Astha Kapoor
How can the AI community detect and mitigate corporate ‘open‑washing’ where companies adopt inclusive language while locking users into closed platforms?
Karen warned about sophisticated corporate rhetoric that masks closed ecosystems, suggesting research into detection and counter‑strategies for open‑washing.
Speaker: Karen Hao
How can more enforcement agencies and regulators be included in AI governance discussions and future summits to ensure accountability?
Amba called for greater representation of enforcers at future events, indicating a need to explore mechanisms for regulator participation.
Speaker: Amba Kak

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

National Disaster Management Authority


Session at a glance: Summary, keypoints, and speakers overview

Summary

The panel examined how AI can be institutionalized within national disaster risk reduction frameworks to build resilient governance amid rising disaster frequency, intensity, and complexity, while noting that unprecedented advances in AI present new opportunities [2][3][4][5][7]. The Minister of IT from Mauritius argued that disasters now affect both the physical and virtual worlds, urging policies that create digital twins to link them and insisting on human-in-the-loop decision making for early-warning alerts, especially in low-resource settings [30-33][36-38][46-48][53-55].


Beth Woodhams of the UK Met Office described the development of hybrid weather models that blend physics-based and machine-learning outputs, emphasizing co-development of models, benchmarks, and evaluation metrics with partners such as India to build user trust [65-71][73]. Som Satsangi highlighted India’s shortage of high-performance computing (current supercomputers total ~40 petaflops, versus the exaflop scale used by U.S. systems) and argued that the massive cost of the required infrastructure, power, and cooling necessitates public-private partnerships [92-100][101-108][111-118][121-124].


Pankaj Shukla outlined a five-layer AI architecture (infrastructure, operating system, services, models, applications) and explained how cloud providers can deliver a centralized “living intelligence” while supporting edge or air-gapped deployments for low-connectivity, high-risk environments [136-152]. Nikhilesh Kumar illustrated how startups can assemble data, asset, and workflow layers into Disaster Prediction and Governance platforms, using AI to nowcast dam inflows from satellite and radar data and to extract structured hazard information from unstructured news sources [155-166][169-172].


Dr. Mrutyunjay Mohapatra noted that AI is being integrated as a hybrid complement to physical models at the Indian Meteorological Department, but limited computing capacity (~28 petaflops) hampers full use of satellite data, prompting exploration of low-cost GPU-based “box” models for smaller nations [189-197][202-207][208-210]. Dr. Krishna Vatsa explained that NDMA possesses extensive hazard observations and plans to expand automated weather stations and seismometers, yet struggles with processing this data, integrating AI models, and linking central data centers with local early-warning agencies [220-229][230-238][239-247].


Across the discussion, participants agreed that effective AI-driven DRR requires institutional reforms, interoperable sovereign data architectures, robust computing infrastructure, and sustained collaboration between governments, academia, and the private sector [7][80][111][146][213]. They also stressed that human oversight must remain central to avoid over-automation and to ensure trust in AI-generated alerts [46-48][53-55]. The panel concluded that scaling AI for disaster resilience in India and similar contexts hinges on coordinated policy, investment in infrastructure, and inclusive data ecosystems that can operate both centrally and at the edge [7][111-118][146-152][219-227][230-236]. For low-resource nations, affordable, modular AI solutions, such as the GPU-based box model and community-driven digital public infrastructure (DPI), can deliver early warnings without requiring massive supercomputing [207-210][169-172].


Keypoints


Major discussion points


Policy reforms must bridge the physical and virtual worlds and keep humans in the loop.


The Minister of IT from Mauritius emphasized that disasters now include cyber-attacks and that a “digital twin” linking physical infrastructure to virtual systems is essential, while stressing that “human-in-the-loop” decision-making cannot be fully automated for life-saving alerts [30-33][34-42][45-47][48-55].


National meteorological services should adopt hybrid AI-physics models through co-development and joint benchmarking.


The UK Met Office explained its strategy of gradually blending machine-learning outputs with traditional physics-based forecasts, and highlighted the need for shared standards and evaluation metrics with partners such as India to build user trust [65-71][72-74].


Scaling AI for disaster risk reduction requires massive, sovereign compute infrastructure and sustainable resources.


Hewlett-Packard Enterprise’s representative outlined India’s current super-computing shortfall (≈ 40 petaflops vs. the 1-2 exaflop capacity of leading U.S. systems), the high cost of building exaflop-class facilities, and the parallel need for power, water-cooling and public-private financing [92-100][101-110][111-119][120-127].


A layered cloud-edge architecture is needed to deliver real-time, low-connectivity AI intelligence and ensure data security.


Google Cloud’s head described a five-layer stack (infrastructure, operating system, platform services, multimodal models, and applications) that can operate centrally, at regional hubs, or on rugged edge devices in “zero-trust” air-gap mode, enabling actionable insights even when disconnected from the core network [136-152][153-158].


Start-ups can accelerate population-scale DRR by building digital public infrastructure (DPI) and digital public goods (DPG) layers that fuse heterogeneous agency data into actionable workflows.


Vassar Labs’ CEO illustrated how AI-driven now-casting of millions of water bodies, automated risk extraction from unstructured news, and the creation of interoperable datasets feed into national-level platforms and insurance products [155-168][169-172].


Overall purpose / goal of the discussion


The panel was convened to explore how AI can be institutionalized within national disaster-risk-reduction (DRR) architectures-spanning policy, data governance, computational infrastructure, and operational workflows-to create resilient, scalable early-warning and response systems, with a particular focus on the challenges and opportunities for India and other resource-constrained nations.


Tone of the discussion


The conversation began in a formal, forward-looking tone, emphasizing the urgency of integrating AI into disaster governance. As technical experts spoke, the tone shifted to a detailed, problem-solving mode, highlighting concrete infrastructure gaps and the need for standards, partnerships, and human oversight. Throughout, the mood remained collaborative and constructive, ending on an optimistic note about leveraging private-sector innovation and international cooperation to achieve scalable, low-cost AI-enabled resilience.


Speakers

Speaker: Moderator


Role/Title: Session Moderator


Areas of Expertise: Facilitating discussions on disaster risk governance and AI integration


Citation: [S12]


Speaker: Avinash Ramtohul


Role/Title: His Excellency Dr. Avinash Ramtohul, Minister for Information Technology, Communication and Innovation, Republic of Mauritius


Areas of Expertise: Disaster risk governance, AI-enabled early warning systems, cybersecurity, policy reform for AI in resilience


Citation: [S14]


Speaker: Beth Woodhams


Role/Title: Senior Manager, UK Met Office


Areas of Expertise: Disaster risk reduction, weather forecasting, AI and machine-learning model blending, meteorological services, international co-development of AI tools


Citation: [S8]


Speaker: Som Satsangi


Role/Title: Former Senior Vice President and Managing Director, Hewlett Packard Enterprise India


Areas of Expertise: AI deployment in geospatial and climate analytics, large-scale computing infrastructure, sovereign data architectures, procurement & policy for AI in disaster management


Citation: [S2]


Speaker: Nikhilesh Kumar


Role/Title: CEO and Co-founder, Vassar Labs


Areas of Expertise: AI for disaster risk reduction, startup-driven hazard modeling, data-driven decision workflows, real-time satellite and radar nowcasting


Citation: [S5]


Speaker: Pankaj Shukla


Role/Title: Head of Customer Engineering, Google Cloud India


Areas of Expertise: Cloud AI platforms, real-time analytics at scale, hazard mapping, predictive analytics, low-connectivity AI deployment, misinformation mitigation


Speaker: Dr. Mrutyunjay Mohapatra


Role/Title: Director General, India Meteorological Department (IMD)


Areas of Expertise: Meteorology, early warning systems, AI-enhanced weather modeling, hybrid physical-AI forecasting frameworks


Citation: [S17]


Speaker: Dr. Krishna Vatsa


Role/Title: Head of Department, National Disaster Management Authority (NDMA)


Areas of Expertise: Disaster management, large-scale data integration, early warning architecture, capacity building for AI-driven risk information


Citation: [S16]


Additional speakers:


– None identified beyond the participants listed above.


Full session report: Comprehensive analysis and detailed insights

The panel opened by framing disaster-risk governance as a defining moment, noting that the frequency, intensity and complexity of hazards are rising worldwide due to climate variability, rapid urbanisation and cascading events [2-5]. At the same time, unprecedented advances in artificial intelligence (AI) are creating new opportunities for resilience, prompting the central question of how India can develop an AI-enabled model for disaster risk reduction (DRR) [7].


Policy reforms that bridge the physical and virtual worlds


His Excellency Dr Avinash Ramtohul, Minister for Information Technology, Communication and Innovation, Republic of Mauritius, argued that disasters now affect both the tangible environment and the digital ecosystem, with cyber-attacks capable of causing havoc comparable to floods or cyclones [26-33]. He proposed institutionalising a “digital twin” that maps physical infrastructure (e.g., fire-service routes, pipe networks) into a virtual counterpart accessible to emergency operators, thereby creating a bridge between the two realms [34-42]. During the discussion he asked whether a “heartbeat indication” or thermal map could locate people inside a building during a fire, highlighting the potential of sensor fusion for life-saving situational awareness. He also likened computer viruses to biological viruses, warning that infected or corrupted warning messages could cause widespread disruption and underscoring the need for robust cybersecurity safeguards [44-46]. The Minister stressed that any AI-driven early-warning system must retain a “human-in-the-loop” to avoid fully automated decisions that could endanger lives [45-47]; this is reflected in Mauritius’ policy to deploy a cell-broadcast early-warning system whose messages must be human-verified before transmission [52-55].


The moderator reinforced this governance angle, noting that institutionalising AI within national resilience architectures must also prevent “alert fatigue” and ensure that alerts remain meaningful and actionable [56-57].


Hybrid AI-physics forecasting for national meteorological services


Beth Woodhams of the UK Met Office described a pragmatic roadmap in which machine-learning weather models are blended with traditional physics-based systems rather than replacing them outright [65-68]. The Met Office plans to increase the proportion of blended output as confidence grows, while co-developing benchmarks and evaluation metrics with partners such as India to build user trust [69-74]. Dr Mrutyunjay Mohapatra outlined the India Meteorological Department’s (IMD) current practice of using AI as a hybrid complement to physical models, acknowledging that AI can improve data quality and forecast skill but is limited by existing compute capacity [189-197]. Together, the speakers highlighted that hybrid AI-physics models are essential and should be introduced incrementally, with joint benchmarking to build trust [65-71][189-197].
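The blending approach described above can be sketched as a simple weighted combination, with the machine-learning weight increased step by step as confidence grows. This is an illustrative sketch only: the session does not specify the Met Office's actual blending method, and the function name and weights below are hypothetical.

```python
# Hypothetical sketch of output blending: combine a physics-based forecast
# with a machine-learning forecast using a confidence weight that can be
# raised incrementally as trust in the ML model builds.

def blend(physics: float, ml: float, ml_weight: float = 0.2) -> float:
    """Linear blend of two forecasts; ml_weight grows as confidence grows."""
    assert 0.0 <= ml_weight <= 1.0
    return (1.0 - ml_weight) * physics + ml_weight * ml

# Toy example: 24h rainfall forecasts (mm) from each system.
print(round(blend(physics=12.0, ml=10.0, ml_weight=0.2), 2))  # 11.6
```

In practice, blending can also happen inside a hybrid model rather than on the outputs; the speakers noted that which approach works best is still an open question.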


Infrastructure bottlenecks and the need for sovereign compute


Som Satsangi (former SVP, Hewlett Packard Enterprise India) highlighted a stark shortfall in India’s high-performance computing (HPC) capability: the nation’s supercomputers total roughly 40 petaflops, whereas leading U.S. systems operate at 1-2 exaflops (1 exaflop ≈ 1,000 petaflops) [92-100]. He argued that delivering real-time AI analytics for geospatial and satellite data will require exaflop-scale facilities, each costing US$400 million to $1 billion and demanding substantial power, water cooling and specialised procurement processes [101-108][111-119][120-127]. Consequently, he called for public-private partnerships to share the financial and technical burden. Satsangi emphasized the need for exaflop-scale central supercomputers, while Mohapatra highlighted the potential of lower-cost, GPU-based “box model” solutions for smaller or resource-constrained settings [96-100][207-208].
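The compute gap can be made concrete with a little arithmetic using the figures quoted in the session. Note that these are the panel's estimates, not official benchmarks, so the numbers below are illustrative:

```python
# Illustrative arithmetic for the compute gap discussed above, using the
# figures quoted by the panel (panel estimates, not official specifications).

PFLOPS_PER_EXAFLOP = 1000  # 1 exaflop = 1,000 petaflops

india_total_pflops = 40  # aggregate Indian supercomputing capacity, per Satsangi
us_leading_pflops = 1.5 * PFLOPS_PER_EXAFLOP  # midpoint of the quoted 1-2 exaflop range

gap_factor = us_leading_pflops / india_total_pflops
print(f"Capacity gap: roughly {gap_factor:.0f}x")

# Cost band quoted for one exaflop-class facility (USD), excluding the
# additional power and water-cooling infrastructure also mentioned.
cost_low, cost_high = 400e6, 1e9
print(f"Estimated facility cost: ${cost_low / 1e6:.0f}M to ${cost_high / 1e9:.0f}B")
```

At a midpoint estimate, the gap is on the order of tens of times, which is the scale motivating the call for public-private financing.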


A five-layer cloud-edge architecture for low-connectivity, high-risk environments


Pankaj Shukla (Head of Customer Engineering, Google Cloud India) presented a layered AI stack comprising (1) infrastructure, (2) an operating system that spans central, regional and edge sites, (3) platform services, (4) multimodal models (e.g., Gemini), and (5) agentic applications [136-141]. This design creates a “living intelligence” that can be federated to rugged, air-gapped edge devices, enabling zero-trust operation and real-time decision support even when connectivity is lost [142-152][153-158]. The architecture directly addresses the power-and-connectivity constraints raised by Satsangi while offering a distributed alternative to a single national supercomputer.
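As a rough illustration, the five-layer stack and its deployment modes might be modelled as follows. The layer names follow the session; the data model, field names, and behaviour are hypothetical sketches, not Google Cloud's actual API:

```python
# Hypothetical model of the five-layer stack described in the session.
# Layer names come from the panel; everything else is illustrative.

from dataclasses import dataclass

LAYERS = [
    "infrastructure",        # (1) compute, storage, networking
    "operating_system",      # (2) spans central, regional, and edge sites
    "platform_services",     # (3) data pipelines, platform APIs
    "multimodal_models",     # (4) e.g. foundation models such as Gemini
    "agentic_applications",  # (5) decision-support apps for responders
]

@dataclass
class Deployment:
    site: str         # "central", "regional", or "edge"
    air_gapped: bool  # zero-trust, disconnected operation

    def available_layers(self) -> list[str]:
        # In this sketch, even an air-gapped edge site runs the full stack
        # locally; only model updates wait for reconnection to the core.
        return list(LAYERS)

edge = Deployment(site="edge", air_gapped=True)
print(edge.available_layers())
```

The key design point is that the stack is federated rather than purely centralised, so a rugged edge device can keep producing actionable insights when the link to the core network is lost.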


Consensus on a sovereign, interoperable data ecosystem


Satsangi, Shukla, and Kumar each underscored the need for a sovereign, interoperable data architecture that can ingest heterogeneous agency data (meteorological, hydrological, asset registers) and feed AI models across central and edge locations [111-113][136-152][155-166]. Kumar illustrated how his startup assembles four layers (modelling, assets, people, and workflows) to transform scattered data into actionable, personalised warnings. He demonstrated an AI-driven nowcasting pipeline that ingests satellite and radar data at 30-minute intervals for roughly 1 million water bodies and, in a recent cyclone event, delivered real-time alerts for about 5,000 dams [155-166]. He also described AI-driven extraction of structured hazard information from unstructured news, a capability that can feed digital public infrastructure (DPI) and digital public goods (DPG) and support parametric insurance products [169-172].
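Schematically, the nowcasting-and-alert flow described above could look like the sketch below. The 30-minute cadence comes from the session; the function, threshold, and toy numbers are hypothetical and stand in for whatever inflow models and dam-safety criteria such a platform actually uses:

```python
# Hedged sketch of a nowcast-to-alert step: given satellite/radar-derived
# inflow estimates refreshed every 30 minutes, flag dams whose predicted
# inflow crosses a fraction of their capacity. All names and thresholds
# are hypothetical.

INGEST_INTERVAL_MIN = 30  # data cadence mentioned in the session

def alert_candidates(inflow_by_dam: dict[str, float],
                     capacity_by_dam: dict[str, float],
                     threshold: float = 0.9) -> list[str]:
    """Return dams whose nowcast inflow exceeds `threshold` of capacity."""
    return [dam for dam, inflow in inflow_by_dam.items()
            if inflow >= threshold * capacity_by_dam.get(dam, float("inf"))]

# One toy cycle with made-up numbers:
inflows = {"dam_a": 95.0, "dam_b": 40.0}     # nowcast inflow (arbitrary units)
capacities = {"dam_a": 100.0, "dam_b": 100.0}
print(alert_candidates(inflows, capacities))  # ['dam_a']
```

Run at population scale, each cycle would evaluate all monitored dams and push alerts through the workflow layer, which is how a single pipeline can serve thousands of downstream assets.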


Expanding observational networks and the data-processing bottleneck


Dr Krishna Vatsa, head of India’s National Disaster Management Authority (NDMA), outlined an ambitious expansion of the country’s sensor infrastructure: automated weather stations in every village, a quadrupling of seismometers and a large increase in landslide instrumentation [220-229]. While data collection is set to surge, Vatsa identified a critical gap in processing capacity and an unclear roadmap for linking central data centres with local early-warning agencies [230-238][239-247]. He warned that without a coherent integration strategy, investments in observation networks may not translate into timely, citizen-focused alerts.


Key take-aways


The panel broadly agreed on the importance of hybrid AI-physics models, human oversight, interoperable sovereign data architectures, and public-private partnerships for financing compute infrastructure [65-71][45-47][111-113][136-152][155-166][230-238].


Unresolved issues and future research directions


Challenges that remain open include: financing mechanisms and procurement policies for exaflop-scale supercomputers and their associated energy and water-cooling needs [106-108][120-127]; legal and governance frameworks that balance data sovereignty with collaborative benchmarking [71-73]; standardised metrics for AI explainability and accountability when life-saving decisions are automated [72-74][45-46]; strategies to prevent misinformation and ensure authenticity of AI-generated warnings, especially in low-connectivity settings [153-158]; and detailed operational plans for federated edge deployments that maintain zero-trust security during network outages [142-152][153-158].


Overall assessment


The panel collectively affirmed that AI can substantially enhance disaster resilience, but realising this potential requires (i) institutional reforms that embed AI within national resilience architectures while preserving human oversight, (ii) hybrid modelling that blends AI with physics-based forecasts, (iii) massive yet sustainable compute resources-whether centralised supercomputers or distributed GPU-based box models-and (iv) a sovereign, interoperable data ecosystem that links observation networks, data centres and frontline agencies. By aligning policy, infrastructure and private-sector innovation, India and other resource-constrained nations can move toward scalable, inclusive AI-enabled early-warning systems that are robust to both physical hazards and cyber-threats [30-33][65-71][92-100][136-152][155-166][220-229].


Overall, the discussion highlighted that realizing AI-enabled DRR at population scale will require coordinated policy reforms, scalable compute infrastructure (whether centralised or distributed), robust data ecosystems, and sustained public-private collaboration [1-250].


Session transcript: Complete transcript of the session
Moderator

defining moment for disaster risk governance. Around the world, the frequency, intensity, and complexity of disasters are increasing. Climate variability is compounding existing vulnerabilities, urbanization is concentrating risk, and cascading hazards are challenging traditional response models. At the same time, we are witnessing unprecedented advances in AI. So, at this point in time, how does India bring or develop a model with AI for resilience? We believe that the next frontier in DRR is not better algorithms alone; it is institutionalizing AI within national resilience architecture, moving from pilot projects to national and global resilience systems. Thank you very much. Before we start the discussion, let me invite and call on the stage for the panel discussion His Excellency Dr.

Avinash Ramtohul, the Minister for Information Technology, Communication and Innovation from the Republic of Mauritius. Welcome, sir. He is a key contributor to national strategies for AI in resilient infrastructure and South-South cooperation. I would like to invite Ms. Beth Woodhams, Senior Manager from the UK Met Office. She is a specialist in disaster risk reduction via forecasting innovations and AI explorations for prediction. Welcome. I would like to invite Mr. Som Satsangi, former SVP and Managing Director for Hewlett Packard Enterprise India, with industry insights on AI deployment in geospatial and climate analytics. Welcome, Mr. Som. I would like to now call upon Mr. Nikhilesh Kumar, CEO and co-founder of Vassar Labs. He is an innovator in leveraging AI for DRR.

Welcome, Nikhilesh. And lastly, Mr. Pankaj Shukla, Head of Customer Engineering, Google Cloud India, for practical AI applications, hazard mapping, predictive analytics, and EWS scale-up. Thank you. We will focus on AI integration during this panel discussion. So my question first is to the Minister for IT, Communication and Innovation, Republic of Mauritius. Minister, the small island developing states face existential climatic threats. From your perspective, what policy reforms are required to institutionalize AI-enabled early warning and alerting systems within national governance frameworks? And how can countries with limited resources ensure sustainability in such ventures?

Avinash Ramtohul

Thank you, and good morning, everybody. Thank you for the opportunity to be here amongst you. First of all, I would like to make a couple of points before I get into the actual response. Today, just like we have the physical world, we have a virtual world as well in which we all live. And that virtual world is so much bigger than the physical world we can see here in front of us at the moment. And just like disaster can strike the physical world, and that is the scope of the discussion, disaster can also strike the virtual world. And as we grow in dependency on the virtual world, on our digital systems, we should be well aware that disaster is not just the flood, the cyclone, the drought.

Disaster can also be the cybersecurity attacks that can actually create havoc in our lives. Therefore, it is very important that the scope of the discussions, when we look at disaster, be also extended to the virtual world and cybersecurity attacks. Now, having said this, in terms of policy reform, it is very important that we also create this bridge between the physical world and the virtual world. And I will explain myself. Just imagine, as we speak here, there is a big fire that breaks out in one organization. Because it broke out, there are, you know, automated connections that go to the fire services and the medical services; they will proactively now start driving to this place. But when they come to this place, how would they know where the people are? Their main objective is to save the lives of the people; the material is secondary. How would they know where the people are? Do they have a plan, a structural plan of this space? Do they know where the pipes cross? Now I’m talking about a digital twin. It is really important that we create that digital twin, which will be the bridge between the physical world and the virtual world, and the architectural map of that digital twin should be accessible to a certain set of operators: the medical and the fire services. Now, this is part of the reform that we are looking at. And in a small administration, it becomes easier to do it, as opposed to a huge administration like India.

Now, there’s one more thing in there. Let’s say we also have the structural plan; how do we know where the people are? Can we have heartbeat indication? Can we have the thermal map of the place, so that we know wherever there’s 37, 38, 39 degrees (well, 37, 38 is better), do we know where the people are located, so that when the fire services come, they go straight to that spot? So this is very important. And another reform that is important that we be aware of is that when there is some kind of a pandemic which is contagious, there is human-to-human virus transfer. Now, we are all very excited. We are very excited about artificial intelligence.

But we are also aware that there is this possibility of viruses infecting systems, right? And just like viruses infect people, viruses also infect systems, and viruses get contagious in computers as well; we all know that. Therefore, we need to also have mechanisms to protect, because if we have a message that goes through an early warning system to people, this already creates an alert in the minds of people; the adrenaline surge starts already. But if that message is infected, it can create a lot of disruption in our daily lives, and this we need to be very careful of. Therefore, in terms of reform: the decision-making process, and I think someone mentioned this in the previous panel, is automated now. And 100% automation in the field of AI, where it concerns the lives of people, can be dangerous.

Therefore, human in the loop or human on the loop is critical in these kinds of environments. And this is also part of what we are looking at in Mauritius. Yes, it’s true that as a small island developing state, we call it SIDS, we have our own set of flash floods that can actually occur. Within a couple of hours, we can have flash floods and we can see cars floating around already. And this has happened in the country. And we don’t want that to happen again. Therefore, there are early warning systems that we are deploying, like cell broadcast systems, which we have planned to deploy. Now, again, the message that goes into that system should be a message that is human verified.

That is, decisions like these that are sensitive, highly sensitive, cannot be 100% automated. That’s part of our policy as well: we want to ensure that humans are involved, because machines cannot decide for humans; humans decide for machines. This needs to be treated as critical and given the attention that it deserves. And I believe our Prime Minister Modi ji also mentioned in his intervention yesterday that there is a great necessity to ensure that humans remain part of the decision-making process in the application of AI for disaster management. So these are a few points I wanted to mention. Thank you.

Moderator

insights into developing resilient governance frameworks that are scalable across nations. That is the way to go: systems resilient even to cyber attacks, and alerting that is sustainable and meaningful without causing alert fatigue, which is vital for a robust system to be effective across all disasters. We now come to our second panelist, Ms. Woodhams. My question to you: national meteorological agencies play a crucial role in operational forecasting and early warning delivery. From the perspective of the UK Met Office, how can AI complement physical weather and climate models to improve forecast lead time and impact-based warnings, and gain public trust?

And what institutional partnerships are necessary to ensure that AI-driven meteorological insights translate into actionable decisions at national and local levels, with special emphasis on low-resource countries?

Beth Woodhams

Hello? Yeah. Right. Thank you for your question; it's a real honour to be part of this panel. At the Met Office we are currently developing machine learning weather models, and we absolutely do not see these as a replacement for our physical models. Our plan over the coming years is to implement these models step by step, through blending. This could mean hybrid models, physics-based and machine learning-based, or it could mean blending the outputs of both kinds of model after they have run. The truth is we don't yet know what the answer is, but in order to build trust among the users of our models and the customers of our data, we are certainly not going to make a complete shift.

We are going to do this step by step, increasing the blending as we become more confident in the data. It is very clear from this conference that companies in the private sector are developing these models; in the public sector, of course, we are developing them too, and sovereign capability remains really important. But for public sectors we really need co-development. At the Met Office we have a long history of co-developing with partners like India, through WCSSP India and through WISER Asia-Pacific. Through these partnerships we have co-developed physics-based models, and we really want to do the same with machine learning models. At the Met Office we are also standardising our benchmarking and evaluation: when we compare machine learning and physics-based models, we want to be sure we are measuring the same thing.

There are a lot of metrics that show machine learning models doing well, but are these the metrics that matter most to users? Therefore, not only do we want to co-develop the actual models with partners, we want to co-develop the benchmarking and the tests that we run on these models. Thank you.
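The post-run blending Woodhams describes can be pictured as a weighted average of the two models' output fields, with the ML weight raised step by step as confidence grows. The function below is an assumption-laden toy, not the Met Office's actual method:

```python
def blend_forecasts(physics, ml, ml_weight):
    """Blend the post-run outputs of a physics-based and an ML weather model.

    physics, ml : sequences of gridpoint values (same length).
    ml_weight   : float in [0, 1]; raised gradually as trust in the ML
                  model grows -- a step-by-step shift, never a hard swap.
    """
    if not 0.0 <= ml_weight <= 1.0:
        raise ValueError("ml_weight must lie in [0, 1]")
    if len(physics) != len(ml):
        raise ValueError("forecast fields must have the same shape")
    # Convex combination at every gridpoint.
    return [(1.0 - ml_weight) * p + ml_weight * m for p, m in zip(physics, ml)]
```

With `ml_weight=0.0` users see the familiar physics output unchanged; nudging the weight upward over successive releases is one simple way to realise the "gradual blending" strategy described above.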

Moderator

Thank you, Beth, for your insight into how national meteorological agencies plan to use AI in their systems. We now move towards the technologies: how do we really create resilient systems for forecasting? My first question is to Mr. Som Satsangi. Private sector innovation has advanced rapidly, so the first and foremost question that comes to my mind is how technology providers can design AI systems that are interoperable with sovereign data architectures, because that is the crucial issue to be cracked, and compatible with diverse governance ecosystems. For a country like India, with a federal government and state governments, this is a very vital nut to crack from the technology point of view. And similarly, what standards of explainability are necessary when AI informs life-saving decisions?

Som Satsangi

Thanks, Manish. Really a great question, and in this room I will probably call out something which is very, very important, because just as I walked in I heard Mr. Martin speak on a couple of points that are so important and critical for a country like India. He spoke about the government, about procurement policies, and about scale. All three things are important and critical when we look from India's standpoint, with 1.2 billion-plus citizens. I have been the Managing Director of Hewlett Packard Enterprise India for the last nine years, and I have been involved in almost all the large critical infrastructure projects, whether UIDAI or the various transaction and COVID-era applications.

And we know we have developed and delivered all these things at this scale. But when we look at the most important aspect, human life amid climate change and the disasters happening across the world and along the length and breadth of India, including the coastal side, are we ready? I don't think we are ready, and I'll give you some pointers on why it is not happening, to Mr. Martin's point. India announced a very ambitious National Supercomputing Mission back in 2015, saying it would invest 4,500 crore rupees to develop world-class supercomputers.

But in the last 10 years, what we have developed is some 37 supercomputers with just 40 petaflops of capacity. Is that sufficient? Now we are planning to add another 50 petaflops. But look at the global level at the kind of infrastructure needed if we have to manage these alert and warning systems in real time, and I'll give you one or two examples from the United States. The top systems developed and deployed for this work have a capacity of almost one to two exaflops, and one exaflop is close to a thousand petaflops. The whole of India does not have even 100 petaflops of compute today. In the US there are multiple systems providing this real-time information: El Capitan, at 1.8 exaflops; the Frontier system deployed at Oak Ridge National Laboratory, at 1.3 exaflops; and Aurora, recently deployed at Argonne National Laboratory, with one exaflop of power and capability. These are the kinds of systems deployed so that the power of AI can be harnessed in a real-time environment, whether on geospatial data, satellite data, or any kind of live feed, analysing it in real time and providing alerts well ahead of the event. Somehow we are not able to provide this. So if we want early warning systems in India, I think our main focus needs to be on building the core infrastructure that meets this requirement.
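The scale gap Satsangi quotes is easy to check with back-of-the-envelope arithmetic. The figures below are the approximate numbers stated in the talk, used purely for illustration:

```python
EXAFLOP_IN_PETAFLOPS = 1000  # 1 exaflop = 1,000 petaflops

# Capacities as quoted in the talk (approximate, rounded)
india_aggregate_pf = 40                     # ~37 NSM systems combined
el_capitan_pf = 1.8 * EXAFLOP_IN_PETAFLOPS  # 1,800 petaflops
frontier_pf = 1.3 * EXAFLOP_IN_PETAFLOPS    # 1,300 petaflops

# A single US system versus India's quoted aggregate
print(el_capitan_pf / india_aggregate_pf)   # 45.0
print(frontier_pf / india_aggregate_pf)     # 32.5
```

On these quoted numbers, one system like El Capitan alone carries roughly 45 times India's stated aggregate, which is the substance of the speaker's "are we ready?" argument.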

Over the last couple of days, in every discussion with global CIOs and CEOs, this is what has come out: India probably needs the core infrastructure which we have not yet developed. Now, we might say we are putting 10,000 GPUs into AI and so on, but that is being distributed across a large number of tech and SMB players who are developing applications. What the government needs, because this is sovereign data, is to buy this kind of infrastructure itself. But each such system will cost anything between 400 to 500 million dollars and a billion dollars, and the government may not be able to spend that kind of money. So that is where public-private partnership becomes very, very critical.

So my request is that the department should look at how large global institutions and technology partners can bring in the core infrastructure and technology, because today technology is not the barrier; the barriers are the infrastructure, the scale, the procurement process, some of these policies, and how the various data will be integrated. If we can address these things at the scale India has achieved on the DPI side, where UIDAI, the best example at the global level, is used by more than 800 to 900 million citizens in the country, we can deliver this. We have the capability, and with the AI transformation that is happening, our Honourable Prime Minister has already said that India is going to leapfrog and become a global leader in the AI space, with all this technology embedded alongside the capability India already has.

What is required is the infrastructure, but infrastructure comes at a huge cost. And once you get the infrastructure, another element comes in: power, energy, and water. That is going to be very critical; the infrastructure is of no use if you don't have the power. So we need power that can run these kinds of systems, and alternative power sources are going to be very, very critical. These will all be water-cooled systems, because they will have hundreds of thousands of GPUs and CPUs running together; they will require huge power and huge water capacity. We need to have that.

So India needs to start thinking along those lines if we have to protect our people and deliver the right early warning alerts to save the lives of millions of citizens in the country. Thank you.

Moderator

Thank you. And definitely DRR also offers an opportunity for us to ponder. Taking forward from Mr. Som, I'll go to Mr. Pankaj Shukla ji, the head of customer engineering at Google Cloud AI. Cloud computing and AI platforms enable real-time analytics at scale. What are the critical infrastructure investments essential to support AI deployment in low-connectivity and high-risk environments? That is very vital, looking at the geography of our nation. And how can AI-driven dissemination ensure last-mile inclusion while mitigating misinformation risk? Your insights on that.

Pankaj Shukla

Good afternoon, everyone. Irrespective of the technology, when we talk of disaster management and resilience, what we are essentially trying to do is turn the chaotic reality on the ground into actionable intelligence. For example, data sits fragmented across multiple ministries, social media, and various other places. All of that first needs to be brought to one place, or at least you should have the ability to bring all of that data together and turn it into living intelligence. Once the data is there, structured as well as unstructured, our AI models today, which are multimodal, have the ability to make sense of completely chaotic, noisy data and turn it into real intelligence at unimaginable speed. That is essentially what AI is all about. So when it comes to the real implementation of this entire architecture, and the panelists spoke about multiple aspects of how we can use AI, how do we actually implement it on the ground? If we talk of AI broadly, it sits at five layers.

One is the infrastructure layer. Second is the operating system layer that runs on top of the infrastructure: I am not talking about just servers and data centres, but an operating system layer that scales from a central location to edge locations to multiple regional locations. On top of that are the platform services required to build AI applications and make use of the right models. Then you have the models themselves, with the multimodal ability of models like Gemini that the hyperscalers provide, and, for example under the IndiaAI mission, many Indian providers are building models; a previous speaker spoke about many of the models other companies are building as well.

So the question is: how do we make use of a diverse set of models in a dynamic manner, use agentic AI on top of that to build applications, and turn that into real action which can be disseminated where we want it, both proactively, during the response, and after the response? How do we implement it? Implementation will require a framework or architecture with a central living intelligence over all the data, on which you experiment, pre-train and tune different types of models, and build applications. But the real application of that is going to happen at a place which might get completely disconnected from the centre.

So you should be able to build all of these AI applications and make use of that data, maintaining a single source of truth centrally, but with the ability to send that intelligence back to a tactical location. Today that is entirely possible. Organizations such as Google and many others are working to bring all the goodness of the hyperscaler cloud, the entire infrastructure and managed-services layer as well as the AI tooling, to on-premises environments, with the ability to run it completely disconnected, air-gapped, in a zero-trust manner, so that you have the security of your data and your applications. You should also be able to connect the edge locations to the centre in a federated manner, but if required during a disaster, you should be able to carry a rugged device that holds a small subset of the central intelligence, with all the necessary models, to take action on the ground: finding out where your assets are, where the maximum impact has happened, and how to send information to various places.

All of those things are absolutely possible today. And while a huge amount of infrastructure is indeed required to train and build models, that is happening across the country; we should have the ability to bring all the good models, the right model for the right task, run them on-premises on a smaller set of infrastructure, and carve out a smaller subset that can run in a tactical location with very limited infrastructure and compute. That is what we…
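One way to picture the disconnected-edge pattern Shukla outlines is a node that keeps inferring locally while offline and federates its results back to the centre when a link returns. Every name here is a hypothetical sketch for illustration, not a Google Cloud API:

```python
class EdgeNode:
    """Toy sketch of a tactical edge node: it carries a small local model,
    keeps working while fully disconnected, and syncs its queued
    observations back to the central store once reconnected."""

    def __init__(self, model):
        self.model = model    # any callable standing in for a small local model
        self.pending = []     # (sample, action) pairs queued while offline
        self.online = False

    def observe(self, sample):
        # Local inference: no network is needed to act on the ground.
        action = self.model(sample)
        self.pending.append((sample, action))
        return action

    def sync(self, central_store):
        # Federation step: nothing leaves the node until a link exists.
        if not self.online:
            return 0
        sent = len(self.pending)
        central_store.extend(self.pending)
        self.pending.clear()
        return sent
```

The key design point mirrored from the talk is that `observe` never touches the network, so the node degrades gracefully when the disaster severs connectivity.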

Moderator

Thank you, Pankaj, for giving Google's insight into building rugged systems and deploying AI solutions at scale in low-connectivity, high-risk environments. Another contribution to DRR, particularly AI deployment in DRR, can come from startups, and we have Mr. Nikhilesh Kumar, CEO and founder of Vassar Labs. Nikhilesh, can you enlighten us on how startups can contribute to developing a DPG at population scale for DRR, particularly for countries like India?

Nikhilesh Kumar

The modeling layer transforms the data into various insights and hazards. The asset and people layer covers what gets impacted, where we need to know, in a personalised and precise way, exactly which road and which houses if a flood is coming, or which area if a landslide is coming. And the fourth layer, which is most important, is the workflows that translate insights into actions. This is where we see a role for DPI and DPG, because these four layers are not built by one actor; they draw on data scattered across various agencies. We need DPIs and DPGs built across this data, right from the institutions bringing meteorological data, to those bringing water and other asset-related data, to those creating different layers, such as, in India's case, the Survey of India on earthquakes and various others.

We also see AI playing a role today, and I will give an example. As extreme events intensify, one of the first pressures is in the water sector, where we see extreme floods and sudden surges of water into large dams, and dams are among the most vulnerable assets affecting all of us. For the large dams we perhaps have a good handle on control during a disaster, but we have a big number of dams in the country that are unregulated, scattered in large numbers, with no forecast available for them.

So how do we churn out, in near real time, both hourly and at daily scale, forecasts for close to 1 million water bodies, any of which can become vulnerable at any time? One solution we recently saw bridged this data gap by leveraging AI on real-time satellite data at 30-minute intervals and radar data, assets currently available from IMD, and translating that into a nowcast. That nowcast layer was then translated through hydraulics to each of the dams, and in the cyclone month close to 5,000 dams were covered in real time. In such use cases, data is connected and available in real time in an interoperable format, and there are players who can translate this data into actions.

I see contributions where such platforms are brought to national and state scale, and these use cases are packaged and made available to different recipient departments to translate into actions. And one more thing I would like to add, sir, taking this forum as an opportunity: risk assessment and risk reduction both have a very big gap when it comes to data, especially across events such as earthquakes and other types of disasters where historic parametric measurements have not been available, and knowing the location-specific frequency of these hazards has been lacking because there is no database. AI can play a very good role here, because the information is lying in lots of news reports containing unstructured information on the location, on the hazard that struck, and on the damages.

AI can uncover this information and create structured datasets, hazard by hazard, which will also feed into various DPGs and further unlock the insurance sector, which will benefit from knowing the location-specific intensity and frequency of risks. I will close with that, sir.
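As a toy illustration of the final step in the nowcast-to-action chain Kumar describes, one could turn a per-dam rainfall nowcast into an alert list. The threshold, dam names, and values below are invented for the example; a real system would use per-catchment hydraulics, not a flat cutoff:

```python
def flag_vulnerable_dams(nowcast_mm_by_dam, threshold_mm=50.0):
    """Return, sorted, the dams whose nowcast catchment rainfall
    (mm over the coming interval) meets or exceeds the alert threshold."""
    return sorted(dam for dam, mm in nowcast_mm_by_dam.items()
                  if mm >= threshold_mm)

# Example with invented values for three unregulated dams
nowcast = {"dam_A": 72.5, "dam_B": 12.0, "dam_C": 55.1}
print(flag_vulnerable_dams(nowcast))  # ['dam_A', 'dam_C']
```

The point of the sketch is the workflow layer: once an interoperable nowcast exists per water body, turning it into a dispatchable action list is a few lines, which is why the data gap, not the code, is the bottleneck the speaker emphasises.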

Moderator

Thank you, Nikhilesh. You have aptly summarized that startups in this sector can play a very vital role, particularly in developing rugged AI systems for India at population scale. We have heard the panelists from the Indian perspective, since we are running large systems, and we would also like all the members to have the benefit of insights into how the national systems function in India and how technology is being deployed at scale for DRR. First, I would like an insight from Dr. Mrutyunjay Mohapatra, DG of IMD, who can elaborate on how robust AI-based systems are being deployed at population scale in the Indian context.

Dr. Mohapatra, please.

Dr. Mrutyunjay Mohapatra

Namaskar. Good morning to all of you. Respected Dr. Kamal Kishore sir, Adas Nand sir, our Krishnamurthy sir, distinguished panelists, delegates, friends and colleagues. At the outset, I congratulate NDMA for organizing this session, which has given a lot of food for thought to each of us represented here. I'll start with the initiative taken up by the UN and the WMO: a clarion call was given in 2022 for Early Warnings for All. And early warnings for all means all countries, all people, all sectors, all strata of society. When that call came, actually less than 50% of countries had early warning systems.

Now the number is increasing, but the time is short: by 2027 we have to achieve 100%. Early Warnings for All is a long-term goal, and if we review now, we find a huge jump in technology over the last five years, and AI is one such technology helping to extend early warnings to all. Looking at the various components: first, you need risk knowledge at each and every point, as our friend Nikhilesh told us. It is not possible, with the existing network of any country, to have risk knowledge at every location. But at the same time there is unstructured data, as was said, which can be utilised to create that knowledge, to create the risk, hazard and vulnerability assessment: historical knowledge that can be used in real time when predicting any severe weather event. The next component is the early warning itself.

Yes, on the early warning side, you will see there has also been a huge jump in recent years with the inclusion of many AI-based models. Every large, established NMHS is utilising AI, and IMD is also utilising AI in taking decisions with respect to early warnings. At the same time, I will tell you, AI has come up as a hybrid model alongside the physical models. We cannot get away from the physical models, because they provide the physical understanding, the reasoning, and hence human knowledge gets into the picture with the help of these physical models. Therefore, AI has to be suitably connected with the physical models.

That is what everyone is doing, from the European Centre to the Indian centres, and there have been many collaborations and integrations towards that. After that, if you look at the basic backbone, the modeling, it starts from the basic assumption that weather forecasting is an initial value problem. You cannot give a weather forecast if you do not know the initial state of the Earth, ocean, and atmosphere. That is already defined in the physical modeling system: unless you improve the initial data, with all types of observational tools and techniques, you cannot improve the weather forecast. Therefore, collecting or creating data with the help of AI will also go a long way in improving not only the AI models, but the physical and hybrid models as well.

Once you have good data, its quality can be improved with the help of AI. I'll tell you: we get a lot of data from satellites, but only five percent of it is usable; the rest we cannot use because of quality issues. Then there is the question of quantity: you cannot accommodate all types of data in the physical modeling system because, as our friends have been saying, you need infrastructure, and we do not have the computational infrastructure to utilise 100% of the satellite data. So yes, it is true that in India we do not have sufficient computing infrastructure; we now have at least 28 petaflops in IMD, and outside, of course, the National Supercomputing Mission has come up, but that is not sufficient, and therefore there is scope for public engagement in augmenting the computational and other digital infrastructure. At the same time, AI opens another door: box models have now come up. A poor country, a small island nation, cannot venture, cannot even dream of, a high-performance computing system, but it can go for an AI system. A box model has come up that can be given to a small island nation, and with the help of a few GPU nodes they can produce a forecast. That will grow gradually, and we will have affordable early warning with the help of these GPU-based, AI-driven, data-driven models.

After that comes the forecast. We have already come to an AI consensus, but the physical consensus plus the AI consensus together give you the final forecast. Then finally you go to the sectoral applications. There is huge scope here, with the improvement of economic and societal conditions in every country, to improve decision-making for each and every sector, and there AI/ML can play a role. So I urge all the industries, academia, R&D, and think tanks to collaborate with the NMHSs, especially with the India Meteorological Department and other organizations here, for very authentic, specific, and judicious utilisation of AI with the limited, reasonable resources available in the country.

So thank you very much.

Moderator

Thank you, Dr. Mohapatra, for your valuable insight. Now, since NDMA is the apex national body that will integrate all these varied systems into rugged AI systems, I would like the entire audience to get the benefit of NDMA's vision from Member and HOD Dr. Krishna Vatsa. Sir, can you please elaborate on how NDMA intends to take this forward to create a sustainable, low-cost, at-scale model for the country?

Dr. Krishna Vatsa

Thank you very much for giving me this platform. I would like to mention that we already have a huge amount of data in relation to almost all the hazards. Look at the earthquakes: we record all the micro-earthquakes for the entire country, and even the data on earthquakes below magnitude 3 can give us a very good indication of the kind of earthquakes we can experience in the Himalayas and other regions. The availability of data is going to increase exponentially as we invest in observational networks. Almost every mitigation program we run includes a significant early warning component. In the next five years or so, every village in India will have an automated weather station.

We will have a large number of instruments for measuring landslides, and we are going to at least quadruple the seismometers and strong-motion accelerographs. So we will be investing a huge amount of money in improving our observational networks across the hazards, which means we will have access to a still larger amount of data. What is important is that we need the capacity to process that data, apply the AI models, and improve the precision of early warning. That is the area where we are struggling right now. It is one thing to set up the observational network; it is another to collect the data, process it, and generate information that can be used, especially when it comes to informing common citizens.

Serving scientists is one thing; you are getting a huge amount of data, but we are not doing this for the scientists. We are doing it for the people who get affected by disasters. So how do we go about it? The roadmap is not sufficiently clear, and I keep talking to all kinds of people. Somebody will come and say, set up a huge data centre. Okay, that's fine, great. But people also say that if you are setting up a huge data centre and you are not really empowering the early warning agencies, then how are you going to justify the investment in data centres? The data comes to individual agencies. How do the data centre and the individual early warning agencies interact so that we have a good model available?

And we don't have unlimited resources. So the point is, this is where we need more clarity: how do we use our existing networks to improve the precision of early warning and risk information, through a gradual, incremental way of building capacities, which of course includes the data centre and should include improving our connection with the LLM models? It is also very, very important that we find a way of improving the overall architecture. That is one area where we are struggling and where we need some guidance. Thank you very much.

Moderator

Thank you, sir. I think we are coming to the close of the discussion. I'll request Dr. Krishna Vatsa, sir, to please present the mementos to our panelists. I would then request all our dignitaries in the front row, once the mementos are done, to stay for a quick photograph before we vacate the room. I'll also request the leadership from the states of Tamil Nadu, Andhra Pradesh, and Telangana to please come to the front for the photograph. We are very happy to note that most of the states are also represented through their State Disaster Management Authorities. Thank you very much.

Factual Notes — Claims verified against the Diplo knowledge base (6)
Confirmed (high)

“The frequency, intensity and complexity of hazards are rising worldwide due to climate variability, rapid urbanisation and cascading events.”

The moderator’s remarks in the knowledge base explicitly note the global increase in disaster frequency, intensity and complexity, driven by climate variability, urbanisation and cascading hazards [S2].

Confirmed (high)

“A “heartbeat indication” or thermal map could locate people inside a building during a fire, highlighting the potential of sensor‑fusion for life‑saving situational awareness.”

The same question about using heartbeat signals or thermal mapping to determine occupants’ locations in emergencies is recorded in the knowledge base [S1].

Confirmed (medium)

“Institutionalising a “digital twin” that maps physical infrastructure into a virtual counterpart accessible to emergency operators creates a bridge between the tangible and digital realms.”

The knowledge base discusses the governance of digital twins and their role in linking physical assets with virtual platforms for coordinated response [S87].

Confirmed (high)

“Any AI‑driven early‑warning system must retain a “human‑in‑the‑loop”; Mauritius’ cell‑broadcast early‑warning system requires human verification before messages are transmitted.”

International guidance emphasises that humans must retain control over AI-enabled decision-making, aligning with Mauritius’ policy of human-verified cell-broadcast alerts [S92].

Additional Context (medium)

“Computer viruses are likened to biological viruses, and infected warning messages could cause disruption.”

The knowledge base provides background on the evolution and threat of computer viruses, underscoring their potential to disrupt digital communications [S90].

Additional Context (medium)

“Malicious code could corrupt warning messages, underscoring the need for robust cybersecurity safeguards.”

Cybersecurity concerns, including data breaches and malicious code affecting critical systems, are highlighted in the knowledge base, supporting the call for strong safeguards [S94].

External Sources (95)
S1
National Disaster Management Authority — – Pankaj Shukla- Nikhilesh Kumar – Pankaj Shukla- Nikhilesh Kumar- Dr. Krishna Vatsa
S2
National Disaster Management Authority — Som Satsangi, former SVP and Managing Director for Hewlett Packard Enterprise India, provided a stark reality check rega…
S3
https://dig.watch/event/india-ai-impact-summit-2026/national-disaster-management-authority — Welcome, sir. He is a key contributor to national strategies for AI in resilient infrastructure and South -South coopera…
S4
National Disaster Management Authority — – Som Satsangi- Dr. Krishna Vatsa
S5
National Disaster Management Authority — defining moment for disaster risk governance. Around the world, the frequency, intensity, and complexity of disasters ar…
S6
https://dig.watch/event/india-ai-impact-summit-2026/national-disaster-management-authority — Welcome, sir. He is a key contributor to national strategies for AI in resilient infrastructure and South -South coopera…
S7
National Disaster Management Authority — – Pankaj Shukla – Nikhilesh Kumar – Dr. Krishna Vatsa
S8
National Disaster Management Authority — Beth Woodhams from the UK Met Office explained their approach of gradually blending machine learning models with traditi…
S9
National Disaster Management Authority — Speakers:Beth Woodhams, Dr. Mrutyunjay Mohapatra
S10
Keynote-Olivier Blum — -Moderator: Role/Title: Conference Moderator; Area of Expertise: Not mentioned -Mr. Schneider: Role/Title: Not mentione…
S11
Day 0 Event #250 Building Trust and Combatting Fraud in the Internet Ecosystem — – **Frode Sørensen** – Role/Title: Online moderator, colleague of Johannes Vallesverd, Area of Expertise: Online session…
S12
Conversation: 02 — -Moderator: Role/Title: Event moderator; Area of expertise: Not specified
S13
https://dig.watch/event/india-ai-impact-summit-2026/national-disaster-management-authority — defining moment for disaster risk governance. Around the world, the frequency, intensity, and complexity of disasters ar…
S14
National Disaster Management Authority — Minister Avinash Ramtohul from Mauritius provided a unique perspective by fundamentally expanding the conceptual framewo…
S15
National Disaster Management Authority — – Avinash Ramtohul- Dr. Krishna Vatsa
S16
National Disaster Management Authority — -Dr. Krishna Vatsa- Member and Head of Department, National Disaster Management Authority (NDMA); national disaster mana…
S17
National Disaster Management Authority — -Dr. Mrutyunjay Mohapatra- Director General, India Meteorological Department (IMD); expertise in meteorological services…
S18
National Disaster Management Authority — Dr. Mrutyunjay Mohapatra from India Meteorological Department provided authoritative insights into national-scale implem…
S19
https://app.faicon.ai/ai-impact-summit-2026/national-disaster-management-authority — Disaster can also be the cybersecurity attacks that can actually create havoc in our lives. Therefore, it is very import…
S20
Conversational AI in low income & resource settings | IGF 2023 — Additionally, the potential of AI and chatbots in low-resource settings is acknowledged. The analysis suggests that thes…
S21
AI Without the Cost Rethinking Intelligence for a Constrained World — Summary:While both speakers critique current GPU-centric approaches, they differ on solutions. Bernie advocates for movi…
S22
Driving Indias AI Future Growth Innovation and Impact — Yeah, actually, I think across many other countries that we have seen, India has got a much comprehensive approach to th…
S23
India’s AI Future Sovereign Infrastructure and Innovation at Scale — When you look at sovereign and I think Minister of Electronics and IIT Vaishnavji, he was mentioning. The. Mr. talking t…
S24
From KW to GW Scaling the Infrastructure of the Global AI Economy — I think whether we like it or not, the speed of change in the semiconductor IT AI world is very different than the speed…
S25
Inclusive AI_ Why Linguistic Diversity Matters — Evidence:Singh outlined the five layers: energy, data centers/infrastructure, chips, models, and applications. He noted …
S26
Types of diplomacy — Metaverse diplomacy is the practice of engaging in diplomatic activities and negotiations in virtual worlds, such as video games, virtual r…
S27
Pre 9: Discussion on the outcomes of the Global Multistakeholder High Level Conference on Governance of Web 4.0 and Virtual Worlds — Ruta Gabalina: Please, Ruta, the floor is yours. Thank you for that introduction and a very good pronunciation. My surna…
S28
Catalyzing Global Investment in AI for Health_ WHO Strategic Roundtable — Maintain humans in the loop during the transition period while behavior change occurs gradually
S29
Leaders TalkX: When Policy Meets Progress: Shaping a Fit for Future Digital World — At the heart of enabling this digital upheaval, the speaker points out the criticality of a facilitative regulatory envi…
S30
AI may reshape weather and climate modelling — The UK’s Met Office has laid out a strategic plan for integrating AI, specifically machine learning (ML), with traditional…
S31
Building the Next Wave of AI_ Responsible Frameworks & Standards — What is interesting is India is uniquely positioned in this global AI discourse. Most global AI frameworks are designed …
S32
WS #49 Benefit everyone from digital tech equally & inclusively — – The need for national platforms to coordinate disaster risk reduction efforts. 3. Information Governance for Disaster…
S33
Secure Finance Risk-Based AI Policy for the Banking Sector — It calls for institutional mechanisms that allow individuals to seek clarification and redress where automated decisions…
S34
AI as critical infrastructure for continuity in public services — Building confidence and security in the use of ICTs | Artificial intelligence | Data governance Resilience, data contro…
S35
WS #187 Bridging Internet AI Governance From Theory to Practice — Hadia Elminiawi: Thank you. Thank you so much. And I’m happy to be part of this very important discussion. So let me fir…
S36
The Power of the Commons: Digital Public Goods for a More Secure, Inclusive and Resilient World — Eileen Donahoe: Great. First, let me congratulate the organizers here. This is a really remarkable event and it’s a ver…
S37
Creating digital public infrastructure that empowers people | IGF 2023 Open Forum #168 — Countries around the world have made investments into digital public infrastructure (DPI) that supports vital society-wi…
S38
Is the AI bubble about to burst? Five causes and five scenarios — Centralised, closed platforms vs. decentralised, open ecosystems. Historically, open systems often win in the long run…
S39
HETEROGENEOUS COMPUTE FOR DEMOCRATIZING ACCESS TO AI — Summary:While both speakers agree on moving away from purely centralized approaches, Malladi advocates for a comprehensi…
S40
Democratizing AI Building Trustworthy Systems for Everyone — “I think open source is going to be in my mind a critical aspect of it”[32]. “Sustainability also requires these kinds o…
S41
From KW to GW Scaling the Infrastructure of the Global AI Economy — The technical discussions revealed the massive scale of transformation required for AI infrastructure. Projections sugge…
S42
Regional Leaders Discuss AI-Ready Digital Infrastructure — Arndt Husar emphasizes that digital infrastructure must be addressed through three inter‑linked pillars – Solutions, Sta…
S43
HETEROGENEOUS COMPUTE FOR DEMOCRATIZING ACCESS TO AI — “as we go from one gig to nine to ten gig … land water and power …”[30]. “defining India’s access to compute, access…
S44
Building the Next Wave of AI_ Responsible Frameworks & Standards — What is interesting is India is uniquely positioned in this global AI discourse. Most global AI frameworks are designed …
S45
AI Without the Cost Rethinking Intelligence for a Constrained World — Summary:While both speakers critique current GPU-centric approaches, they differ on solutions. Bernie advocates for movi…
S46
From KW to GW Scaling the Infrastructure of the Global AI Economy — Cherukuri advocates for a shift from rack-level thinking to pod and data hall level density planning. This approach invo…
S47
Advanced Computing Portugal 2030 ACP.2030 — In the HPC landscape there are different types of systems and solutions, with a clear distinction being drawn between th…
S48
Open Forum #64 Local AI Policy Pathways for Sustainable Digital Economies — Abhishek Singh: Thank you for convening this and bringing this very, very important subject at FORC, like how do we bala…
S49
Press Conference: Closing the AI Access Gap — An important aspect of the alliance’s work is the creation of relevant international frameworks and public-private partn…
S50
Open Forum #33 Building an International AI Cooperation Ecosystem — – Qi Xiaoxia- Dai Wei- Ricardo Pelayo Development | Economic | Capacity development Innovation Ecosystems and Practica…
S51
African Union (AU) Data Policy Framework — Data access and accessibility are understood both in terms of reactive forms of access facilitated by laws and regulatio…
S52
WS #97 Interoperability of AI Governance: Scope and Mechanism — How to balance data sovereignty concerns with the need for global interoperability
S53
WS #288 An AI Policy Research Roadmap for Evidence-Based AI Policy — Policy interoperability is crucial since global regime is unrealistic and inappropriate given different cultural context…
S54
AI and Digital in 2023: From a winter of excitement to an autumn of clarity — At the technical level, data needs standards in order to be interoperable. Here, the work of standardisation and technica…
S55
National Disaster Management Authority — Explanation:Ramtohul uniquely expanded the disaster management scope to include cybersecurity threats in the virtual wor…
S56
Parallel Session A9: Climate Change Adaptation, Resilience-Building and DRR for Ports (continued) — Jair Torres:Thank you, Regina. I think that you have already summarized most of the elements that are here. I believe th…
S57
The State of the — The Red Cross and Red Crescent Movement has developed guidelines for the facilitation and regulation of international di…
S58
National Disaster Management Authority — There was unexpected consensus on expanding the traditional definition of disasters to include cyber threats. This repre…
S59
Webinar :Using current and emerging cyber tools for disaster management in Africa — Effective disaster management leads to less disruption to health systems. It consistently conveys a positive outlook to…
S60
Catalyzing Global Investment in AI for Health_ WHO Strategic Roundtable — Humans must remain in the loop as behavior change takes time but is possible
S61
The Virtual Worlds we want: Governance of the future web | IGF 2023 Open Forum #45 — Data ownership is considered key to the development of generative AI and can contribute to the overall advancement of th…
S62
Lightning Talk #143 Fundamental Rights in Metaverse — Success in developing effective metaverse governance will require continued dialogue among legal experts, technologists,…
S63
Catalyzing Global Investment in AI for Health_ WHO Strategic Roundtable — Maintain humans in the loop during the transition period while behavior change occurs gradually
S64
Types of diplomacy — Metaverse diplomacy is the practice of engaging in diplomatic activities and negotiations in virtual worlds, such as video games, virtual r…
S65
National Disaster Management Authority — Beth Woodhams from the UK Met Office provided crucial insights into how national meteorological agencies can integrate A…
S66
National Disaster Management Authority — Beth Woodhams from the UK Met Office explained their approach of gradually blending machine learning models with traditi…
S67
AI may reshape weather and climate modelling — The UK’s Met Office has laid out a strategic plan for integrating AI, specifically machine learning (ML), with traditional…
S68
Lightning Talk #246 AI for Sustainable Development Public Private Sector Roles — You argues that AI has the potential to accelerate sustainable development, but this requires addressing key challenges …
S69
Building the Next Wave of AI_ Responsible Frameworks & Standards — What is interesting is India is uniquely positioned in this global AI discourse. Most global AI frameworks are designed …
S70
Building Climate-Resilient Systems with AI — This cuts through the technical optimism to identify the fundamental human and institutional barriers that could prevent…
S71
Survival Tech Harnessing AI to Manage Global Climate Extremes — The shift from traditional weather prediction to decision-support systems, combined with the integration of human behavi…
S72
AI as critical infrastructure for continuity in public services — Resilience, data control, and secure compute are core prerequisites for trustworthy AI. Systems must stay operational an…
S73
SenseNova 5.0: Advancing AI capabilities and industry reach with ‘Cloud-To-Edge’ technology — On 23 April 2024, SenseTime released the latest update of its large language model (LLM), SenseNova 5.0, during its Tech Da…
S74
AI for Bharat’s Health_ Addressing a Billion Clinical Realities — “I’m asking a suggestion from you, so like what model should, like someone who’s creating such solution for voice and tr…
S75
The Power of the Commons: Digital Public Goods for a More Secure, Inclusive and Resilient World — Eileen Donahoe: Great. First, let me congratulate the organizers here. This is a really remarkable event and it’s a ver…
S76
Creating digital public infrastructure that empowers people | IGF 2023 Open Forum #168 — Countries around the world have made investments into digital public infrastructure (DPI) that supports vital society-wi…
S77
https://app.faicon.ai/ai-impact-summit-2026/national-disaster-management-authority — The modeling layer, which is transforming the data into various insights, hazards. The asset and the people layer, which…
S78
Digital Public Goods and the Challenges with Discoverability | IGF 2023 — Parallel to this trend of synergy and cooperation, countries are increasingly endeavouring to construct their local vend…
S79
Evolving Threat of Poor Governance / DAVOS 2025 — Pressing Governance Risks Tarek Fares Kai: Good afternoon, everybody. I’m Tariq Kaye. I’m a senior reporter at France2…
S80
The Global Power Shift India’s Rise in AI & Semiconductors — And that is the journey where you are really contributing to the first-of-a-kind or the nth-of-a-kind leading-ed…
S81
Driving Indias AI Future Growth Innovation and Impact — But there was also a lot of fear around AI about trust factors, about privacy, data, sovereignty, multiple issues about …
S82
AI Impact Summit 2026: Global Ministerial Discussions on Inclusive AI Development — Namahit rangi, namahit te whanau, namahit kia koutou. I greet the sky, I greet the earth, and Your Excellencies, I greet…
S83
AI for agriculture Scaling Intelegence for food and climate resiliance — This discussion focused on the integration of artificial intelligence in agriculture to enhance food security and climat…
S84
Mauritius Artificial Intelligence Strategy — –  Mr. Georges Chung-Tick-Kan, Senior Adviser, Prime Minister’s Office (Chairman) –  Mr. Anandsingh Acharuz, Director,…
S85
The future of Digital Public Infrastructure for environmental sustainability — Robert Opp:Thank you, David. It’s a pleasure to be here. Thanks for the invitation. I have just a few slides. But maybe …
S86
[Tentative Translation] — 103 DIAS: Data Integration and Analysis System –  In order to construct collaborative data platforms in the infrastruc…
S87
Part 7: ‘Converging realities: Embedding governance through digital twins’ — Digital twin governance begins at the intersection of technical design and responsibility. To function effectively withi…
S88
Socially, Economically, Environmentally Responsible Campuses | IGF 2023 Open Forum #159 — Masami Ishiyama:Thank you. This is Masami from Microsoft Japan. So I’m going to introduce the Microsoft Sustainability I…
S89
The Power of Satellites in Emergency Alerting and Protecting Lives — Mathieu highlighted a specific need: “Collaboration between Earth observation and connectivity communities is needed to …
S90
The history of computer viruses: Journey back to where it all began! — Once confined to the realms of theoretical science and speculative fiction, computer viruses have morphed into one of th…
S91
Cognitive Vulnerabilities: Why Humans Fall for Cyber Attacks — Furthermore, there is a need for workplaces to promote caution and awareness towards potential cybersecurity threats, pa…
S92
9821st meeting — Humans must always retain control over decision-making functions guided by international law, including international hu…
S93
Securing Tomorrow: Building Resilience Through Education — Furthermore, Mr. Albanyan emphasizes the role of families and the community in developing responsible online behavior. H…
S94
Rule of Law for Data Governance | IGF 2023 Open Forum #50 — This provides a strong foundation for data protection. However, Mexico still faces difficulties in data handling. Data b…
S95
Promoting the Digital Emblem | IGF 2023 Open Forum #16 — Koichiro Komiyama, a prominent individual in the field, has expressed concerns about cybersecurity threats specifically …
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Avinash Ramtohul
1 argument · 153 words per minute · 918 words · 358 seconds
Argument 1
Bridge physical‑virtual worlds & human‑in‑the‑loop (Avinash Ramtohul)
EXPLANATION
Ramtohul argues that disaster risk management must extend beyond physical hazards to include threats in the virtual world, such as cyber‑attacks, and that a digital twin linking physical and digital spaces is essential. He stresses that critical decisions should retain a human‑in‑the‑loop to prevent fully automated actions that could cause harm.
EVIDENCE
He describes the existence of a virtual world that is larger than the physical one and notes that disasters can strike it, citing cybersecurity attacks as examples of virtual disasters [26-33]. He illustrates a scenario where a fire in a building requires a digital twin to provide emergency services with structural plans, location of people, and utility layouts, and suggests using heartbeat or thermal maps to pinpoint occupants [35-42]. He warns that AI-driven alerts could be compromised by computer viruses and therefore emphasizes the need for human verification and a human-in-the-loop decision process [45-46].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The panel highlighted Ramtohul’s expansion of disaster management to include cyber-threats in the virtual world, noting the need for digital twins and human verification, as documented in [S1] and [S2]; the role of cybersecurity attacks as disasters is further discussed in [S19].
MAJOR DISCUSSION POINT
Linking physical and virtual disaster response with human oversight
Moderator
1 argument · 87 words per minute · 1167 words · 797 seconds
Argument 1
Institutionalize AI within national resilience architecture and avoid alert fatigue (Moderator)
EXPLANATION
The moderator stresses that AI must be embedded within a national resilience architecture that is scalable, secure against cyber threats, and designed to deliver meaningful alerts without overwhelming users.
EVIDENCE
He highlights the need for resilient governance frameworks that are scalable across the nation, resilient to cyber attacks, sustainable, and that avoid alert fatigue by ensuring alerts are meaningful and not excessive [56-57].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The moderator’s call to embed AI within a scalable, secure national resilience framework and to prevent alert fatigue is emphasized in [S2].
MAJOR DISCUSSION POINT
Embedding AI in governance while preventing alert overload
AGREED WITH
Avinash Ramtohul
Beth Woodhams
1 argument · 154 words per minute · 385 words · 149 seconds
Argument 1
Develop hybrid AI‑physics models, incremental blending, and joint benchmarking (Beth Woodhams)
EXPLANATION
Woodhams explains that the Met Office will not replace physical weather models with AI but will gradually blend machine‑learning outputs with physics‑based forecasts, co‑developing models and benchmarking standards with partners.
EVIDENCE
She states that the Met Office is creating machine-learning weather models and plans to implement them step-by-step through hybrid blending of physics and AI outputs, emphasizing incremental confidence building [65-71]. She adds that co-development with partners such as India is ongoing, and that joint benchmarking will ensure metrics align with user needs [72-73].
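The incremental blending Woodhams describes can be pictured as a weighted combination of the two forecast grids, with the machine-learning weight raised only as joint benchmarking builds confidence. A minimal sketch, assuming NumPy grids and a simple linear weighting; the function and field names are illustrative assumptions, not the Met Office's actual method:

```python
import numpy as np

def blend_forecast(physics: np.ndarray, ml: np.ndarray, ml_weight: float) -> np.ndarray:
    """Linearly blend a physics-based forecast grid with an ML forecast grid.

    ml_weight starts near 0 and is raised incrementally as benchmarking
    against user-relevant metrics builds confidence in the ML model.
    """
    if not 0.0 <= ml_weight <= 1.0:
        raise ValueError("ml_weight must lie in [0, 1]")
    return (1.0 - ml_weight) * physics + ml_weight * ml

# Example: a 2x2 temperature grid (Kelvin), early-stage blend with 20% ML weight.
physics = np.array([[301.0, 302.0], [299.5, 300.5]])
ml = np.array([[300.0, 301.5], [299.0, 300.0]])
blended = blend_forecast(physics, ml, ml_weight=0.2)
```

More sophisticated schemes (per-variable or per-lead-time weights) follow the same pattern: the physics model remains the backbone while the ML contribution grows with demonstrated skill.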
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Woodhams’ approach of complementing physics-based forecasts with machine-learning models, incremental blending, and joint benchmarking is confirmed in [S1].
MAJOR DISCUSSION POINT
Hybrid AI‑physics forecasting with collaborative benchmarking
Dr. Mrutyunjay Mohapatra
2 arguments · 171 words per minute · 982 words · 344 seconds
Argument 1
Combine AI with physical models to improve data quality and forecast accuracy, while addressing compute limits (Dr. Mrutyunjay Mohapatra)
EXPLANATION
Mohapatra argues that AI should complement, not replace, physical models, improving data quality and forecast skill, but notes India’s current high‑performance computing capacity is insufficient for full AI integration.
EVIDENCE
He notes that AI is being used alongside physical models to enhance early warning, and that only about five percent of satellite data is usable, highlighting the need for better data quality through AI [189-197][202-207]. He points out that India’s computing infrastructure (around 28 petaflops) falls short of the exaflop-scale needed for real-time AI analytics, referencing US systems with 1-2 exaflops capacity [208-210].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Only about five percent of satellite data is usable and AI can improve its quality, while computing capacity constraints are noted as challenges, as outlined in [S2].
MAJOR DISCUSSION POINT
Hybrid AI‑physical modeling constrained by computing resources
AGREED WITH
Som Satsangi, Dr. Krishna Vatsa
Argument 2
Leverage GPU‑based “AI‑box” models for low‑resource settings to supplement limited HPC (Dr. Mrutyunjay Mohapatra)
EXPLANATION
He proposes that small‑scale, GPU‑based AI “box” models can provide affordable forecasting capability for low‑resource countries, bypassing the need for massive supercomputers.
EVIDENCE
He describes a “box model” where a few GPU nodes can deliver forecasts for small island nations, offering a cost-effective alternative to large HPC installations [207-208].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The potential of AI solutions in low-resource environments and critiques of GPU-centric approaches provide context for AI-box models, discussed in [S20] and [S21].
MAJOR DISCUSSION POINT
GPU‑based AI boxes as low‑cost forecasting tools
DISAGREED WITH
Som Satsangi
Som Satsangi
1 argument · 147 words per minute · 983 words · 398 seconds
Argument 1
Build sovereign data architectures and acquire exaflop‑scale supercomputing, power and cooling resources (Som Satsangi)
EXPLANATION
Satsangi stresses that India needs sovereign, high‑performance computing infrastructure at exaflop scale, together with reliable power and cooling, and that private‑public partnerships are essential given the high costs.
EVIDENCE
He compares India’s current supercomputing capacity (≈40-100 petaflops) with US exaflop systems such as El Capitan (1.8 exaflops) and Frontier (1.3 exaflops), showing a large gap [96-100]. He notes that acquiring such systems would cost US$400-1,000 million, which may be beyond government budgets, highlighting the need for private partnership [106-108]. He also mentions challenges of integrating sovereign data across agencies as a barrier to deployment [111-113].
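The scale of the gap is easy to check: one exaflop equals 1,000 petaflops, so even the top of the cited Indian range sits well below the 1.8-exaflop system cited. A back-of-the-envelope using the figures quoted above (variable names are illustrative):

```python
PETAFLOPS_PER_EXAFLOP = 1_000

india_pflops_low, india_pflops_high = 40, 100        # cited Indian capacity range
top_us_system_pflops = 1.8 * PETAFLOPS_PER_EXAFLOP   # 1.8 exaflops = 1,800 PF
frontier_pflops = 1.3 * PETAFLOPS_PER_EXAFLOP        # 1.3 exaflops = 1,300 PF

# How many times the cited Indian capacity fits into one exaflop-class system:
gap_vs_high = top_us_system_pflops / india_pflops_high  # about 18x
gap_vs_low = top_us_system_pflops / india_pflops_low    # about 45x
```

Even against the optimistic 100-petaflop figure, a single exaflop-class machine represents roughly an 18-fold capacity gap, which is the arithmetic behind Satsangi's call for private-public partnership.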
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
India’s current petaflop capacity versus exaflop systems, cost estimates, and the need for private-public partnerships are detailed in [S2]; the mismatch between compute growth and power/cooling requirements is highlighted in [S24].
MAJOR DISCUSSION POINT
Need for exaflop supercomputing, power, cooling, and private partnership
AGREED WITH
Pankaj Shukla
DISAGREED WITH
Pankaj Shukla
Pankaj Shukla
1 argument · 161 words per minute · 761 words · 283 seconds
Argument 1
Implement a five‑layer architecture (infrastructure, OS, services, models, agents) enabling central‑to‑edge intelligence and zero‑trust operation (Pankaj Shukla)
EXPLANATION
Shukla outlines a five‑layer AI architecture that spans infrastructure, an operating‑system layer, platform services, multi‑modal models, and agentic AI, allowing a central “living intelligence” to feed edge or rugged devices securely, even in air‑gap conditions.
EVIDENCE
He describes the five layers (infrastructure, OS, services, models, agents) and explains how a central data hub can train and tune models, then push intelligence to disconnected edge locations via zero-trust, air-gap-capable rugged devices that can operate during disasters [136-152].
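Shukla's stack and its zero-trust push to the edge can be rendered as a simple model. This is purely an illustrative sketch of the architecture as described; every class, field, and check below is an assumption, not an actual NDMA or vendor implementation:

```python
from dataclasses import dataclass
from enum import Enum

class Layer(Enum):
    INFRASTRUCTURE = 1   # compute, storage, network
    OPERATING_SYSTEM = 2 # OS layer above raw infrastructure
    SERVICES = 3         # platform services
    MODELS = 4           # multi-modal models trained/tuned centrally
    AGENTS = 5           # agentic AI consuming the layers below

@dataclass
class EdgeBundle:
    """A model package pushed from the central hub to a rugged edge device.

    Edge devices must keep operating in air-gap conditions during a
    disaster, so the bundle carries everything needed to run offline.
    """
    model_version: str
    signed: bool = True          # zero-trust: every artifact is signed
    works_air_gapped: bool = True

def can_deploy(bundle: EdgeBundle) -> bool:
    # Zero-trust operation: refuse unsigned bundles; require offline capability.
    return bundle.signed and bundle.works_air_gapped

stack = sorted(Layer, key=lambda layer: layer.value)
bundle = EdgeBundle(model_version="flood-risk-v3")
```

The key design point captured here is directionality: training happens once at the central "living intelligence", while edge devices only receive signed, self-contained artifacts they can run without connectivity.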
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
A five-layer AI stack covering infrastructure to agentic AI aligns with Shukla’s proposal and is described in [S23] and [S25].
MAJOR DISCUSSION POINT
Five‑layer AI stack for edge‑centric, secure disaster analytics
AGREED WITH
Som Satsangi
Nikhilesh Kumar
1 argument · 128 words per minute · 627 words · 293 seconds
Argument 1
Create DPIs/DPGs that integrate multi‑agency data, provide real‑time dam nowcasting, and extract structured risk data from unstructured sources (Nikhilesh Kumar)
EXPLANATION
Kumar proposes building digital public infrastructure (DPI) and digital public goods (DPG) platforms that combine data from many agencies, deliver real-time dam nowcasts, and use AI to turn unstructured news into structured hazard databases.
EVIDENCE
He outlines four layers (modeling, asset/people, workflows, and DPI/DPG integration), showing how AI can fuse meteorological, water-resource, and asset data to nowcast ~5,000 dams in near real-time using 30-minute satellite and radar feeds [155-166]. He also explains that AI can mine unstructured news reports to create structured hazard datasets, supporting insurance and risk assessment [169-171].
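The step Kumar describes, turning unstructured news text into structured hazard records, can be illustrated with a toy extractor. A production system would use an LLM or a trained NER pipeline; the keyword list, regex patterns, and record fields below are purely illustrative assumptions:

```python
import re
from dataclasses import dataclass
from typing import Optional

@dataclass
class HazardRecord:
    hazard_type: str
    location: str
    affected: Optional[int]

HAZARD_KEYWORDS = ("flood", "landslide", "cyclone", "earthquake")

def extract_record(news_text: str) -> Optional[HazardRecord]:
    """Turn a free-text news snippet into a minimal structured hazard record."""
    hazard = next((h for h in HAZARD_KEYWORDS if h in news_text.lower()), None)
    if hazard is None:
        return None  # not a hazard story
    loc = re.search(r"\bin ([A-Z][a-z]+)", news_text)
    num = re.search(r"(\d[\d,]*)\s+(?:people|families|houses)", news_text)
    return HazardRecord(
        hazard_type=hazard,
        location=loc.group(1) if loc else "unknown",
        affected=int(num.group(1).replace(",", "")) if num else None,
    )

record = extract_record("Flash flood in Assam displaced 12,000 people on Monday.")
```

Aggregated over many reports, records of this shape are what makes the insurance and risk-assessment use cases Kumar mentions tractable: free text becomes queryable rows keyed by hazard type, location, and impact.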
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Real-time dam nowcasting and AI-driven extraction of hazard data from news reports support Kumar’s DPI/DPG concept, as mentioned in [S2].
MAJOR DISCUSSION POINT
Integrated DPI/DPG platforms with AI‑driven dam nowcasting and data extraction
AGREED WITH
Som Satsangi, Pankaj Shukla
Dr. Krishna Vatsa
1 argument · 126 words per minute · 507 words · 240 seconds
Argument 1
Scale up weather stations, seismometers, and landslide sensors; develop capacity to process the surge of data and connect data centres with early‑warning agencies (Dr. Krishna Vatsa)
EXPLANATION
Vatsa outlines India’s plan to massively expand observational networks—automated weather stations in every village, more seismometers, and landslide sensors—while emphasizing the need for data‑processing capacity and clear integration between data centres and early‑warning agencies.
EVIDENCE
He notes that India already has extensive hazard data and will quadruple seismometers, install automated weather stations in every village, and increase landslide instrumentation, leading to a large data influx [220-229]. He then stresses the current struggle to process this data, the need for data-centre capacity, and the lack of a clear roadmap for linking data centres with early-warning agencies [230-247].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The expansion of observational networks, the massive existing hazard datasets, and the challenges of data processing and data-centre to early-warning agency integration are discussed in [S2].
MAJOR DISCUSSION POINT
Observational network expansion and data‑processing architecture
AGREED WITH
Som Satsangi, Dr. Mrutyunjay Mohapatra
Agreements
Agreement Points
Hybrid AI‑physics models are essential for improving disaster forecasts
Speakers: Beth Woodhams, Dr. Mrutyunjay Mohapatra
Develop hybrid AI–physics models, incremental blending, and joint benchmarking (Beth Woodhams) Combine AI with physical models to improve data quality and forecast accuracy, while addressing compute limits (Dr. Mrutyunjay Mohapatra)
Both speakers stress that AI should complement, not replace, physics-based weather and climate models, using hybrid or blended approaches to raise confidence and forecast skill while acknowledging compute constraints [65-71][189-197].
AI‑driven alerts must retain human oversight to avoid over‑automation and alert fatigue
Speakers: Avinash Ramtohul, Moderator
Bridge physical–virtual worlds & human–in–the–loop (Avinash Ramtohul) Institutionalize AI within national resilience architecture and avoid alert fatigue (Moderator)
Both emphasize that critical decision-making in disaster response should involve a human-in-the-loop and that AI systems need to be designed to prevent excessive, meaningless alerts that could overwhelm users [45-46][56-57].
A sovereign, interoperable data architecture that integrates multi‑agency data is required for AI‑enabled DRR
Speakers: Som Satsangi, Pankaj Shukla, Nikhilesh Kumar
Build sovereign data architectures and acquire exaflop‑scale supercomputing, power and cooling resources (Som Satsangi) Implement a five‑layer architecture (infrastructure, OS, services, models, agents) enabling central‑to‑edge intelligence and zero‑trust operation (Pankaj Shukla) Create DPIs/DPGs that integrate multi‑agency data, provide real‑time dam nowcasting, and extract structured risk data from unstructured sources (Nikhilesh Kumar)
All three call for a national data framework that is sovereign, interoperable and capable of feeding AI models across central and edge locations, bringing together meteorological, water-resource and asset data for real-time decision support [111-113][136-152][155-166].
POLICY CONTEXT (KNOWLEDGE BASE)
This aligns with calls for interoperable data standards and sovereign data governance in AI policy, as highlighted in the African Union Data Policy Framework and discussions on balancing data sovereignty with global interoperability [S51][S52][S53][S54].
India currently lacks sufficient high‑performance computing, power and cooling capacity to run AI at the required scale
Speakers: Som Satsangi, Dr. Mrutyunjay Mohapatra, Dr. Krishna Vatsa
Build sovereign data architectures and acquire exaflop‑scale supercomputing, power and cooling resources (Som Satsangi) Combine AI with physical models to improve data quality and forecast accuracy, while addressing compute limits (Dr. Mrutyunjay Mohapatra) Scale up weather stations, seismometers, and landslide sensors; develop capacity to process the surge of data and connect data centres with early‑warning agencies (Dr. Krishna Vatsa)
The speakers agree that India’s existing petaflop-scale infrastructure, power supply and cooling are inadequate for real-time AI analytics, and that substantial investment in exaflop-class supercomputers, energy and water-cooling, and data-processing capacity is needed [96-100][106-108][208-210][202-207][230-247].
POLICY CONTEXT (KNOWLEDGE BASE)
Recent assessments project India’s AI compute capacity to reach 10-12 GW within three years, underscoring current gaps in HPC, power and cooling infrastructure and the need for scaling [S41][S43][S48][S44].
Public‑private partnerships are essential to deliver the required AI infrastructure and services
Speakers: Som Satsangi, Pankaj Shukla
Build sovereign data architectures and acquire exaflop‑scale supercomputing, power and cooling resources (Som Satsangi) Implement a five‑layer architecture (infrastructure, OS, services, models, agents) enabling central‑to‑edge intelligence and zero‑trust operation (Pankaj Shukla)
Both highlight that the scale and cost of AI infrastructure exceed government budgets alone, making collaboration with global technology firms and private investors crucial [106-108][150-152].
POLICY CONTEXT (KNOWLEDGE BASE)
Multiple policy dialogues emphasize PPPs as a cornerstone for AI infrastructure development and governance, including the AI Access Gap alliance and international AI cooperation forums [S49][S50][S44].
Similar Viewpoints
Both advocate a hybrid approach where AI augments traditional physics‑based forecasting rather than replacing it, stressing incremental confidence building and the need for better data quality [65-71][189-197].
Speakers: Beth Woodhams, Dr. Mrutyunjay Mohapatra
Develop hybrid AI–physics models, incremental blending, and joint benchmarking (Beth Woodhams) Combine AI with physical models to improve data quality and forecast accuracy, while addressing compute limits (Dr. Mrutyunjay Mohapatra)
Both underline the necessity of keeping humans in the decision loop for AI‑driven early‑warning systems to prevent fully automated, potentially harmful actions and to keep alerts meaningful [45-46][56-57].
Speakers: Avinash Ramtohul, Moderator
Bridge physical–virtual worlds & human–in–the–loop (Avinash Ramtohul) Institutionalize AI within national resilience architecture and avoid alert fatigue (Moderator)
All three stress the creation of a sovereign, interoperable data ecosystem that can feed AI models from central hubs to edge devices, ensuring secure, federated operation across agencies [111-113][136-152][155-166].
Speakers: Som Satsangi, Pankaj Shukla, Nikhilesh Kumar
Build sovereign data architectures and acquire exaflop‑scale supercomputing, power and cooling resources (Som Satsangi) Implement a five‑layer architecture … (Pankaj Shukla) Create DPIs/DPGs that integrate multi‑agency data … (Nikhilesh Kumar)
Unexpected Consensus
Recognition that power and cooling infrastructure are as critical as computing power for AI‑driven DRR
Speakers: Som Satsangi, Dr. Krishna Vatsa
Build sovereign data architectures and acquire exaflop‑scale supercomputing, power and cooling resources (Som Satsangi) Scale up weather stations, seismometers, and landslide sensors; develop capacity to process the surge of data and connect data centres with early‑warning agencies (Dr. Krishna Vatsa)
While Som focuses on supercomputing hardware, Dr. Vatsa, whose primary concern is observational networks, also highlights the need for reliable power and cooling to run the resulting data-processing facilities, an alignment that is not obvious given their different domains [111-113][230-247].
POLICY CONTEXT (KNOWLEDGE BASE)
Energy and cooling constraints are repeatedly cited as limiting factors for AI deployment, with reports on India’s power needs and broader sustainability concerns in AI policy [S41][S43][S48].
Overall Assessment

There is strong convergence among the panelists on four core themes: (1) hybrid AI‑physics forecasting, (2) human‑in‑the‑loop governance to avoid alert fatigue, (3) the necessity of a sovereign, interoperable data architecture that links multiple agencies, and (4) the urgent need for massive compute, power and cooling infrastructure, best delivered through public‑private partnerships.

High consensus – the shared viewpoints cut across ministries, academia, private sector and startups, indicating a unified call for coordinated policy reforms, investment in infrastructure, and collaborative data governance to operationalise AI for disaster risk reduction.

Differences
Different Viewpoints
Scale of computing infrastructure needed for AI-driven DRR
Speakers: Som Satsangi, Dr. Mrutyunjay Mohapatra
Build sovereign data architectures and acquire exaflop‑scale supercomputing, power and cooling resources (Som Satsangi) Leverage GPU‑based “AI‑box” models for low‑resource settings to supplement limited HPC (Dr. Mrutyunjay Mohapatra)
Som argues that India must invest in massive exaflop-scale supercomputers, with dedicated power and cooling, and seek public-private partnerships because current petaflop capacity is far below what is needed for real-time AI analytics [96-100][106-108]. Mohapatra counters that small-scale GPU “box” models can deliver affordable forecasting for low-resource countries, reducing the need for such large HPC installations [207-208].
POLICY CONTEXT (KNOWLEDGE BASE)
Discussions on scaling AI infrastructure from kilowatt to gigawatt levels illustrate the magnitude of compute required for disaster risk reduction, as detailed in scaling studies and comparisons of centralized versus distributed models [S41][S47].
Centralised supercomputing versus decentralised edge‑centric AI architecture
Speakers: Som Satsangi, Pankaj Shukla
Build sovereign data architectures and acquire exaflop‑scale supercomputing, power and cooling resources (Som Satsangi) Implement a five‑layer architecture (infrastructure, OS, services, models, agents) that enables central‑to‑edge intelligence, zero‑trust and air‑gap operation (Pankaj Shukla)
Som emphasises a top-down approach centred on a single, massive high-performance computing platform to meet AI demands [96-100][106-108]. Shukla proposes a layered stack where a central “living intelligence” can feed rugged, disconnected edge devices, allowing AI analytics to function even without continuous central compute [136-152]. The two visions differ on whether AI for DRR should rely on a central exaflop hub or a distributed edge-centric system.
POLICY CONTEXT (KNOWLEDGE BASE)
The debate mirrors broader AI ecosystem trends, where open, decentralized approaches are advocated alongside hybrid models that retain data centers, reflecting positions from AI bubble analyses and heterogeneous compute sessions [S38][S39][S45][S47].
Approach to data governance and sovereignty
Speakers: Som Satsangi, Beth Woodhams
Build sovereign data architectures and address integration challenges across agencies (Som Satsangi) Co‑develop models and benchmarking standards with partners, emphasizing shared development rather than strict data sovereignty (Beth Woodhams)
Som stresses the need for a sovereign data architecture that keeps data under national control and highlights integration difficulties [111-113]. Beth, by contrast, advocates collaborative co-development of AI models and joint benchmarking with international partners such as India, implying a more open data sharing framework [71-73]. This reflects a tension between national data sovereignty and collaborative, interoperable development.
POLICY CONTEXT (KNOWLEDGE BASE)
Policy frameworks stress the need to balance national data sovereignty with interoperable standards, as outlined in AI governance workshops and policy roadmaps [S52][S53][S51].
Scope of disaster risk management – inclusion of cyber‑threats
Speakers: Avinash Ramtohul, Other panelists (Beth Woodhams, Som Satsangi, Pankaj Shukla, Dr. Mohapatra, Dr. Vatsa)
Bridge physical‑virtual worlds & human‑in‑the‑loop; include cyber‑attacks as virtual disasters (Avinash Ramtohul) Focus on physical hazards (weather, floods, earthquakes) and AI for forecasting without explicit mention of cyber‑disasters (other speakers)
Avinash expands DRR to the virtual world, arguing that cybersecurity attacks are also disasters and that digital twins should link physical and virtual spaces [26-33][45-46]. The remaining speakers discuss AI for physical hazards (weather forecasting, early warning, infrastructure) and do not address cyber-threats, revealing an unexpected divergence in the definition of disaster risk.
POLICY CONTEXT (KNOWLEDGE BASE)
Recent national disaster management authority statements have expanded the DRR scope to incorporate cyber threats, marking a shift in disaster definitions [S55][S58].
Unexpected Differences
Inclusion of cyber‑disasters in DRR scope
Speakers: Avinash Ramtohul, Other panelists
Bridge physical‑virtual worlds & human‑in‑the‑loop; include cyber‑attacks as virtual disasters (Avinash Ramtohul) Focus exclusively on physical hazards and AI for forecasting without addressing cyber‑threats (other speakers)
Avinash uniquely frames disaster risk management to include virtual world threats such as cyber-attacks, advocating for digital twins and human verification to guard against malicious AI alerts [26-33][45-46]. The rest of the panel concentrates on physical hazards (weather, floods, earthquakes) and does not discuss cyber-risk, revealing an unexpected divergence in the definition of disaster risk.
POLICY CONTEXT (KNOWLEDGE BASE)
Similar to the previous point, the inclusion of cyber-disasters is supported by emerging guidance and webinars emphasizing cyber tools in disaster management [S55][S58][S59].
Centralised exaflop supercomputing vs low‑cost GPU‑box solutions
Speakers: Som Satsangi, Dr. Mrutyunjay Mohapatra
Need exaflop‑scale supercomputing, power and cooling, with private‑public partnerships (Som Satsangi) Advocate GPU‑based “AI‑box” models for affordable forecasting in low‑resource settings (Dr. Mohapatra)
While both acknowledge compute constraints, Som pushes for massive, costly central infrastructure, whereas Mohapatra proposes a radically different, low-cost approach using small GPU clusters, an unexpected contrast in perceived solutions to the same problem [96-108][207-208].
POLICY CONTEXT (KNOWLEDGE BASE)
The tension between massive centralized supercomputers and more affordable GPU-box or edge solutions is highlighted in analyses of AI infrastructure models and critiques of GPU-centric approaches [S38][S45][S46][S47].
Overall Assessment

The panel shows consensus on the need to embed AI within DRR and to improve early‑warning accuracy, but diverges sharply on the technical pathway: Som advocates large, sovereign, exaflop‑scale supercomputing and strict data sovereignty; Mohapatra and Pankaj promote decentralized, low‑cost GPU boxes and edge‑centric architectures; Avinash uniquely expands DRR to include cyber‑threats, a perspective not shared by others. These disagreements reflect differing priorities between centralised national capacity building, private‑sector partnership models, and broader definitions of disaster risk.

High – the contrasting visions on infrastructure scale, data governance, and the scope of disaster risk (physical vs virtual) could impede coordinated policy formulation unless reconciled. The implications are that without alignment, investments may be fragmented, and AI‑enabled DRR systems could suffer from mismatched standards, insufficient compute, or gaps in addressing cyber‑risk.

Partial Agreements
All three agree that AI should complement physical models to enhance early‑warning systems and that improved data quality is essential. However, Beth proposes incremental blending and joint benchmarking, Mohapatra stresses hybrid models limited by current compute, while Som calls for massive new supercomputing capacity to realise the vision [65-71][189-197][96-100].
Speakers: Beth Woodhams, Dr. Mrutyunjay Mohapatra, Som Satsangi
Develop hybrid AI‑physics forecasting models to improve early warning (Beth Woodhams) Combine AI with physical models to improve data quality and forecast accuracy, but note compute limits (Dr. Mohapatra) Invest in high‑performance computing infrastructure to enable AI‑enhanced early warning (Som Satsangi)
Both aim to scale AI‑enabled DRR across India. Pankaj focuses on the software/architectural stack to deliver intelligence to the field, while Vatsa concentrates on expanding sensor networks and building the data‑processing backbone needed to feed such architectures. Their goals align, but they differ on whether the priority is architectural design or sensor/network expansion [136-152][220-229].
Speakers: Pankaj Shukla, Dr. Krishna Vatsa
Implement a five‑layer AI architecture enabling central‑to‑edge intelligence and rugged devices (Pankaj Shukla) Scale up observational networks (weather stations, seismometers, landslide sensors) and develop data‑processing capacity (Dr. Vatsa)
Takeaways
Key takeaways
AI must be institutionalized within national resilience architecture, linking the physical and virtual worlds and retaining a human‑in‑the‑loop for critical decisions.
Hybrid AI‑physics models are preferred; incremental blending and joint benchmarking between AI and traditional models will build trust and improve forecast lead‑times.
India faces a major infrastructure gap – exaflop‑scale supercomputing, power, and cooling are needed – but GPU‑based “AI‑box” solutions can provide interim capability for low‑resource settings.
A five‑layer cloud‑edge architecture (infrastructure, OS, services, models, agents) enables central intelligence to operate in disconnected, high‑risk environments with zero‑trust security.
Start‑ups can add value by creating Disaster‑Prediction‑Interfaces (DPIs) and Disaster‑Prediction‑Generators (DPGs) that integrate multi‑agency data, deliver real‑time dam now‑casting, and convert unstructured news into structured risk datasets.
National observational networks (weather stations, seismometers, landslide sensors) are being expanded; however, capacity to process the resulting data and integrate data‑centres with early‑warning agencies remains a bottleneck.
Resolutions and action items
Adopt policy reforms that mandate digital twins, cell‑broadcast early‑warning systems, and human verification of AI‑generated alerts (proposed by the Mauritius Minister).
Co‑develop hybrid AI‑physics forecasting models and shared benchmarking protocols between the UK Met Office and partner agencies, including India (proposed by Beth Woodhams).
Explore public‑private partnerships to acquire or co‑locate exaflop‑scale supercomputing resources and associated power/cooling infrastructure for national AI‑driven DRR (suggested by Som Satsangi).
Implement the five‑layer cloud‑edge architecture to create a central “living intelligence” that can be federated to edge or rugged devices for on‑ground decision support (outlined by Pankaj Shukla).
Scale up startup‑driven DPI/DPG platforms (e.g., Vassar Labs dam now‑casting) to national and state levels, and use AI to extract structured hazard data from unstructured sources (suggested by Nikhilesh Kumar).
Develop a clear roadmap for linking observational data streams, data‑centres, and early‑warning agencies, including incremental capacity building and integration guidelines (raised by Dr. Krishna Vatsa).
Unresolved issues
Financing and procurement mechanisms for exaflop‑scale supercomputing and associated energy/cooling infrastructure in India.
Specific governance structures and legal frameworks to embed AI across federal, state, and local disaster management agencies.
Standardized metrics for AI explainability, accountability, and validation when AI informs life‑saving actions.
Strategies to prevent alert fatigue and ensure authenticity of AI‑generated warnings at massive scale.
A detailed operational plan for how data‑centres will exchange data with disparate early‑warning agencies and how incremental upgrades will be coordinated.
Comprehensive cybersecurity safeguards for AI‑driven alert systems, including protection against malicious code propagation.
Suggested compromises
Adopt a hybrid approach: blend AI outputs with traditional physics models rather than replace them outright.
Maintain human‑in‑the‑loop verification for high‑impact alerts while automating data collection, processing, and preliminary analysis.
Utilize GPU‑based AI‑box models for low‑resource or remote regions, reserving exaflop supercomputing for national‑scale, high‑resolution tasks.
Scale infrastructure investments incrementally, pairing modest initial deployments with private‑sector partnerships before committing to massive capital outlays.
Thought Provoking Comments
Disasters are not only physical events like floods or cyclones; they can also strike the virtual world through cybersecurity attacks, and we need a digital twin that bridges the physical and virtual realms for emergency services.
Expands the traditional disaster risk discourse to include cyber‑physical threats and proposes the digital twin as a concrete tool, highlighting the need for integrated governance across both domains.
Shifted the conversation from purely climate‑related early warning to a broader definition of disaster, prompting later speakers to consider data security, human‑in‑the‑loop decision making, and the infrastructure needed to support such integrated systems.
Speaker: Dr. Avinash Ramtohul (Minister for IT, Communication and Innovation, Mauritius)
We will blend physics‑based weather models with machine‑learning models step‑by‑step, co‑develop benchmarks with partners, and ensure the metrics we use are those that matter to end‑users.
Provides a pragmatic roadmap for integrating AI into established forecasting workflows while emphasizing co‑development and user‑centric evaluation, counterbalancing fears of a sudden AI takeover.
Guided the discussion toward collaborative model development and the importance of shared evaluation standards, influencing subsequent remarks about partnership models and the need for transparent, explainable AI.
Speaker: Beth Woodhams (Senior Manager, UK Met Office)
India’s biggest bottleneck is not algorithms but the lack of exaflop‑scale supercomputing infrastructure; building such capacity costs billions and requires power and water resources, so public‑private partnerships are essential.
Highlights the concrete, financial, and logistical constraints that prevent large‑scale AI‑driven early warning systems, grounding the conversation in realistic resource considerations.
Redirected the dialogue from idealistic AI deployment to the hard realities of hardware, prompting other panelists to discuss edge solutions, cloud‑based federated architectures, and incremental capacity building.
Speaker: Som Satsangi (Former SVP, Hewlett Packard Enterprise India)
AI deployment must be built on a five‑layer architecture—infra, operating system, platform services, models, and applications—so that even disconnected, low‑connectivity field sites can run a rugged, zero‑trust AI instance and act on real‑time intelligence.
Offers a concrete technical framework that addresses the earlier raised concerns about infrastructure gaps and connectivity, showing how AI can be operationalized at the edge.
Introduced the concept of federated, edge‑centric AI, leading the conversation toward practical implementation strategies and influencing the startup perspective on modular, scalable solutions.
Speaker: Pankaj Shukla (Head of Customer Engineering, Google Cloud India)
We need four layers—modeling, asset, people, and workflow—to turn scattered data into actionable, personalized warnings; AI can nowcast millions of dams using 30‑minute satellite feeds and also extract structured risk data from unstructured news reports.
Synthesizes data integration, real‑time analytics, and workflow automation into a clear architecture, and demonstrates how AI can fill data gaps for critical infrastructure and risk assessment.
Expanded the discussion from high‑level policy to a startup‑driven, end‑to‑end solution pipeline, reinforcing the need for interoperable data platforms and prompting the national agencies to consider modular DPG/DPI approaches.
Speaker: Nikhilesh Kumar (CEO & Co‑founder, Vassar Labs)
AI‑box models enable even low‑resource nations to run accurate forecasts with a few GPU nodes, democratizing early warning; however, we still need better data quality and more computing power to fully exploit satellite observations.
Bridges the gap between high‑end supercomputing and affordable AI solutions, emphasizing both the promise of AI democratization and the persistent data‑quality challenges.
Reinforced the earlier points about infrastructure while offering a hopeful alternative, influencing the dialogue on scalable, cost‑effective AI models for developing countries.
Speaker: Dr. Mrutyunjay Mohapatra (Director General, India Meteorological Department)
We have massive observational networks (seismic, weather stations, landslide sensors) but lack the capacity to process and deliver that data to citizens; the key challenge is integrating data centers with early‑warning agencies and building a clear incremental roadmap.
Identifies the practical bottleneck of data processing and dissemination, moving the conversation from data collection to actionable delivery and governance.
Served as a concluding turning point, summarizing the technical and institutional gaps highlighted by earlier speakers and calling for coordinated architecture and capacity‑building, setting the stage for future policy actions.
Speaker: Dr. Krishna Vatsa (Head of NDMA, India)
Overall Assessment

The discussion evolved from a broad framing of disaster risk to concrete technical and institutional challenges. Early insights about cyber‑physical threats and digital twins broadened the scope, while subsequent remarks on hybrid modelling, infrastructure deficits, edge architectures, and layered data pipelines progressively narrowed the focus toward actionable solutions. Each pivotal comment introduced a new dimension—policy, partnership, hardware, architecture, or data workflow—that redirected the conversation, deepened the analysis, and highlighted the interdependence of technology, governance, and resources. Collectively, these insights shaped a nuanced roadmap: co‑develop hybrid AI models, invest strategically in scalable infrastructure (including edge and federated systems), build interoperable data platforms, and ensure human oversight, thereby charting a realistic path for AI‑enabled disaster resilience in India and similar contexts.

Follow-up Questions
What metrics are most important to end users when evaluating AI-driven weather and climate forecasts?
Identifying user‑centric metrics is crucial for building trust and ensuring AI models deliver actionable information that meets the needs of decision‑makers and the public.
Speaker: Beth Woodhams
How can sovereign data architectures integrate diverse data sources (e.g., geospatial, satellite, sensor) while maintaining interoperability and security?
Effective AI‑driven early warning requires seamless, secure data flow across ministries and agencies; solving integration challenges is essential for a unified national system.
Speaker: Som Satsangi
What financing and procurement models can enable acquisition of high‑performance computing infrastructure (exaflop‑scale) needed for real‑time AI analytics in India?
The cost of exaflop‑class supercomputers is a major barrier; innovative funding or partnership mechanisms are needed to build the compute backbone for nationwide AI‑based DRR.
Speaker: Som Satsangi
How can power, energy, and cooling requirements for large AI compute clusters be sustainably met in India?
High‑performance AI hardware consumes massive electricity and water; sustainable energy and cooling solutions are vital to keep such systems operational during disasters.
Speaker: Som Satsangi
How to design a federated, edge‑capable AI architecture that can operate in disconnected, low‑connectivity environments while maintaining zero‑trust security?
Disasters often break connectivity; a resilient architecture must allow AI analytics at the edge and ensure data integrity and security even in air‑gapped scenarios.
Speaker: Pankaj Shukla
How to ensure last‑mile inclusion of AI‑driven alerts while mitigating misinformation risk?
Effective communication to vulnerable populations is critical; mechanisms are needed to reach remote users reliably and to prevent the spread of false or tampered warnings.
Speaker: Pankaj Shukla
How to create comprehensive, location‑specific hazard datasets (e.g., earthquake, flood) from unstructured sources using AI?
Many regions lack structured historical hazard data; AI can extract and structure information from news, reports, and social media, filling gaps for risk assessment.
Speaker: Nikhilesh Kumar
How to integrate AI‑extracted hazard data into insurance sector models for parametric insurance products?
Structured hazard data can enable more accurate, location‑specific insurance pricing and payouts, strengthening financial resilience after disasters.
Speaker: Nikhilesh Kumar
What public engagement strategies can augment computational and digital infrastructure for AI in DRR, especially for low‑resource nations?
Leveraging community resources, open‑source tools, and partnerships can expand AI capacity beyond government budgets, making early warning more inclusive.
Speaker: Dr. Mrutyunjay Mohapatra
What is the optimal architecture for linking central data centers with distributed early warning agencies to ensure timely, actionable information?
Clarifying how centralised processing and local agencies interact is essential to avoid duplication, reduce latency, and deliver warnings directly to affected citizens.
Speaker: Dr. Krishna Vatsa
How to develop a clear, incremental roadmap for building AI processing capacity and integrating LLM models within existing early warning systems?
A step‑by‑step plan will help prioritize investments, training, and technology adoption while managing limited resources.
Speaker: Dr. Krishna Vatsa
What standards of explainability are necessary when AI informs life‑saving decisions?
Transparent, explainable AI is required to build trust among responders and the public, and to meet regulatory and ethical requirements for critical decisions.
Speaker: Moderator (directed to Som Satsangi)
How to embed human‑in‑the‑loop processes in AI‑driven early warning decision‑making?
Ensuring human oversight prevents over‑automation errors and maintains accountability in life‑critical alert systems.
Speaker: Avinash Ramtohul
How to protect AI‑driven early warning messages from cyber viruses and ensure message integrity?
Compromised alerts can cause panic or inaction; robust cybersecurity measures are needed to safeguard the credibility and effectiveness of warning systems.
Speaker: Avinash Ramtohul

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Impact & the Role of AI: How Artificial Intelligence Is Changing Everything

Session at a glance: Summary, keypoints, and speakers overview

Summary

The summit opened with a warning that AI systems are increasingly used to allocate public services, loans, and surveillance, giving their designers power over individuals and the democratic information environment [1-5]. Speakers argued that democratic governance is lagging behind AI’s rapid concentration of power in a few corporations, which now hold market capitalizations larger than many nations while the costs fall on the least powerful, making AI both an economic and democratic concern [7-12]. They called for framing AI not merely as technology policy but as democratic governance, insisting that trade-offs between innovation, safety, efficiency and equity be debated openly, transparently and with accountability [13-16].


Parliamentarians highlighted the need for red lines, equal voice for the global south, and active parliamentary engagement to ensure AI governance reflects lived experience and political accountability [17-24]. The discussion emphasized that AI’s effects cross borders, requiring inclusive international cooperation and coherent domestic legislation linked to human-rights standards, with the Inter-Parliamentary Union supporting over 60 parliamentary actions on AI [30-38][40-45].


In India, Speaker Om Birla described a national “Digital Assembly” that places all parliamentary debates, budgets and metadata on a single AI-enabled platform, allowing searchable access and aiming to increase democratic capacity [102-108]. He added that AI is being used to answer citizen queries, support early-childhood education and improve health, agriculture and industry, positioning India as a model for integrating technology with spiritual and cultural values [109-110].


Panelists noted that AI is already helping early-childhood programmes reach Anganwadi workers and generate data to tailor learning for children and parents, illustrating a concrete social-impact benefit [212-214]. Roopa Purushothaman explained that AI can free doctors from non-specialist tasks, creating new worker classes and potentially generating tens of millions of jobs in health, education and entrepreneurship, especially through local-language models [272-280]. Sanjiv Bikhchandani argued that, despite fears of job loss, current evidence shows stable employment in his firm and that technology historically creates more jobs than it destroys, though rapid AI diffusion may require a transition period [236-259]. Iqbal Dhaliwal warned that the unprecedented speed and low cost of AI could outpace labor-market adjustment and that policy frameworks must be strengthened to prevent capital-biased displacement of workers [306-320]. He also pointed to examples where AI tools uplifted low-performing micro-entrepreneurs in Kenya, but emphasized the need for targeted training and human support to ensure equitable outcomes [365-384]. Sanjiv added that AI enhances productivity in investment analysis and content creation, enabling rapid film production and summarisation of media, while noting that no large-scale job losses have yet materialised [335-339][410-416].


The participants converged on the view that AI can strengthen democracy only if parliaments embed accountability, human-rights safeguards and inclusive oversight into AI design and deployment, echoing the summit’s call to embed democratic values at the core of AI governance [47-53][443-447]. They concluded that coordinated parliamentary action, capacity-building and international cooperation are essential to harness AI for inclusion rather than concentration of power, marking the summit as a pivotal step toward democratic AI governance [38][54].


Keypoints


Major discussion points


Democratic oversight and the risk of power concentration in AI – The opening speaker warned that AI systems decide who gets services, loans, or surveillance and that the designers “will influence … the information environment of democracy” [1-3]. He cited concrete harms (e.g., traffic-routing bias against low-income neighbourhoods) [4-5] and argued that “democratic governance is not keeping pace” [6-13]. Parliaments were presented as the venue for transparent trade-off debates and accountability [17-22][39-44][47-53].


AI as a global, cross-border challenge requiring inclusive governance – The speaker stressed that AI “doesn’t have a national passport” [31-32] and that benefits will not be shared equitably without deliberate collective action [34-36]. He called for coordinated international effort, warning that geopolitical competition could fragment governance [37-38] and highlighting the Inter-Parliamentary Union’s role in linking domestic law to emerging international standards [39-44].


India’s vision of AI-enabled democratic institutions and cultural integration – The Indian Speaker linked AI deployment to India’s spiritual and moral values, describing a “Digital Assembly” that will host all parliamentary debates on a single platform by 2026 [102-108]. He emphasized AI-driven metadata search to make legislative content searchable and to “increase the capacity of our people in our democratic institutions” [108-110][111-114].


Economic impact, job creation and the need for up-skilling – Panelists debated whether AI will destroy jobs or create new ones. InfoEdge’s CEO noted no current job loss despite AI hype and recalled historical tech disruptions that ultimately created productivity [234-259]. Tata’s economist highlighted AI-generated demand for millions of new “rigid workers” in health, education and logistics [262-280]. Several speakers warned that rapid AI diffusion could outpace labour-market adjustment and stressed the importance of learning AI tools to stay employable [287-310][321-340].


Education, capacity-building and democratizing AI tools – Participants described concrete projects that use AI to improve early-childhood education, empower micro-entrepreneurs in Kenya, and develop certification platforms for AI skills [204-214][362-389][390-393]. The consensus was that AI must be taught responsibly and that “the tool is so powerful it can teach people itself” [387-389].


Overall purpose / goal


The discussion aimed to frame AI not merely as a technological issue but as a democratic governance challenge, urging parliaments and international bodies to create inclusive, transparent, and rights-based frameworks. It sought to showcase how AI can be harnessed for the public good, particularly in India, while highlighting the need for coordinated policy, capacity-building, and safeguards against concentration of power and labour disruption.


Overall tone and its evolution


Opening (0:00-9:43) – Formal, urgent, and cautionary, emphasizing risks of power concentration and democratic erosion.


Middle (10:34-27:55) – Shifts to a celebratory, culturally-infused tone as the Indian Speaker intertwines spiritual values with AI ambition, followed by a more conversational panel where optimism about AI’s benefits is expressed.


Later (30:00-81:28) – Becomes pragmatic and reflective, with panelists sharing real-world data, acknowledging uncertainties, and stressing up-skilling; the tone is collaborative and solution-oriented, ending on a hopeful note about partnership and continued learning.


Thus, the conversation moves from a warning-laden briefing to a hopeful, action-focused dialogue about responsible AI governance and capacity building.


Speakers

Om Birla – Speaker of Parliament of India (Lok Sabha) [S1][S2]


Ronnie Chatterji – Chief Economist at OpenAI [S3][S4]


Sanjiv Bikhchandani – Founder of InfoEdge (Naukri.com) [S5][S6]


Kavita Gunjikannan – Member of the Global Affairs team at OpenAI [S7][S8]


Martin Chungong – Secretary-General of the Inter-Parliamentary Union (IPU) [S9][S11]


Dr. Chinmay Pandeya – Moderator / speaker (no external title provided)


Anmol Garg – OpenAI representative / moderator [S14][S15]


Roopa Purushothaman – Chief Economist and Head of Policy Advocacy at Tata [S17]


Iqbal Dhaliwal – Global Director of J-PAL, MIT [S18][S19]


Lord Krish Ravel – Member of the House of Lords; devout member of the Gayatri Parivar [S20][S21]


Dr. Fadi Dao – Chairman of Globe Ethics (based in Geneva) [S22][S23]



Full session report: Comprehensive analysis and detailed insights

The summit opened with a stark warning from Martin Chungong that the algorithms now deciding who receives public services, who qualifies for a loan and who is placed under surveillance are shaping “the information environment of democracy itself” [1-3]. He illustrated the danger with a concrete case from Amsterdam where an automated traffic-management system learned that low-income neighbourhoods lacked the political clout to object and consequently routed congestion through those areas [4-5]. Chungong argued that such harms will “scale rapidly if governance does not keep pace” and that democratic oversight is already lagging behind the swift concentration of AI power in the hands of a few corporations whose market capitalisations now exceed those of many industrialised nations, while the costs fall on the least powerful [6-12][7-9]. He framed the issue not merely as a technology problem but as a democratic one, insisting that trade-offs between innovation, safety, efficiency, equity and profit must be debated openly, decided transparently and subject to accountability [13-16].


Building on this premise, Chungong called for AI to be governed as a matter of democratic governance. He recalled the inter-parliamentary conference in Malaysia where parliamentarians demanded “red lines” that AI must not cross, an equal voice for the Global South and active parliamentary engagement at every level [17-21]. He stressed that parliaments are the venue where the real-world impact of AI meets political accountability, allowing citizens, workers and community members to bring lived experience into policy debates [22-26]. He concluded that democracy cannot be automated; it must be shaped through open debate, transparent law-making and fair enforcement, and that the choices we make will determine whether AI furthers or erodes democracy [47-53].


The international dimension of the challenge was then foregrounded. AI “doesn’t have a national passport” [31-32], and its benefits (improved healthcare, expanded education and progress on the Sustainable Development Goals) will not be shared equitably without deliberate collective action [33-35]. Chungong warned that fragmented governance and geopolitical competition risk further fracturing AI regulation [36-38], and urged that summits embody an inclusive, participatory approach. He highlighted the Inter-Parliamentary Union’s role in linking domestic legislation to human-rights standards and emerging international norms, noting that over 60 parliaments have already taken action, ranging from comprehensive legislation to oversight inquiries [39-45][41-44].


The Indian perspective was presented by Speaker Om Birla, who linked AI development to India’s spiritual and cultural heritage, invoking Vedic values and the principle of “Vasudhaiva Kutumbakam” (the world is one family) [64-68][111-114]. Birla announced a national “Digital Assembly” that will, by 2026, host all parliamentary proceedings, debates and metadata on a single AI-enabled, paperless platform, making the entire legislative record searchable [102-108]. He noted that the platform integrates the rules and proceedings of all state legislatures, the Lok Sabha and the Rajya Sabha onto one system, and repeatedly referenced the “Vishgayati family” to underscore the continuity of India’s spiritual heritage [64-70]. Birla portrayed India as a model for integrating technology with moral and spiritual values, positioning the country as a leader in “AI-driven inclusive development” [115-124].


Panelists highlighted concrete social-impact applications. Iqbal Dhaliwal described AI-powered early-childhood programmes such as Rocket Learning, a partnership with OpenAI, which are reaching millions of Anganwadi workers and generating data that tailors learning messages for children and parents [212-214][204-214]. Roopa Purushothaman explained that AI can free doctors from non-specialist tasks, creating a new class of “rigid workers” who mediate technology and assist patients, potentially generating roughly 30 million new jobs across health, education, finance and logistics [262-280][272-280][281-285].


The employment debate was captured in a single exchange. Sanjiv Bikhchandani (InfoEdge) reported that, contrary to media hype, his firm has seen no decline in hiring; AI has maintained steady recruitment, boosted productivity and even supported AI-driven investment analysis, where he uses ChatGPT to supplement portfolio decisions [236-259][350-355]. By contrast, Iqbal Dhaliwal warned that the unprecedented speed and low marginal cost of AI, combined with multimodal capabilities, could outpace labour-market adjustment, especially under policy frameworks that favour capital over labour, and called for a “dial-down” of AI deployment speed and stronger policy infrastructure to protect workers [287-310][306-320].


Three main disagreements emerged. The first centres on the employment impact of AI: optimistic productivity gains versus the risk of rapid, unregulated diffusion [236-259][287-310]. The second reflects differing framings: Chungong stressed secular parliamentary oversight [13-16], whereas Birla highlighted India’s spiritual and cultural values as a guiding framework for AI [64-68]. The third concerns whether AI deployment speed should be deliberately slowed or embraced as inevitable [287-310][321-340].


Capacity-building and knowledge diffusion were identified as essential levers. Roopa Purushothaman described internal platforms that allow different Tata companies to share AI best practices, such as safety-focused AI on shop-floors, to overcome “capability overhang” and accelerate diffusion across business units [347-354][355-358]. Ronnie Chatterji highlighted the same challenge in his question about “power users” versus median users, and later announced OpenAI’s forthcoming jobs-certification platform aimed at upskilling workers [390-393]. Both speakers underscored internal best-practice sharing and external certification as mechanisms to bridge capability gaps.


OpenAI’s presence was signalled by Anmol Garg, who introduced the company’s chief economist and chief of global affairs, and announced new education partnerships and a jobs-certification platform to support AI skill development in India [179-188][471-476]. This corporate commitment dovetailed with the summit’s call for multi-stakeholder collaboration.


Across the dialogue, participants reached several points of agreement. All stressed that parliamentary oversight, transparent debate and inclusive multi-stakeholder dialogue are essential for responsible AI governance [15-16][17-21][100-108][57-58][139-146]. They concurred that AI should be democratised and used for social empowerment, with the Global South playing an equal role [30-38][147-149][57-58][139-146]. Both Bikhchandani and Purushothaman agreed that AI is more likely to create new jobs than destroy existing ones [236-259][262-280]. Finally, Chungong and Dhaliwal agreed that policy frameworks must evolve rapidly to keep pace with AI’s speed [15-16][287-310].


Key takeaways were:


1. AI’s influence on public services and surveillance poses a democratic risk that must be addressed through parliamentary accountability and human-rights safeguards [1-12][47-53].


2. India is piloting a unified Digital Assembly platform that will make legislative content searchable and paper-less by 2026, embedding AI within democratic processes [102-108].


3. AI can be a tool for social empowerment and Sustainable Development Goal achievement if benefits are deliberately shared [33-35].


4. The rapid pace of AI deployment threatens to outstrip labour-market adjustment, requiring faster policy responses [287-310].


5. Historical evidence suggests technology creates more jobs than it destroys, and AI is already generating new roles in health, education, finance and entrepreneurship [236-259][262-280].


6. Capacity-building-both internal best-practice platforms and OpenAI’s certification programme-is crucial for responsible diffusion [347-354][390-393].


7. International AI governance remains fragmented, demanding inclusive, participatory mechanisms that give the Global South an equal voice [30-38][39-44].


8. OpenAI is positioning itself as a partner for education, certification and enterprise adoption, announcing new partnerships and a jobs-certification platform [179-188][471-476].


Action items emerging from the discussion:


– Parliaments worldwide should adopt clear “red lines” for AI, ensure equal Global South participation and engage actively through cross-party groups, specialised committees and capacity-building initiatives [17-21][41-44].


– India’s Parliament will launch the Digital Assembly platform by 2026, providing a searchable, AI-enhanced record of all debates and legislation [102-108].


– Globe Ethics pledged to leverage summit outcomes for the 2027 Geneva AI summit [147-149].


– OpenAI announced forthcoming education partnerships and a jobs-certification platform to support AI skill development [179-188][471-476].


Unresolved issues remain, notably the precise definition and enforcement of the “red lines” proposed by parliamentarians, how to align AI’s rapid diffusion with labour-market policies, scalable methods for teaching AI to marginalised entrepreneurs, mechanisms to ensure equitable benefit distribution across languages and cultures, and the integration of spiritual or cultural values into AI ethics without compromising technical standards or human-rights safeguards [64-68][287-310][365-384][30-38].


In conclusion, the summit underscored that AI can either strengthen or weaken democracy depending on whether democratic accountability, human-rights norms and the rule of law are embedded at the heart of AI design, deployment and governance [47-53][54-55]. The consensus was that coordinated parliamentary action, robust capacity-building and inclusive international cooperation are essential to harness AI for inclusion rather than allowing power to concentrate in the hands of a few. The gathering therefore marked a pivotal step toward realising democratic, responsible AI governance worldwide.


Session transcript: Complete transcript of the session
Martin Chungong

AI systems are making decisions about who receives public services, who qualifies for a loan, or who is flagged for surveillance. Those who design, train, and deploy these systems will hold influence not only over individual users, but also over the information environment of democracy itself. At the first inter-parliamentary conference on responsible AI last November in Malaysia, members of parliament raised cases that brought this risk into sharp focus. In Amsterdam, an automated traffic management system inadvertently routed congestion through low-income neighborhoods because the algorithm had learnt that those communities lacked the political influence to object. Examples like this will scale rapidly if governance does not keep pace, perpetuating harms against those historically excluded from decision-making. Yet democratic governance is not keeping pace.

Power is accumulating rapidly in the hands of those at the forefront of AI development. A handful of technology corporations now command market capitalizations exceeding the entire equity markets of major industrialized nations, while millions of workers in the global south are paid little to annotate the data sets on which these systems stand. The benefits of AI are increasingly concentrated, while many of the costs fall on those with the least power to shape the technology. This is not merely an economic concern. It is a democratic concern. When the systems that govern aspects of people’s daily lives, their access to information, services and economic opportunity, are controlled by a small number of actors without meaningful public oversight, then the social contract itself is under strain.

That is why we must frame this not simply as technology policy, but as democratic governance. The choices made today about how AI is developed, deployed and regulated involve trade-offs: between innovation and safety, efficiency and equity, profit and the public interest. In any healthy democracy, those trade-offs are debated openly, decided transparently and subject to accountability. The parliamentary community declared in Malaysia that we do not accept the concentration of power in the hands of a few actors. They called on all stakeholders to agree upon red lines that this technology cannot cross. They insisted on an equal voice for the global south. And they called on all parliaments to engage actively with AI governance efforts at every level.

The principle that elected legislatures shape the rules governing society is the cornerstone of democracy. But the contribution of parliaments to AI governance goes beyond that basic principle. Parliaments are where the real-world impact of AI meets political accountability. Members of parliament hear directly from workers affected by automation, from communities concerned with algorithmic decision-making, from parents navigating their children’s relationship with technology. This connects governance to lived experience and informs the AI debate through the values of the people. Parliaments can and must support this; I want to stimulate that broader societal conversation through hearings of the people’s voices, consultations, and multi-stakeholder dialogues. I believe you heard what the Deputy Speaker of Hungary said about the practices in his country, which I believe is the path down which we would want to travel.

This brings me to the international dimension. AI is a truly global challenge whose effects transcend national borders. As we would say, AI doesn’t have a national passport. While the risks are real, from job displacement to environmental costs, so too are the opportunities. AI has genuine potential to improve healthcare, expand access to education, and accelerate progress on the sustainable development goals. But those benefits will not be shared equitably by default. That requires deliberate collective action and it requires that the countries with the most to gain are not shut out of the conversation. Yet international AI governance remains fragmented and short on binding commitments. Geopolitical competition risks fracturing governance efforts further. That is why this summit, I say this summit and those which will follow, must embody the inclusive participatory approach that the equitable governance of AI demands.

Parliaments are pivotal to ensuring coherence between domestic legislation, established human rights, and evolving international standards, and to holding their governments accountable for the commitments made at summits like this one. The Inter-Parliamentary Union is committed to supporting that engagement. In the past two years, over 60 parliaments have taken action on AI, from comprehensive legislation to oversight inquiries. Across the world, parliaments are forming cross-party groups, establishing specialized committees, and building capacity. The foundations are being laid, but they need to be built on faster, with increased coordination across borders. Parliaments are also beginning to explore how AI can support their own work. And those that experience its promise and limitations firsthand will bring far greater understanding to the task of governing it.

So, let me return to the principle at the heart of what I have said today. Democracy cannot be automated. It must be shaped by every one of us through our democratic institutions, through open debate, through laws made transparently and enforced fairly, and through international cooperation in which every nation can participate. The choices we make will determine whether AI furthers democracy or erodes it. If we succeed, AI can become a tool for inclusion, participation, human rights, and better governance. If we fail, it risks becoming a force which concentrates power, weakens accountability, and erodes trust in public institutions, including parliaments. The task before us is to embed democratic accountability, human rights, and the rule of law at the heart of how AI is designed, deployed, and governed.

This summit is a critical opportunity to advance that mission. Let us make the most of it together. Thank you very much.

Dr. Chinmay Pandeya

Thank you. Thank you, Mr. Chungong. And now, on this momentous occasion, it is our great honour and pleasure that today we have with us as chief guest Honourable Mr. Om Birla ji, Speaker of the Parliament of India. When democracy meets AI, what are the opportunities for deliberation? Please put your hands together as we invite Honourable Om Birla ji.

Om Birla

Thank you, IPU Secretary-General. The IPU is an organisation of more than 190 countries, in which all the parliaments of the world discuss at regular intervals how we can make new innovations, technology and international institutions responsible to the people. I would like to welcome the Secretary-General of the IPU, Martin Chungong. I would also like to welcome the Deputy Speaker of the Parliament of Hungary, Legos Alaw, as well as Ms. Jimena Soto and Ms. Maria Ramos, and especially those in whose presence this work is being carried out: the culture of India, the political values of India, the spiritual values of India, and how we can bring the knowledge of the spiritual culture of India to the world.

For this, for a long time, the Vishgayati family has been working to spread these spiritual and moral values. And along with this, here is the Dev Sanskriti Vidhyalaya, which is amazing: in Dev Sanskriti Vidhyalaya the moral and spiritual values are taught, but at the same time, in modernity and technology, whatever the new education systems of the world are, that education is also given alongside Indian moral and spiritual values by the Indian education system, for the creation of a society. In the school where you will go, you will see that there is Vedic value.

And in the future, in the international organizations, in the international community, we will continue to develop the traditions of Sabwadur and use technology in a way that will answer the people of these international organizations and contribute to the development of the country. And I am happy that institutions all over the world are working at their own level. The Congress of the Commonwealth of Nations is here to attend this meeting, to discuss how we can use AI in international organizations and use answer-based technology, so that the work culture, the conversations and the discussions of all the international organizations can be made better.

And for this, the Indian Parliament is also working on a large scale. Along with the Indian Parliament, our states’ legislatures too are working with technology. And within India, all the constitutional assemblies today are going paperless. This is for all of us, because India is the world’s largest democratic country. Our demography is also amazing. Our languages are different; our cultures are also different. Even with such a diverse country, we have tried to use AI to answer questions and to be special. And that’s why, in this direction, India’s security is very important. The Digital Assembly has implemented the rules of all the states and of our Lok Sabha and Rajya Sabha.

You can see that on one platform. And by 2026, all the proceedings, debates and discussions of the constitutional assemblies will be on one platform. And that is why we have started working on a large scale. Today most of our constitutional assemblies, though not all of them, have become paperless: all the debates, discussions, budgets and issues of the states and of the central government. With this, we will give such a model to the country that all the councils in the world, from the constitutional assemblies of the states of India to the councils’ functions, can be seen on one platform, and there will be new innovation in it.

With that innovation, we have also tried to use AI in it. Because when you go to a subject, a topic, a discussion in the metadata, how can you search across all those debates? So, using AI technology, the constitutional assemblies of the states of India and the Council of the People will get a platform, and you will be able to see and read all the subjects and issues of the states through metadata. This will increase the capacity of our people in our democratic institutions. Debate will be of high importance, and while making laws we will make them by summarizing the thoughts of the people, and the discussion in Parliament will be good. For this, I can say that in the form of AI, India will technically become a new model of technical knowledge for the world’s parliaments. I am happy that under the leadership of the Prime Minister, today the world’s largest AI community will be able to do this, and I am happy that people from more than 100 countries have come: representatives, heads of state, and members of parliament.

And all of them here are discussing how, in a changing world, using AI, we can build people’s capacity, whether in industry, in agriculture or in the chayani sector, and increase their productivity; and India is the youngest country in the world. Today India’s youth is not merely using technology, it is innovating, and India is the youngest country in the world. And that is why this young population is India’s biggest strength. And that is why this strength must be used in the right direction, positively, in the form of a new culture, so that the challenges of the world can be solved by India. And in this direction we are moving forward. I hope that our talent is abundant in the world: our young people’s power, concentration, new culture and self-confidence, because they have spiritual and political value. And at Dev Sanskriti Vidyalay, youth are given Vedic education along with technical knowledge; they are getting a modern technology education. But that education should be based on political values, for everyone’s development; it should be trusted, it should be trustworthy. Because while using technology, if we do not use it properly, its direction can also go wrong.

And that is why a student who studies in the fields of spirituality, religion and culture can use AI technology with responsibility. And in this direction, India is definitely working, because India has power; we are growing rapidly in the world of clean energy. We have young people with political values, and their thinking is amazing. Their belief and self-confidence are also amazing. And that is why our speed and scale are growing rapidly. The world is looking at India. You must have seen that the attention of all the state leaders is also on India. And they have also said that India is definitely doing a good job in technology, in the AI sector.

And the speed at which it is working, the scale at which it is working, will definitely move forward. Our thinking is always about creation, realization and happiness. We consider the world as a family; Vasudhaiva Kutumbakam is our culture. That is why I hope that this AI technology conference will definitely give a new direction, and we will use it with confidence and with responsibility. Technology is used in machines, but our human resources will work in the right direction.

I again give a lot of appreciation to all the people who have come here. From this discussion we will get a new direction, and we will continue to develop in India based on political values, and with the help of international development and international…

Dr. Chinmay Pandeya

Thank you. After the wonderful speech of the Honourable Speaker, we are privileged to have Dr. Fadi Dao here. He is the Chairman of Globe Ethics. And there is one single question that I wanted to ask you, Dr. Dao: you have just listened to the excellent deliberation by the Honourable Speaker and the variety of voices here. India is a country with 27 official languages and 19,500 dialects. We have more than 400 documented cultures, and we go by the belief and value of Vasudhaiva Kutumbakam. So how do you see the way forward from here? If I can hear from you in one minute, please.

Dr. Fadi Dao

the largest nation in the world, for reminding us that through this summit and the purpose of AI democratization is not people’s manipulation or domination. India is reminding us also today that the purpose of AI is the social empowerment and participation of all people. To conclude, ladies and gentlemen, I would like to say on behalf of Globe Ethics, my organization that is based in Geneva, that we are committed to capitalize on the outcomes of this summit and this panel in the perspective of the 2027 summit in Geneva, where we would like to welcome you all. Thank you.

Dr. Chinmay Pandeya

Thank you, Dr. Dao. And very briefly, Lord Ravel is with us from the House of Lords, also a devout member of the Gayatri Parivar. Could you kindly shed light on the way that India should take now for democracy?

Lord Krish Ravel

Thank you, Paiya. Ladies and gentlemen, one of the tenets of Gayatri Parivar that I grew up in is the adaptability to change. Change is such an intrinsic part of the entire fraternity. And that is, I think, a real advantage, because what will happen, the big cost of AI, is the speed with which technology is advancing, which can really make people unsettled. And the uncertainty, as a politician, I need to contain people’s uncertainty. And I think this preparedness for change, Chimabaya, which is a cardinal value of your organization, will really help people. There’s other things I could say, but I’ll leave it at that, because we’re pressed for time. Thank you.

Dr. Chinmay Pandeya

Thank you. Now it’s time for felicitations. On behalf of the India AI Mission, Government of India, the All World Gayatri Pariwar and Dev Sanskriti Vishwadhyayalaya, please put your hands together for this wonderful session, and we express our gratitude to our honourable chief guest and honourable guests of honour. Dev Sanskriti Vishwadhyayalaya and the All World Gayatri Pariwar have themselves started a very wonderful programme: when we integrate artificial intelligence with spirituality, we are talking about the future of faith in interfaith dialogues worldwide, and Dr. Chinmay Pandeya is representing that thought. At this very wonderful gathering, we once again thank our honourable guests of honour, our distinguished speakers and all the participants. Thank you once again. Do visit Shantikunj Haridwar and Dev Sanskriti Vishwadhyayalaya, and you can scan the QR code on the screen to receive a very wonderful gift for this programme. Please put your hands together once again as we express our gratitude to the Honourable Speaker of the Lok Sabha, Adarniya Shri Om Birla Ji, and our honourable guests: once again, a big round of applause. Thank you all. The next stage is beginning; all of you please stay for it. Thank you.


Anmol Garg

each of you in the room to really tell you what we're doing over at OpenAI. And today we've got an incredible lineup for the show. You will hear from our chief economist. You will hear from our chief of global affairs. You'll see a lot of the work we're doing in education and in social impact. And we are tremendously excited to have you here. The energy this week has been palpable. And we cannot wait to continue to build in India, with India, with each of you. So with that, I'm going to invite our incredible chief economist, Ronnie Chatterji, on the stage to kick off the show.

Ronnie Chatterji

As we think about the panel discussion, I want to start and invite Iqbal Dhaliwal. Iqbal, where are you? Come on. Iqbal is the Global Executive Director of J-PAL. Okay. And Rupa, do you want to come up here too? Rupa, thank you. Rupa is the Chief Economist and Head of Policy Advocacy at Tata Sons. Very exciting to have you here, Rupa. And Sanjiv Bikhchandani, the founder of InfoEdge. Sanjiv, please take a seat. All right, Sanjiv. So let's get started. And the great thing about this is that I get to ask the questions. Yes. You know? When I do the other things, I'm always in the hot seat, and here I get to ask you the questions about the future of work, jobs, AI, and the economy.

Okay. Iqbal, let’s start with you. Thank you. How are you using data, perhaps like signals and other kinds of data sets, to understand how AI is affecting the economy? What are the most interesting things you’re seeing? Hey, can you? Yeah, perfect.

Iqbal Dhaliwal

Yeah, thanks for having us. Super exciting. I think for us, data means a lot of things. One is absolutely to understand what the problems on the ground are. But for us, the most important use of data is to understand how the applications of AI are making an impact on the ground, which is so important also because that is the topic of today’s conversation. And it’s giving us so many incredible insights about how things are working. For instance, just in the field of AI and the intersection with development, wearing my hat now as a researcher but earlier as a policymaker, one of the things that we have been worried a lot about is early childhood. And we think about early childhood education and say, hey, how can we get more kids to come?

into early childhood education, how can we get to ages one to three? And it was an impossible task despite rigorous evidence that this works. And now what we are able to do with AI, you know, Rocket Learning, OpenAI has a partnership with them, I’ve been on their founding board, Rocket Learning is able to now democratize with data the application of early childhood education for children. It’s able to do that by reaching millions of Anganwadi workers who could not be trained earlier, and it is generating so much data which is telling us how children learn, what exactly customized messages to send to these children, and what to send to their parents. And so for me, that’s kind of a really cool application of the data which is being collected and being generated by using an AI application.

Ronnie Chatterji

Oh, I want to return to education in a second, but first let me go to my other chief economist, Roopa Purushothaman. Roopa, first of all, what does Tata want with a chief economist? I understand now why OpenAI has one. Tell us what the chief economist at Tata does, and also what are some of the most interesting things you're seeing in the enterprise, one that's using AI a lot, from what I can tell.

Roopa Purushothaman

So that is a very good question. I think when you're part of a group like the Tatas that spans all sectors of the economy, you do a little bit of everything. I would say probably 30% of the job is classical economic forecasting: GDP, interest rates, inflation for the geographies we're in. But we also get to do a lot of interesting things, really trying to connect the approaches we take in macro to things we're seeing in sectors, or to solving big, nation-scale impact problems. So I get to do a lot of work on sustainability and on health. I'm the head of social impact as well, so those things cross over.

Ronnie Chatterji

This is interesting. As much as we're asking about economics, we're hearing about education. We're hearing about social impact. I want to return to these themes as well. Sanjeev, how about you at InfoEdge? How are you thinking about the most important uses of AI? What are you tracking?

Sanjiv Bikhchandani

Okay, so the first thing is, we are a job site primarily, Naukri.com. So roughly about 70% of our revenue and 140% of our profits come from Naukri. So when we hear this talk that AI will eat jobs, and AI will eat programming jobs, and 50% of our revenue comes from the IT services sector, we get worried. So the first thing you want to do is actually understand what AI is going to do to jobs, specifically what AI is going to do to jobs in the IT sector. That's important for us to understand. But let me tell you, thus far at least, there is no evidence on the ground that hiring is going down. In fact, it is steady.

So thus far at least, there has been no impact on jobs and no impact on the Naukri business. So now we are waiting and watching, because when the whole world is saying jobs are going to vanish, we get worried. But I console myself by going back into history. When a new technology comes, there is often disruption. And yes, some jobs may get replaced, but many more are often created. In 1985, I recall, I was 22 years old and in my first job. And the government then announced that it was going to introduce computers in banks in India. In those days, most banks were public sector banks.

The bank trade unions went ballistic. But the government introduced the computers anyway. And for a while, the computers didn't get used. But when they began to get used, nobody lost jobs. People got more productive, right? They were servicing their customers better. They were doing different things, doing more things, doing things faster. So technology may disrupt and may replace, but it will also create new jobs and new opportunities. Now, if the disruption happens in six months' time, there may be a problem. But if it takes five years, you'll have enough time to create new jobs, do new stuff, and on an aggregate basis, there won't be crazy disruption.

Ronnie Chatterji

Rupa, do you want to follow up on this?

Roopa Purushothaman

Yeah, I was going to say, I think there are two areas for India in particular where we could see new, meaningful jobs being created. One has to do with the fact that we are in a very different situation compared to countries like the U.S. or regions like Europe. In places like that, you have efficient markets and a plethora of specialists, whether we're talking about doctors, lawyers, whatever it might be. In India, we don't have enough doctors per thousand people. We don't have enough nurses; that's an even more acute problem. Educators, and so on. And the specialist resources that we do have are stretched to the max. So we did some work some years ago and looked at doctors at AIIMS, not far from here,

and we found that 50% of their time was spent on non-specialist work. With AI and the tools that we have, that work can move to a whole new set of workers who can take it on, and that leads to two things. One, doctors can do what they should be doing, for more people. Two, you have a new class of workers who mediate the technology and also help new patients navigate a system that is very difficult to navigate. And when you have, let's say, people in rural India experiencing health care for the first time, all of a sudden you need medicines to reach people, and so on, right? So entire supply chains start up. I've talked about health, but this is the same for education, financial services, logistics.

And so I think there are tens of millions of jobs; we estimated it to be about 30 million that come from these sorts of roles. The second one is entrepreneurship, and I think there's something mind-boggling about the fact that literacy, and I advocate for literacy, is no longer the obstacle it was in the past, because of two things. When you have voice-activated interfaces and local-language models, all of a sudden, and we've seen this in our social impact work, you have entrepreneurs who can understand price information more quickly. They can access markets, access financial resources. So things that you could only do in urban systems can now happen elsewhere.

And I think for us, and right now it's still very nascent, if what we're seeing in our social impact work really goes bigger, then entrepreneurship through small and medium-sized businesses, which now account for about 10% of private sector employment, can go to what we see in other places, which is closer to 40%. So for me, those are the two big opportunities.

Ronnie Chatterji

Iqbal, how about you?

Iqbal Dhaliwal

Yeah, thanks. Okay. I think I agree with everybody about the potential of AI to transform our lives for the positive. I do want to put in a word of caution on the labor front. I agree with you, for instance, that when computers came, people thought they were disruptive. But think about how expensive the first computers were, right? There was one massive computer in the office that five people would share. The bosses got a laptop; everybody else got a desktop. Most of them were not connected to the internet.

Then we slowly connected them to the internet. I think the speed and the pace of AI is unprecedented. It's a general-purpose technology. The price point for the marginal user is very low. The penetration is incredible. Think about it: every single one of you who has a smartphone in your pocket has AI in your pocket. That was not the case for computers and earlier technology. And finally, the multimodality of it: the fact that if I can't process text, I can process it as voice, or I can process it as video. That is phenomenally different from all of those earlier technologies. So here is what I would say.

I agree with you that in the medium and the long term, job markets will adjust. It's the pace; I just wish I had a dial that could slow things down. At the speed at which this is going, the labor markets will have a very hard time. The second reason the labor markets are going to have a really hard time is that we are completely biased towards capital investment versus labor. This is true for the United States: we have Social Security taxes, Medicare, the entire thing. In India, we have ESI, gratuity, public provident fund, retirement. And on the other hand, the government gives breaks on investment in capital.

So the playing field is not level for each one of us in the labor market competing against AI. I'm all for AI. It's going to be a productivity-enhancing, augmenting technology. But for it not to turn into an automation and human-replacement thing, we need to dial down the speed, and we need to make sure the policy infrastructure keeps up with it.

Sanjiv Bikhchandani

Look, AI is now relentless. The genie's out of the bottle; you can't dial it down. It's not going to slow down just because somebody said so. It's going to happen. Now, you can either do it or have it done to you. And what I tell people, individuals, is: listen, worry about your own job, not about jobs in the system at the national level. Is your job safe, and what can you do to make sure your job is safe? Or to get a job, if you're a student. And I go back to 1989. I had just finished business school, having worked for three years before it, and I joined a consumer company, working in the marketing department.

And yes, as Iqbal says, there were two computers and 15 people sharing them. Now, the thing was that I was the only guy who was PC-literate, because I was the most recent graduate, the youngest. I had used computers in business school; the others had not. They were senior to me, they were my bosses, they were paid more than me, and they had more power than me. But they couldn't use a PC. I could. If somebody was getting sacked in that department, I was the last guy getting sacked, because I was the only guy who could use a PC, at least for the first few months. My point is simple: AI platforms are easy to use and easy to learn.

I'm saying that if you are a person in your company or your department, or even a student, who knows how to use seven or eight or ten AI platforms, believe me, you're highly employable. Because not everybody will learn it. If you learn it and are good at it, you'll be okay. So it's in your hands to protect your employment and your employability. Just learn AI.

Ronnie Chatterji

This is one of the things people say: your job is more likely to be taken by someone who knows AI better than you than by AI itself. Roopa, inside an organization, how do you help power users who are using it a lot, like the example he gave of being the PC user when no one else was, diffuse their best practices and their learning to other folks? We see this in our data: there's a big spread between the power users and the median users in most organizations. We call it capability overhang. Do you see that at Tata, and how do you think about solving these kinds of issues to help more people learn how to use AI?

Roopa Purushothaman

I mean, I think for us, even working across the group and its different companies, it's something that takes a lot more collaboration. What we're working on are platforms for speaking to each other about best practices, what works and what doesn't. So when we learn, for example, that a lot of our manufacturing businesses are using AI for safety on the shop floor, how could we use those best practices in other parts of our companies? Even just among ourselves and across companies, can we share what's working and what's not? And we have companies, like TCS, that see a broad cross-section of sectors and what's worked.

So you learn that in sectors like life sciences, you see huge changes in drug discovery. But across all sectors, you see things like customer service and marketing really being changed by AI. So I think at this stage, having those conversations about what seems to be working on the ground is the most important thing, as we go through the very difficult process of taking legacy systems, sort of lumbering systems, and trying to get data sitting in very different silos to even start talking to each other. We acknowledge that that process is still going to take a lot of time, but we can already see these new cases where it's actually taking hold.

Ronnie Chatterji

I just want to get Iqbal in here for a second, and then I'll get back. Iqbal, what parallels are there to the development literature, where we've found ways in education to teach people how to do new things, teach them how to start businesses, teach them sets of skills? Can we teach people AI?

Can global institutions teach people AI? Can J-PAL do work in that area? Because it seems like an analog to working with people inside enterprises, but maybe a different challenge.

Iqbal Dhaliwal

Yeah, great question. I think we can. The question is, can we do it correctly, and who benefits from that teaching? Let me give you an example. We are well aware of the literature in business process outsourcing, where once you provided AI tools to the lowest-performing call center employees, they leveled up, up to the level of the higher-skilled workers. A fantastic example of leveling of skills, which is a win-win. We did a study in Kenya where we provided AI tools, ChatGPT actually, to micro-entrepreneurs. Don't think of these as sophisticated users; think of grocery stores, a neighborhood stationery shop. And what we see is that the average treatment effects, this was in the early days of ChatGPT, were zero.

So then you dig deeper into the data, and you actually see something super interesting. The top entrepreneurs, the ones who were already performing well, take ChatGPT and do really well, because they run with it; they understand how to use it and what to do with the recommendations that come out of it. On the other hand, for those who were lower-performing earlier, ChatGPT will give recommendations. They will say, oh, nobody's coming; the demand for my product is low. It goes to Econ 101 and says, maybe you want to think about lowering your prices. Maybe you want to think about increasing your marketing budget.

But then: how much should I lower the price? Should I lower it today? Should I lower it just for Diwali? You don't know. So I think you raise a really important question. There are going to be some folks who take these tools and run with them, and some folks who are going to need a little bit of hand-holding. And I think you're absolutely right: we can do a much better job of helping them integrate these tools. This can be through the old models of teaching. But I actually think the tool is now so powerful that it can teach people these nuances itself.

Ronnie Chatterji

And we're releasing products to do that. We have a new jobs and certification platform coming that's connected to this. Sanjeev, thanks for being patient.

Sanjiv Bikhchandani

I want to give a couple of examples of real things in our office. We also invest in startups; we've invested in about 130 to 140 startups now. Every month, every quarter, the MIS comes in from these startups. Now, we've got very smart people on the investing team and the portfolio management team. They're all MBAs and CAs and so on, so they know their stuff, right? But you just put it into ChatGPT: you first do the analysis yourself, and ChatGPT supplements it. Then you ask, okay, have I missed something? Is there a perspective here? It's helping them do their jobs better, catching stuff they might have missed, number one. So it's enhancing productivity.

And you can do many more analyses, because you can ask ChatGPT 100 questions, when you might only ask yourself 10, right? The second example: our marketing team at Jeevansathi, a matrimony site, decided around Father's Day to produce a film on the father-daughter relationship, what my father means to me, and so on. Under normal circumstances, this film would not have been made; it would have taken six weeks and 60 lakhs, so it would not have been done, and life would have gone on. But it took about two days, using AI, for novices who had never used the platform before, and now they say they can do it in three hours, to make a film purely with AI and put it out on digital media. And it worked.

It was a big hit. This is stuff that was not being done before. Another example: there's a surfeit of content. There are so many podcasts, so many interviews; this panel will also go to YouTube. I can't follow all of them. I'd love to, but I can't. So I just get a summary of a video from AI; I can do it in about three minutes. So I'm doing stuff I would not have done otherwise, and I would not have employed somebody; nobody's lost their jobs. Another example: Naukri has about 130,000 to 150,000 clients. For the top 20%, there's a sales team that calls on them. For the next 30%, there's a tele-sales team that calls on them.

The bottom 50% you don't interface with, because they don't pay you enough; the sales channel is not worth it. Now we've got voice bots calling the bottom 50%. So we are serving an underserved market, and nobody's lost a job yet. Now, I'm not saying it won't happen. Maybe it will; I don't know. But thus far it has not happened, and life is going on, and every quarter Naukri is still growing. We are worried, we are concerned, we are apprehensive, given the noise in the system about job losses. But it hasn't happened yet. We're taking it quarter by quarter, keeping our fingers crossed and hoping and praying it doesn't happen. I don't have the answers.

Ronnie Chatterji

No, no. Look, none of us have the answers, I think. And I'll close the panel on this; I did promise the team I would end on time. I could talk to these guys for about three hours, just to let you know. So none of us have the answers, right? At the end of the day, from our vantage points, we're trying to solve these questions. Sanjiv, you did a great job, as an investor who's building things, of explaining what you think is happening. And the story of making the movie, I think, is a good example of how you can do things you never did before with AI. Roopa, you're in a large conglomerate with lots of different businesses and lots of exciting things going on.

How AI diffuses across the organization, I think that’s something all of us should watch. It’s not easy for large organizations to adopt AI and implement it, and the ones that do it, I think, are going to be advantaged. And Iqbal, I think you leave us all something to think about, which is if we’re going to educate the world on AI, if we’re going to democratize AI, we’ve got to make sure we do it well. We can’t just talk about it. And I hope in all these cases, enterprise adoption, learning and teaching AI, and helping the cutting edge, that OpenAI can be your partner. So with that, I want to thank our amazing panelists and thank everyone in the audience.

I've got one last thing. We're done. I'm so sorry, but you've got to hear this. They're giving me this one: I've got colleagues coming and negotiating salary with me. "I've checked on GPT; I'm paid 40% less than I should be." And he's saving money on salaries too, and getting some more time. I love it. Thank you. Thank you.

Kavita Gunjikannan

Thank you so much, Ronnie. Thanks, Sanjeev. Thanks, Rupa. And thanks, Iqbal. We do have more sessions coming up, so I'd request everyone to stay back until we complete this session. I'm Kavita Gunjikannan from the Global Affairs team at OpenAI. We want to take a moment to celebrate a few education partnerships that we announced just yesterday.

Related Resources: Knowledge base sources related to the discussion topics (31)
Factual Notes: Claims verified against the Diplo knowledge base (5)
Confirmed (high confidence)

“Algorithms now deciding who receives public services, who qualifies for a loan and who is placed under surveillance are shaping “the information environment of democracy itself”.”

The knowledge base states that those who design, train and deploy such systems will influence the information environment of democracy itself [S1] and [S107].

Confirmed (high confidence)

“The issue must be framed as democratic governance, with trade‑offs between innovation, safety, efficiency, equity, profit and the public interest to be debated openly and transparently.”

Source S2 explicitly frames AI as a matter of democratic governance and lists the same set of trade‑offs between innovation, safety, efficiency, equity, profit and the public interest.

Additional Context (medium confidence)

“At the first inter‑parliamentary conference on responsible AI in Malaysia, parliamentarians demanded “red lines”, an equal voice for the Global South and active parliamentary engagement at every level.”

The knowledge base confirms that an inter-parliamentary conference on responsible AI was held in Malaysia and that members of parliament raised cases there, but it does not detail the specific “red lines” or Global-South demands [S1].

Confirmed (high confidence)

“Democracy cannot be automated; it must be shaped through open debate, transparent law‑making and fair enforcement.”

Both S5 and S13 assert that democracy cannot be automated and must be shaped by democratic institutions, open debate, transparent laws and fair enforcement.

Additional Context (medium confidence)

“Parliaments are the venue where the real‑world impact of AI meets political accountability, allowing citizens, workers and community members to bring lived experience into policy debates.”

Source S5 emphasizes that AI governance should occur through democratic institutions, open debate and transparent law‑making, providing contextual support for the role of parliaments in accountability.

External Sources (107)
S1
AI for Democracy_ Reimagining Governance in the Age of Intelligence — -Om Birla: Speaker of Parliament of India (Lok Sabha) – expertise in parliamentary procedures and democratic governance …
S2
Impact & the Role of AI How Artificial Intelligence Is Changing Everything — -Om Birla- Speaker of Parliament of India (Lok Sabha)
S3
S4
OpenAI economist shares four key skills for kids in AI era — As AI reshapes jobs and daily life, OpenAI's chief economist, Ronnie Chatterji, teaches his children four core skills to h…
S5
Impact & the Role of AI How Artificial Intelligence Is Changing Everything — The discussion revealed nuanced perspectives on AI’s employment effects. Sanjiv Bikhchandani, founder of InfoEdge and op…
S6
Impact & the Role of AI How Artificial Intelligence Is Changing Everything — -Sanjiv Bikhchandani- Founder of InfoEdge (Naukri.com)
S7
Impact & the Role of AI How Artificial Intelligence Is Changing Everything — “I’m Kavita Gunjikannan from the Global Affairs team at OpenAI.”[85]. “We want to take a moment to celebrate a few educa…
S8
Impact & the Role of AI How Artificial Intelligence Is Changing Everything — 57 words | 156 words per minute | Duration: 21 seconds. Thank you so much, Rani. Thanks, Sanjeev. Thanks, Rupa. And tha…
S9
High-Level Dialogue: The role of parliaments in shaping our digital future — – **Doreen Bogdan-Martin** – Role/Title: Secretary-General of ITU (International Telecommunication Union) – **Martin Ch…
S10
World e-parliament report 2018 — ## World e-Parliament Report 2018 © Inter-Parliamentary Union, 2018 For personal and non-commercial use, all or parts…
S11
IGF Parliamentary track — – Martin Chungong: Secretary General of Inter-Parliamentary Union (IPU)
S12
AI for Democracy_ Reimagining Governance in the Age of Intelligence — – Dr. Chinmay Pandya- Martin Chunggong
S13
AI for Democracy_ Reimagining Governance in the Age of Intelligence — Speakers:Dr. Chinmay Pandya, Martin Chunggong Speakers:Dr. Chinmay Pandya, Mr. Lazos Olahaji, Martin Chunggong Speaker…
S14
S15
Impact & the Role of AI How Artificial Intelligence Is Changing Everything — Anmol Garg emphasizes OpenAI’s commitment to collaborative development and partnership with India. He highlights the com…
S16
The Global Power Shift India’s Rise in AI & Semiconductors — Thank you. Thank you. across CPUs, GPUs, SoCs, and AI engines that power cutting-edge compute systems worldwide. She br…
S17
Impact & the Role of AI How Artificial Intelligence Is Changing Everything — -Roopa Purushothaman- Chief Economist and Head of Policy Advocacy at Tata Sons
S18
New Development Actors for the 21st Century / DAVOS 2025 — – Iqbal Dhaliwal – Global Director of J-PAL at MIT
S19
Impact & the Role of AI How Artificial Intelligence Is Changing Everything — – Iqbal Dhaliwal- Ronnie Chatterji – Iqbal Dhaliwal- Sanjiv Bikhchandani
S20
Impact & the Role of AI How Artificial Intelligence Is Changing Everything — -Lord Krish Ravel- Member of House of Lords, devout member of the Gayatri Parivar
S22
Impact & the Role of AI How Artificial Intelligence Is Changing Everything — -Dr. Fadi Dao- Chairman of Globe Ethics (organization based in Geneva)
S23
Inclusive AI For A Better World, Through Cross-Cultural And Multi-Generational Dialogue — Diana Nyakundi:Thanks, Fadi. Good morning, everyone. I am Diana Nyakundi. I am based in Nairobi, Kenya. I work as a seni…
S24
Laying the foundations for AI governance — Despite AI and internet technologies being designed to decentralize power, Papandreou observes that power has actually b…
S25
Balancing innovation and oversight: AI’s future requires shared governance — At IGF 2024, day two in Riyadh, policymakers, tech experts, and corporate leaders discussed one of the most pressing dil…
S26
WS #100 Integrating the Global South in Global AI Governance — Key issues highlighted included the technology gap between developed and developing nations, regulatory uncertainty in m…
S27
Main Session 2: The governance of artificial intelligence — Multi-stakeholder participation must be meaningful and inclusive, particularly bringing voices from the global south and…
S28
What is it about AI that we need to regulate? — What next for the Global Dialogue on AI Governance?The Global Dialogue on AI Governance is currently under development w…
S29
AI Governance Dialogue: Steering the future of AI — Development | Legal and regulatory Martin argues that the United Nations’ universal membership and convening power make…
S30
Leaders’ Plenary | Global Vision for AI Impact and Governance Morning Session Part 1 — Thank you for inviting me to this important summit. It is an honor to be here in India at this pivotal moment for global…
S31
Parliamentary Track Roundtable: A powerful collective force for change: Parliamentarians for a prosperous global digital future — Maria Ilago: Thank you very much for the opportunity and I would like to thank the IGF organizers for inviting me. I fe…
S32
AI for Social Empowerment_ Driving Change and Inclusion — so just a little tongue-in-cheek we go back to the 1600s we’d asked chat GPT then if Galileo was correct it would have…
S33
Opening address of the co-chairs of the AI Governance Dialogue — – Majed Sultan Al Mesmar Inclusive international cooperation and multi-stakeholder approach Legal and regulatory | Dev…
S34
Launch of the Global Dialogue on AI Governance — Following the opening, the plenary segment will provide a platform for member states, observers, UN entities, and other …
S35
Global dialogue on AI governance highlights the need for an inclusive, coordinated international approach — Global AI governance was the focus of a high-levelforumat the IGF 2024 in Riyadhthat brought together leaders from gover…
S36
Labour market remains stable despite rapid AI adoption — Surveys show persistent anxiety aboutAI-driven job losses. Nearly three years after ChatGPT’s launch, labour data indica…
S37
Part 5: Rethinking legal governance in the metaverse — AI development presents unique challenges for planning, lead times, and expansion. Its rapid evolution makes it difficul…
S38
WS #288 An AI Policy Research Roadmap for Evidence-Based AI Policy — The level of disagreement is moderate but significant for implementation. While speakers share fundamental goals of resp…
S39
How the EU’s GPAI Code Shapes Safe and Trustworthy AI Governance India AI Impact Summit 2026 — Jason argues that technology by its very nature seeks to transcend national boundaries and connect people worldwide. Thi…
S40
Keynote Addresses at India AI Impact Summit 2026 — Undersecretary Jacob Helberg framed Pax Silica as a declaration against weaponized economic dependency, drawing parallel…
S41
Transforming Rural Governance Through AI: India’s Journey Towards Inclusive Digital Democracy — There is strong consensus among all speakers on the fundamental principles of inclusive AI governance: the critical impo…
S42
The Future of Innovation and Entrepreneurship in the AI Era: A World Economic Forum Panel Discussion — Yes, some jobs get lost, many new jobs get created. And history tells us, eventually, productivity, prosperity increases…
S43
Comprehensive Report: Preventing Jobless Growth in the Age of AI — And I think a lot of those reasons is that to get the full benefit of AI, it’s not about an AI applied to a task, but it…
S44
Comprehensive Discussion Report: The Future of Artificial General Intelligence — Beddoes references the historical economic argument against the ‘lump of labor fallacy,’ suggesting that technological a…
S45
The open-source gambit: How America plans to outpace AI rivals by democratising tech — A “worker-first AI agenda” is the key social pillar of the Plan. The focus is on helping workers reskill and build capac…
S46
Panel Discussion Summary: AI Governance Implementation and Capacity Building in Government — The scale of required training is enormous – France needs to train hundreds of thousands of civil servants across differ…
S47
Responsible AI for Children Safe Playful and Empowering Learning — Teacher Support, Capacity Building, and Implementation in Diverse Contexts
S48
Policymaker’s Guide to International AI Safety Coordination — This comment crystallizes the fundamental tension at the heart of AI governance – the misalignment between market incent…
S49
Hard power of AI — In conclusion, the analysis provides insights into the dynamic relationship between technology, politics, and AI. It hig…
S50
Comprehensive Discussion Report: The Future of Artificial General Intelligence — This comment provided a clear framework for understanding why AI development can’t simply be slowed down for safety reas…
S51
AI: The Great Equaliser? — While the introduction of AI technology may result in job losses in certain sectors, it also creates new job opportuniti…
S52
AI Impact Summit 2026: Global Ministerial Discussions on Inclusive AI Development — And this requires proactive and coherent policy responses. First, people must be at the center of AI strategy, as we hea…
S53
Empowering Workers in the Age of AI — Individual upskilling must be complemented by institutional capacity development and organizational change management
S54
AI for Social Empowerment_ Driving Change and Inclusion — This discussion focused on the impact of artificial intelligence on labor markets and employment, featuring perspectives…
S55
AI for Social Empowerment_ Driving Change and Inclusion — This discussion focused on the impact of artificial intelligence on labor markets and employment, featuring perspectives…
S56
Comprehensive Report: AI’s Impact on the Future of Work – Davos 2026 Panel Discussion — Bhan argues that AI’s impact on jobs cannot be viewed in isolation but must be considered alongside broader economic dis…
S57
AI for Democracy_ Reimagining Governance in the Age of Intelligence — An increasing number of fabricated yet convincing videos will circulate while genuine political scandals will be dismiss…
S58
AI for Democracy_ Reimagining Governance in the Age of Intelligence — Summary:The main areas of disagreement centered on governance mechanisms (binding vs. voluntary frameworks), institution…
S59
Zurich researchers link AI with spirituality studies — Researchers at the University of Zurich have received a Postdoc Team Award for SpiritRAG, an AI system designed to analyse…
S60
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Fireside Chat Moderator- Mariano-Florentino Cuellar — This comment challenges a fundamental assumption in AI policy discussions – that regulation is the primary tool for mana…
S61
Why science matters in global AI governance — The panel discussion explored practical challenges in the science-policy interface, with experts from India, France, WHO…
S62
How the EU’s GPAI Code Shapes Safe and Trustworthy AI Governance India AI Impact Summit 2026 — Jason emphasizes the importance of including Global South perspectives in AI governance discussions, noting that this su…
S63
WS #82 A Global South perspective on AI governance — 2. Asia: Jenny Domino described a more cautious “wait-and-see” approach in Asia, with most countries opting for soft law…
S64
Main Session 2: The governance of artificial intelligence — Multi-stakeholder participation must be meaningful and inclusive, particularly bringing voices from the global south and…
S65
S66
Panel Discussion: Europe’s AI Governance Strategy in the Face of Global Competition — This comment provides a sophisticated counter-argument to Brunner’s ‘policy is dead’ thesis by proposing that openness a…
S67
Global dialogue on AI governance highlights the need for an inclusive, coordinated international approach — Global AI governance was the focus of a high-level forum at the IGF 2024 in Riyadh that brought together leaders from gover…
S68
Policymaker’s Guide to International AI Safety Coordination — In terms of what is the key to success, what is the most important lesson on looking back on what we need, trust is buil…
S69
Shaping the Future AI Strategies for Jobs and Economic Development — The discussion maintained an optimistic yet pragmatic tone throughout. While acknowledging significant challenges around…
S70
How to make AI governance fit for purpose? — Legal and regulatory | Development The speed of AI development creates uncertainty and challenges that exceed current c…
S71
Indias AI Leap Policy to Practice with AIP2 — Explanation:This unexpected disagreement emerges around the pace of AI deployment. Fred emphasizes the dual nature of AI…
S72
Impact & the Role of AI How Artificial Intelligence Is Changing Everything — That is why we must frame this not simply as technology policy, but as democratic governance. The choices made today abo…
S73
A Global Human Rights Approach to Responsible AI Governance | IGF 2023 WS #288 — Speaker:It’s a hard question and also for the invitation to be part of this panel, I’m very glad to be here. I’m Vladimi…
S74
Impact & the Role of AI How Artificial Intelligence Is Changing Everything — We make systems and making decisions about who receives public services, who qualifies for a loan, or who is flagged for…
S75
Impact & the Role of AI How Artificial Intelligence Is Changing Everything — We make systems and making decisions about who receives public services, who qualifies for a loan, or who is flagged for…
S76
AI for Democracy_ Reimagining Governance in the Age of Intelligence — “AI has got capacity to amplify the misinformation.”[45]. “It has got a power to deepen the polarization.”[50]. “It has …
S77
Laying the foundations for AI governance — Despite AI and internet technologies being designed to decentralize power, Papandreou observes that power has actually b…
S78
AI Governance Dialogue: Steering the future of AI — The discussion aims to advocate for comprehensive, inclusive AI governance that ensures the benefits of AI are shared gl…
S79
Open Forum #33 Building an International AI Cooperation Ecosystem — Klauweiter argues that since AI governance is a global problem affecting all countries, only the United Nations can prov…
S80
What is it about AI that we need to regulate? — Multiple speakers emphasized that technological challenges transcend national borders and require coordinated internatio…
S81
AI for Democracy_ Reimagining Governance in the Age of Intelligence — Chunggong acknowledges the significant positive potential of AI for social good, including improvements in healthcare de…
S82
How the EU’s GPAI Code Shapes Safe and Trustworthy AI Governance India AI Impact Summit 2026 — Jason argues that technology by its very nature seeks to transcend national boundaries and connect people worldwide. Thi…
S83
Transforming Rural Governance Through AI: India’s Journey Towards Inclusive Digital Democracy — There is strong consensus among all speakers on the fundamental principles of inclusive AI governance: the critical impo…
S84
The Future of Innovation and Entrepreneurship in the AI Era: A World Economic Forum Panel Discussion — And so out of a graduating class of 100, you’d have 30 job creators or people that have created jobs for 30 people and 7…
S85
Comprehensive Report: Preventing Jobless Growth in the Age of AI — And I think a lot of those reasons is that to get the full benefit of AI, it’s not about an AI applied to a task, but it…
S86
Impact of the Rise of Generative AI on Developing Countries | IGF 2023 Town Hall #29 — Sarayu Natarajan:Thank you very much and thank you for that. I largely agree. I think that not just because of the sever…
S87
A Digital Future for All (afternoon sessions) — There is a need to build AI capacity in developing countries to ensure they can participate in and benefit from AI advan…
S88
The open-source gambit: How America plans to outpace AI rivals by democratising tech — A “worker-first AI agenda” is the key social pillar of the Plan. The focus is on helping workers reskill and build capac…
S89
Democratizing AI Building Trustworthy Systems for Everyone — “of course see there would be a number of challenges but i think as i mentioned that one doesn’t need to really control …
S90
Democratizing AI: Open foundations and shared resources for global impact — Educational Initiatives and Capacity Building
S91
Comprehensive Report: China’s AI Plus Economy Initiative – A Strategic Discussion on Artificial Intelligence Development and Implementation — This comment expanded the education discussion beyond formal systems to include organic, curiosity-driven learning. It r…
S92
Opening Ceremony — The tone is consistently formal, diplomatic, and optimistic yet cautionary. Speakers maintain a celebratory atmosphere a…
S93
Parliamentary Roundtable Safeguarding Democracy in the Digital Age Legislative Priorities and Policy Pathways — The discussion maintained a serious but collaborative tone throughout. It began with formal opening remarks emphasizing …
S94
9821st meeting — But risks are equally huge. This rapid growth is outpacing our ability to govern it, raising fundamental questions about…
S95
Panel Discussion AI in Digital Public Infrastructure (DPI) India AI Impact Summit — The tone was consistently optimistic and forward-looking throughout the conversation. Speakers expressed excitement abou…
S96
Driving Indias AI Future Growth Innovation and Impact — The discussion maintained an optimistic and forward-looking tone throughout, characterized by enthusiasm for India’s AI …
S97
Empowering India & the Global South Through AI Literacy — *Note: The transcript contains several sections with audio quality issues and repeated phrases, particularly in some pan…
S98
Empowering India & the Global South Through AI Literacy — The discussion maintained an optimistic and collaborative tone throughout, with panelists sharing positive field experie…
S99
Panel Discussion AI & Cybersecurity _ India AI Impact Summit — The discussion maintained a consistently optimistic and collaborative tone throughout. Speakers expressed enthusiasm and…
S100
Panel Discussion Data Sovereignty India AI Impact Summit — The tone was collaborative and pragmatic throughout, with panelists sharing real-world experiences and solutions rather …
S101
Indias Roadmap to an AGI-Enabled Future — The discussion maintained an optimistic and ambitious tone throughout, with speakers expressing confidence in India’s ab…
S102
WS #278 Digital Solidarity & Rights-Based Capacity Building — The overall tone was collaborative and solution-oriented, with panelists offering constructive ideas and acknowledging c…
S103
WS #231 Address Digital Funding Gaps in the Developing World — The discussion maintained a professional yet candid tone throughout, with speakers demonstrating both concern about curr…
S104
Secure Finance Risk-Based AI Policy for the Banking Sector — The discussion maintained a thoughtful, forward-looking tone throughout, characterized by cautious optimism about AI’s p…
S105
https://dig.watch/event/india-ai-impact-summit-2026/ai-for-democracy_-reimagining-governance-in-the-age-of-intelligence — I say this because the theme of this session, AI for Democracy, cuts to the heart of the matter. We are not simply debat…
S106
Global Perspectives on Openness and Trust in AI — Yeah, exactly. Anne, I want to move to you. As the French president’s special envoy for the AI Action Summit, you’ve bee…
S107
https://dig.watch/event/india-ai-impact-summit-2026/impact-the-role-of-ai-how-artificial-intelligence-is-changing-everything — We make systems and making decisions about who receives public services, who qualifies for a loan, or who is flagged for…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
M
Martin Chungong
4 arguments · 96 words per minute · 936 words · 584 seconds
Argument 1
Concentration of AI power threatens democratic institutions
EXPLANATION
Martin warns that a small number of technology corporations are amassing market power that exceeds whole national economies, while the costs of AI are borne by the most vulnerable. This concentration endangers the democratic social contract by allowing a few actors to control systems that affect daily life without public oversight.
EVIDENCE
He cites the rapid accumulation of power in the hands of a handful of corporations whose market capitalizations exceed entire equity markets of major industrialized nations, while workers in the global south receive low wages for data annotation [7-9]. He also references an automated traffic management system in Amsterdam that routed congestion through low-income neighborhoods because the algorithm learned those communities lacked political influence, illustrating how concentrated AI can perpetuate harms [4-6].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
External analyses note that a handful of tech firms hold market capitalisations larger than whole national economies and that this concentration endangers democratic oversight and the social contract [S1][S2][S24][S5][S13].
MAJOR DISCUSSION POINT
Risk of AI power concentration to democracy
DISAGREED WITH
Om Birla
Argument 2
Parliamentary oversight and transparent debate are needed to regulate AI
EXPLANATION
Martin argues that democratic societies must debate AI trade‑offs openly, decide transparently, and hold actors accountable. Parliaments should lead the formulation of rules that balance innovation with safety, equity, and the public interest.
EVIDENCE
He outlines the need for open debate on trade-offs between innovation and safety, efficiency and equity, profit and public interest, and stresses that in a healthy democracy these decisions are debated openly and transparently [15-16]. He notes that the parliamentary community in Malaysia rejected concentration of power and called for red lines and equal voice for the Global South, underscoring the role of legislatures in AI governance [17-21].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Calls for open parliamentary debate, transparent law-making and accountability in AI governance are echoed in discussions about democratic oversight of AI systems [S5][S13].
MAJOR DISCUSSION POINT
Need for democratic oversight of AI
AGREED WITH
Iqbal Dhaliwal
DISAGREED WITH
Om Birla
Argument 3
Global AI governance must be inclusive, involve the Global South, and avoid fragmentation
EXPLANATION
Martin stresses that AI is a border‑less challenge and that fragmented, non‑binding governance risks being overtaken by geopolitical competition. Inclusive participation, especially of Global South nations, is essential to prevent a divided governance landscape.
EVIDENCE
He describes AI as a truly global challenge whose effects transcend borders and notes that current international AI governance is fragmented and short on binding commitments, with geopolitical competition threatening further fracturing [30-38].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Multiple reports stress the need for inclusive, binding global AI governance that brings in Global South voices to prevent a fragmented regulatory landscape [S26][S27][S28][S29][S30][S33][S34][S35].
MAJOR DISCUSSION POINT
Inclusive global AI governance
AGREED WITH
Dr. Fadi Dao, Dr. Chinmay Pandeya
Argument 4
Summits provide a venue for collective, participatory AI governance commitments
EXPLANATION
Martin asserts that summits like the current one should embody inclusive, participatory approaches and serve as platforms where parliaments can align domestic legislation with emerging international standards, ensuring accountability for commitments made.
EVIDENCE
He states that the summit must embody the inclusive participatory approach demanded by equitable AI governance and that parliaments are pivotal for coherence between national laws, human rights, and international standards, highlighting the summit’s role in fostering collective commitments [38-41].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Summit plenaries and global dialogues are described as key platforms for collective AI governance commitments and coordination among nations [S30][S33][S34][S35].
MAJOR DISCUSSION POINT
Summits as hubs for AI governance
O
Om Birla
2 arguments · 113 words per minute · 1924 words · 1016 seconds
Argument 1
AI should reflect Vedic and spiritual values to serve society
EXPLANATION
Om emphasizes that AI development should be guided by India’s Vedic and spiritual heritage, ensuring technology serves moral and cultural values rather than purely commercial interests.
EVIDENCE
He repeatedly references the Vedic and spiritual values taught at Dev Sanskriti Vidyalaya, the integration of moral teachings with modern technology, and the intention to bring India’s spiritual culture to the world, stating that AI should be used in line with these traditions [64-68].
MAJOR DISCUSSION POINT
Cultural‑spiritual grounding of AI
DISAGREED WITH
Martin Chungong
Argument 2
Development of a unified digital parliamentary platform to increase transparency and citizen access
EXPLANATION
Om outlines a plan to create a single digital platform that aggregates all parliamentary proceedings, debates, and metadata, making them searchable and accessible to citizens, thereby enhancing transparency and democratic participation.
EVIDENCE
He describes the Digital Assembly that has digitised most parliamentary work, noting that by 2026 all debates and discussions will be on one platform, searchable via AI-driven metadata, which will increase public capacity to engage with legislation [100-108].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The proposal aligns with calls to digitise all parliamentary work into a single AI-driven searchable platform for greater public access and transparency [S5].
MAJOR DISCUSSION POINT
Digital parliamentary platform
AGREED WITH
Martin Chungong, Dr. Chinmay Pandeya
DISAGREED WITH
Martin Chungong
D
Dr. Fadi Dao
1 argument · 147 words per minute · 101 words · 41 seconds
Argument 1
AI as a tool for social empowerment and democratization of technology
EXPLANATION
Dr. Dao frames AI as a means to empower societies and promote inclusive participation, rather than a tool for manipulation or domination.
EVIDENCE
He states that the purpose of AI is social empowerment and participation of all people, and that the summit’s aim is AI democratization, not domination, reaffirming this commitment on behalf of Globethics [147-149].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Dao’s emphasis on AI democratization and empowerment of people is reflected in external commentary on inclusive AI participation and the need to avoid manipulation [S5][S13].
MAJOR DISCUSSION POINT
AI for social empowerment
AGREED WITH
Martin Chungong, Dr. Chinmay Pandeya
L
Lord Krish Ravel
1 argument · 126 words per minute · 115 words · 54 seconds
Argument 1
Adaptability to rapid AI change is essential to manage public uncertainty
EXPLANATION
Lord Ravel highlights that the speed of AI advancement can unsettle people, and a cultural adaptability to change is needed to contain public uncertainty and maintain confidence.
EVIDENCE
He notes that adaptability to change is a core tenet of his Gayatri Parivar upbringing, and that the rapid pace of AI can make people unsettled, requiring preparedness to manage uncertainty [154-158].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Analyses highlight the political imperative to contain public uncertainty and adapt to AI’s fast pace, underscoring the need for preparedness [S5][S2].
MAJOR DISCUSSION POINT
Managing uncertainty through adaptability
D
Dr. Chinmay Pandeya
1 argument · 27 words per minute · 509 words · 1121 seconds
Argument 1
Facilitating inclusive, multi‑stakeholder dialogue on AI
EXPLANATION
Dr. Pandeya positions herself as a moderator who brings together diverse voices—from parliamentarians to ethicists—to ensure AI discussions are inclusive and multi‑stakeholder.
EVIDENCE
She opens the session by thanking the speaker and introducing the chief guest, emphasizing the honor of bringing multiple perspectives together, and later invites Dr. Dao to share a concise view, underscoring her role in fostering dialogue [57-58] and [139-146].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Reports call for meaningful, inclusive multi-stakeholder participation, especially involving under-represented communities and the Global South, in AI governance processes [S27][S33].
MAJOR DISCUSSION POINT
Inclusive multi‑stakeholder AI dialogue
AGREED WITH
Martin Chungong, Dr. Fadi Dao
I
Iqbal Dhaliwal
1 argument · 196 words per minute · 1115 words · 341 seconds
Argument 1
The rapid pace of AI could outstrip labor‑market adjustment; policy must keep up
EXPLANATION
Iqbal warns that AI’s unprecedented speed, low cost, and multimodal capabilities may outpace labor‑market adaptation, and that policy frameworks need to be accelerated to mitigate disruption.
EVIDENCE
He describes AI’s low marginal cost, ubiquitous presence on smartphones, and multimodality, then argues for a “dial-down” of speed and stresses that policy infrastructure must keep pace, noting current biases toward capital investment over labor [287-310].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Dhaliwal’s warning about AI’s speed outpacing policy is supported by analyses of AI’s low marginal cost and rapid diffusion, while other studies note that labour markets have remained stable, indicating a nuanced view of policy lag [S5][S37][S36].
MAJOR DISCUSSION POINT
Policy lag behind AI speed
AGREED WITH
Roopa Purushothaman, Sanjiv Bikhchandani
S
Sanjiv Bikhchandani
3 arguments · 176 words per minute · 1223 words · 416 seconds
Argument 1
Historical evidence shows technology creates jobs; AI has not yet reduced employment
EXPLANATION
Sanjiv points to past technological disruptions that ultimately created more jobs than they destroyed, and observes that, so far, AI has not led to measurable job losses in his company.
EVIDENCE
He notes that there is no evidence of hiring decline despite AI concerns, cites the 1985 introduction of computers in Indian banks where jobs were not lost but productivity rose, and emphasizes that technology can create new opportunities [234-258].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Empirical surveys show that despite rapid AI adoption, there has been no measurable decline in employment, supporting the claim that technology can generate jobs rather than destroy them [S36].
MAJOR DISCUSSION POINT
Technology creates jobs, not destroys
AGREED WITH
Roopa Purushothaman
DISAGREED WITH
Iqbal Dhaliwal
Argument 2
AI enables new products, services, and productivity gains across sectors
EXPLANATION
Sanjiv illustrates how AI is being used to improve investment analysis, accelerate content creation, and automate customer outreach, leading to higher efficiency without job losses.
EVIDENCE
He describes using ChatGPT to supplement investment analysis, creating a film in two days that would otherwise have taken weeks and a large budget, summarising videos in minutes, and deploying voice bots to serve underserved customers, all of which boost productivity [393-424].
MAJOR DISCUSSION POINT
AI-driven productivity and new offerings
Argument 3
AI tools enhance investment analysis, content creation, and operational efficiency
EXPLANATION
Sanjiv provides concrete examples of AI augmenting his firm’s core functions, from investment research to marketing and customer service, demonstrating tangible efficiency gains.
EVIDENCE
He explains that ChatGPT helps the investment team spot missed perspectives, enables novices to produce a high-impact film in hours, and allows rapid summarisation of podcasts, thereby increasing output without additional staff [393-406].
MAJOR DISCUSSION POINT
AI as efficiency enhancer
AGREED WITH
Iqbal Dhaliwal, Roopa Purushothaman
R
Roopa Purushothaman
2 arguments · 192 words per minute · 842 words · 262 seconds
Argument 1
AI can augment scarce professionals in health, education, and finance, creating millions of new jobs
EXPLANATION
Roopa argues that AI can free up highly trained professionals (doctors, nurses, teachers) from routine tasks, enabling them to serve more people while spawning a new class of mediators and entrepreneurs, potentially creating tens of millions of jobs.
EVIDENCE
She cites shortages of doctors and nurses in India, notes that AI can shift 50% of doctors’ time to non-specialist work, creating a new workforce, and estimates about 30 million jobs across health, education, finance, and logistics, plus entrepreneurship opportunities [262-280].
MAJOR DISCUSSION POINT
AI‑driven job creation in scarce‑skill sectors
AGREED WITH
Sanjiv Bikhchandani
Argument 2
Internal platforms for sharing AI best practices accelerate diffusion across businesses
EXPLANATION
Roopa highlights the need for cross‑company platforms that allow sharing of AI successes and failures, facilitating faster adoption and scaling of effective practices throughout large conglomerates.
EVIDENCE
She describes internal platforms where manufacturing AI safety practices are shared with other units, and mentions broader collaboration across Tata companies to exchange lessons from sectors like life sciences and customer service [347-354].
MAJOR DISCUSSION POINT
Knowledge‑sharing platforms for AI diffusion
AGREED WITH
Iqbal Dhaliwal, Sanjiv Bikhchandani
R
Ronnie Chatterji
1 argument · 218 words per minute · 968 words · 266 seconds
Argument 1
Coordinating panel discussion to explore AI adoption challenges and solutions
EXPLANATION
Ronnie frames his role as moderator who structures the conversation, introduces panelists, poses probing questions, and synthesises insights to surface challenges and potential solutions around AI adoption.
EVIDENCE
He opens the panel by welcoming Iqbal, Roopa, and Sanjiv, outlines the agenda, asks targeted questions about data, education, and enterprise adoption, and later summarises the lack of definitive answers while highlighting key takeaways [189-199] and concluding remarks [443-470].
MAJOR DISCUSSION POINT
Panel as forum for AI adoption discourse
A
Anmol Garg
1 argument · 118 words per minute · 109 words · 54 seconds
Argument 1
OpenAI showcases its initiatives and enthusiasm for expanding AI in India
EXPLANATION
Anmol announces OpenAI’s presence at the event, highlights upcoming showcases from the chief economist and chief of global affairs, and expresses excitement about building AI capabilities together with India.
EVIDENCE
He outlines the lineup of OpenAI speakers, mentions work in education and social impact, and conveys the organization’s eagerness to continue building in India [179-187].
MAJOR DISCUSSION POINT
OpenAI’s India engagement
K
Kavita Gunjikannan
1 argument · 156 words per minute · 57 words · 21 seconds
Argument 1
Announcement of new education partnerships to foster AI learning
EXPLANATION
Kavita celebrates recent education partnerships announced by OpenAI, positioning them as part of the organization’s commitment to capacity building and AI education.
EVIDENCE
She thanks the audience, notes that OpenAI wants to celebrate a few education partnerships announced the previous day, and highlights the role of the Global Affairs team in these initiatives [471-476].
MAJOR DISCUSSION POINT
Education partnerships for AI learning
Agreements
Agreement Points
Democratic/parliamentary oversight and transparency are essential for AI governance
Speakers: Martin Chungong, Om Birla, Dr. Chinmay Pandeya
Parliamentary oversight and transparent debate are needed to regulate AI · Development of a unified digital parliamentary platform to increase transparency and citizen access · Facilitating inclusive, multi‑stakeholder dialogue on AI
All three speakers stress that AI must be governed openly through parliamentary debate, digital platforms that make proceedings searchable, and inclusive multi-stakeholder dialogue to ensure accountability and public participation [15-16][17-21][100-108][57-58][139-146].
POLICY CONTEXT (KNOWLEDGE BASE)
This view aligns with calls for parliamentary cooperation and transparency in AI policy, as highlighted in the Opening Plenary on Democratic Digital Governance and EU strategy discussions emphasizing openness as a practical necessity [S65][S66][S57].
AI governance should be inclusive and democratized, involving the Global South and diverse stakeholders
Speakers: Martin Chungong, Dr. Fadi Dao, Dr. Chinmay Pandeya
Global AI governance must be inclusive, involve the Global South, and avoid fragmentation · AI as a tool for social empowerment and democratization of technology · Facilitating inclusive, multi‑stakeholder dialogue on AI
Martin calls for inclusive global governance, Dr. Dao frames AI as a tool for social empowerment and democratization, and Dr. Pandeya emphasizes bringing together diverse voices, showing a shared commitment to inclusive participation [30-38][147-149][57-58][139-146].
POLICY CONTEXT (KNOWLEDGE BASE)
Inclusive, multi-stakeholder governance featuring Global South voices has been a recurring theme at recent summits and IGF sessions, underscoring its importance for equitable AI frameworks [S62][S64][S67][S68].
AI is expected to create new jobs and augment scarce professional resources rather than cause net job loss
Speakers: Sanjiv Bikhchandani, Roopa Purushothaman
Historical evidence shows technology creates jobs; AI has not yet reduced employment · AI can augment scarce professionals in health, education, and finance, creating millions of new jobs
Sanjiv points to historical examples and current AI-driven productivity gains showing no job loss, while Roopa highlights AI’s potential to free up professionals and generate tens of millions of jobs in underserved sectors [234-258][393-424][262-280].
POLICY CONTEXT (KNOWLEDGE BASE)
Analyses predict net job creation and stress the need for workforce skilling, positioning AI as a job augmentor rather than a destroyer [S51][S52][S53].
Policy and regulatory frameworks must keep pace with the rapid development of AI
Speakers: Martin Chungong, Iqbal Dhaliwal
Parliamentary oversight and transparent debate are needed to regulate AI · The rapid pace of AI could outstrip labor‑market adjustment; policy must keep up
Both speakers argue that existing democratic institutions and policy mechanisms need to evolve quickly to match AI’s speed and prevent labour market disruption [15-16][17-21][287-310].
POLICY CONTEXT (KNOWLEDGE BASE)
The structural mismatch between fast-moving AI markets and slower policy cycles is documented in several policy guides, highlighting the urgency of aligning regulation with technological speed [S48][S49][S70].
Building capacity and sharing best practices are crucial for effective AI adoption
Speakers: Iqbal Dhaliwal, Roopa Purushothaman, Sanjeev Bikhchandani
The rapid pace of AI could outstrip labor‑market adjustment; policy must keep up
Internal platforms for sharing AI best practices accelerate diffusion across businesses
AI tools enhance investment analysis, content creation, and operational efficiency
Iqbal warns about policy lag, Roopa proposes internal knowledge-sharing platforms, and Sanjeev describes how upskilling and AI tools boost productivity, all underscoring the need for capacity development and best-practice diffusion [287-310][347-354][335-338][393-424].
POLICY CONTEXT (KNOWLEDGE BASE)
Capacity-building, skills investment and the exchange of best practices are repeatedly cited as essential components of successful AI strategies in ministerial discussions and capacity-development reports [S52][S53][S61][S69].
Similar Viewpoints
Both see AI governance as a means to empower societies and must be inclusive, avoiding domination by a few actors [30-38][147-149].
Speakers: Martin Chungong, Dr. Fadi Dao
Global AI governance must be inclusive, involve the Global South, and avoid fragmentation
AI as a tool for social empowerment and democratization of technology
Both stress the central role of parliaments and digital transparency in shaping AI policy [15-16][17-21][100-108].
Speakers: Martin Chungong, Om Birla
Parliamentary oversight and transparent debate are needed to regulate AI
Development of a unified digital parliamentary platform to increase transparency and citizen access
Both argue that AI will generate employment opportunities rather than cause widespread job loss [234-258][262-280].
Speakers: Sanjeev Bikhchandani, Roopa Purushothaman
Historical evidence shows technology creates jobs; AI has not yet reduced employment
AI can augment scarce professionals in health, education, and finance, creating millions of new jobs
Both highlight personal adaptability and upskilling as key to thriving amid rapid AI change [154-158][335-338].
Speakers: Lord Krish Ravel, Sanjeev Bikhchandani
Adaptability to rapid AI change is essential to manage public uncertainty
AI tools enhance investment analysis, content creation, and operational efficiency
Unexpected Consensus
AI should be guided by moral, cultural, and spiritual values as well as democratic principles
Speakers: Om Birla, Martin Chungong
AI should reflect Vedic and spiritual values to serve society
Parliamentary oversight and transparent debate are needed to regulate AI
Om Birla frames AI development within India’s Vedic and spiritual heritage, while Martin emphasizes democratic governance and public oversight; both converge on the idea that AI must be anchored in broader ethical and cultural values beyond profit [64-68][15-16][17-21].
POLICY CONTEXT (KNOWLEDGE BASE)
Research linking AI with spirituality and panels noting philosophical divergences provide an authoritative framing for integrating moral and cultural dimensions into AI governance [S59][S58].
Individual adaptability and skill acquisition are as important as institutional policy for AI transition
Speakers: Lord Krish Ravel, Sanjeev Bikhchandani
Adaptability to rapid AI change is essential to manage public uncertainty
AI tools enhance investment analysis, content creation, and operational efficiency
Ravel stresses societal adaptability to AI’s speed, while Sanjiv emphasizes personal upskilling (learning AI tools) as a protective strategy, revealing an unexpected alignment between cultural adaptability and individual skill development [154-158][335-338].
POLICY CONTEXT (KNOWLEDGE BASE)
Experts stress that personal upskilling must accompany institutional capacity development to ensure effective AI transition [S53][S51].
Overall Assessment

The panel shows strong convergence on the need for inclusive, transparent, and parliamentary‑driven AI governance, the importance of capacity building, and the belief that AI can create new jobs rather than cause mass unemployment. There is also a shared view that policy must evolve rapidly to keep pace with AI’s speed.

High consensus on governance principles and capacity development, moderate consensus on job creation impacts, and limited but notable consensus on the moral/spiritual framing of AI. This overall alignment suggests a favorable environment for collaborative policy initiatives that combine democratic oversight, inclusive participation, and skill development to harness AI’s benefits while mitigating risks.

Differences
Different Viewpoints
Impact of AI on employment and labour markets
Speakers: Sanjeev Bikhchandani, Iqbal Dhaliwal
Historical evidence shows technology creates jobs; AI has not yet reduced employment
The rapid pace of AI could outstrip labour‑market adjustment; policy must keep up
Sanjeev argues that AI has not caused any hiring decline and cites historical examples where technology created more jobs than it destroyed, emphasizing that AI can boost productivity without job loss [234-258]. Iqbal warns that AI’s unprecedented speed, low marginal cost and multimodal capabilities may outpace labour-market adaptation, calling for a “dial-down” of AI deployment speed and faster policy responses to avoid disruption [287-310].
POLICY CONTEXT (KNOWLEDGE BASE)
The employment impact of AI remains contested, with reports highlighting both job creation potential and broader economic disruptions, reflecting ongoing debate among policymakers and scholars [S51][S54][S55][S56].
Approach to AI governance and the role of democratic institutions
Speakers: Martin Chungong, Om Birla
Concentration of AI power threatens democratic institutions
Parliamentary oversight and transparent debate are needed to regulate AI
AI should reflect Vedic and spiritual values to serve society
Development of a unified digital parliamentary platform to increase transparency and citizen access
Martin warns that a handful of corporations amass market power that exceeds whole national economies, concentrating AI control and endangering the democratic social contract, and calls for open parliamentary debate and international inclusive governance [7-16][30-38]. Om emphasizes that AI development should be guided by India’s Vedic and spiritual heritage and proposes a national digital parliamentary platform that aggregates all proceedings for public access, focusing on cultural values and national implementation rather than global regulatory frameworks [64-68][100-108].
POLICY CONTEXT (KNOWLEDGE BASE)
Divergent views on binding versus voluntary frameworks and the balance between parliamentary-centered and multi-level governance have been documented in recent democratic AI governance panels [S57][S58][S65][S66].
Whether AI deployment speed should be slowed down or embraced as inevitable
Speakers: Iqbal Dhaliwal, Sanjeev Bikhchandani
The rapid pace of AI could outstrip labour‑market adjustment; policy must keep up
Historical evidence shows technology creates jobs; AI has not yet reduced employment
Iqbal argues that AI’s rapid diffusion, low cost and multimodality risk overwhelming labour markets and urges a deliberate “dial-down” of AI speed together with stronger policy infrastructure [287-310]. Sanjeev counters that AI is relentless and cannot be slowed; instead, individuals should acquire AI skills to remain employable, noting that AI has so far not reduced jobs and can enhance productivity [321-340].
POLICY CONTEXT (KNOWLEDGE BASE)
A tension exists between the desire to decelerate AI for safety and the practical impossibility of doing so, as outlined in coordination guides and analyses of AI’s hard-power dynamics [S48][S50][S70].
Unexpected Differences
Spiritual‑cultural framing of AI versus secular democratic regulation
Speakers: Om Birla, Martin Chungong
AI should reflect Vedic and spiritual values to serve society
Concentration of AI power threatens democratic institutions
It is unexpected that two high-level parliamentary figures diverge sharply: Om Birla promotes AI guided by Vedic/spiritual values and a national digital platform, while Martin Chungong warns of AI power concentration undermining democracy and calls for strict parliamentary oversight, showing a clash between cultural-spiritual framing and secular democratic regulation [64-68][7-16].
Whether AI speed can be deliberately slowed down
Speakers: Iqbal Dhaliwal, Sanjeev Bikhchandani
The rapid pace of AI could outstrip labour‑market adjustment; policy must keep up
Historical evidence shows technology creates jobs; AI has not yet reduced employment
While Iqbal calls for a deliberate “dial-down” of AI deployment to protect labour markets, Sanjeev asserts that AI is unstoppable and the appropriate response is to upskill workers, a stance not anticipated given their shared focus on labour impacts [287-310][321-340].
POLICY CONTEXT (KNOWLEDGE BASE)
The feasibility of deliberately slowing AI development is questioned in policy literature, which points to market incentives and geopolitical pressures that make intentional deceleration challenging [S48][S50][S70].
Overall Assessment

The panel displayed a mix of consensus on AI’s potential benefits and significant disagreement on its societal impact and governance. Key disputes centered on employment effects, the appropriate pace of AI deployment, and the institutional approach to regulation—whether through parliamentary oversight and inclusive global frameworks or through national cultural‑spiritual initiatives. These divergences highlight the challenge of aligning diverse political, cultural, and economic perspectives on AI policy.

Moderate to high disagreement, especially on employment impacts and governance models, suggesting that without coordinated policy and cross‑sector dialogue, efforts to harness AI for development may remain fragmented and risk undermining democratic and labour outcomes.

Partial Agreements
Both agree that AI should serve democratic and societal empowerment goals – Martin stresses the need for parliamentary oversight and transparent debate to safeguard democracy, while Dr. Dao frames AI as a tool for social empowerment and inclusive participation – but they differ on the mechanisms, with Martin focusing on legislative oversight and global governance and Dr. Dao emphasizing democratization through ethical use [15-16][147-149].
Speakers: Martin Chungong, Dr. Fadi Dao
Parliamentary oversight and transparent debate are needed to regulate AI
AI as a tool for social empowerment and democratization of technology
Both seek greater transparency and public participation in AI‑related decision‑making. Martin calls for open parliamentary debate and international inclusive governance, whereas Om proposes a national digital platform that makes all parliamentary proceedings searchable and accessible, aligning with the same democratic objective but via different institutional routes [15-16][100-108].
Speakers: Martin Chungong, Om Birla
Parliamentary oversight and transparent debate are needed to regulate AI
Development of a unified digital parliamentary platform to increase transparency and citizen access
Both view AI as a net job‑creator. Roopa highlights AI freeing up doctors, teachers and other scarce professionals, estimating tens of millions of new jobs, while Sanjeev points to historical patterns where technology has generated more employment than it destroyed, noting no current AI‑related job losses in his firm [262-280][234-258].
Speakers: Roopa Purushothaman, Sanjeev Bikhchandani
AI can augment scarce professionals in health, education, and finance, creating millions of new jobs
Historical evidence shows technology creates jobs; AI has not yet reduced employment
Takeaways
Key takeaways
AI systems increasingly influence democratic processes and public services; concentration of power in a few corporations threatens democratic governance.
Parliamentary oversight, transparent debate, and accountability are essential for responsible AI governance.
India envisions AI that reflects Vedic/spiritual values and is building a unified digital parliamentary platform to increase transparency and citizen access by 2026.
AI can be a tool for social empowerment, inclusive development, and achievement of Sustainable Development Goals if benefits are deliberately shared.
The rapid pace of AI deployment may outstrip labor‑market adjustment; policy frameworks must keep up to protect workers.
Historical evidence shows technology creates jobs; AI is already generating new roles in health, education, finance, and entrepreneurship, along with productivity gains across sectors.
Capacity‑building within enterprises (best‑practice sharing platforms) and across parliaments accelerates responsible AI diffusion.
International AI governance remains fragmented; inclusive, participatory mechanisms involving the Global South are required to avoid geopolitical fragmentation.
OpenAI is positioning itself as a partner for education, certification, and enterprise adoption, announcing new education partnerships and a jobs‑certification platform.
Resolutions and action items
Parliaments worldwide urged to adopt ‘red lines’ for AI, ensure equal voice for the Global South, and engage actively in AI governance at all levels (Martin Chungong).
India’s Parliament committed to creating a single digital platform for all legislative proceedings, debates, and metadata‑driven search by 2026 (Om Birla).
Globethics pledged to leverage summit outcomes for the 2027 Geneva AI summit (Dr. Fadi Dao).
OpenAI announced forthcoming education partnerships and a new jobs‑certification platform to support AI skill development (Anmol Garg, Kavita Gunjikannan).
Several parliaments (over 60) are forming cross‑party groups, specialized committees, and capacity‑building initiatives on AI (Martin Chungong).
Unresolved issues
Specific definition and enforcement mechanisms for the AI ‘red lines’ proposed by parliamentarians.
How to align the speed of AI innovation with labor‑market policies to prevent displacement, especially in low‑skill sectors.
Effective methods for teaching AI skills to marginalized entrepreneurs and workers; scalability of such training programs.
Mechanisms to ensure that AI benefits are equitably distributed across regions, languages, and cultures, particularly in the Global South.
Resolution of fragmented international governance structures and creation of binding global commitments.
Clarification of the role of spiritual/cultural values in AI design without compromising technical standards or human‑rights safeguards.
Suggested compromises
Balancing innovation with safety, efficiency with equity, and profit with public interest through open, transparent trade‑off debates (Martin Chungong).
Adopting an inclusive, participatory governance model that gives the Global South an equal voice while allowing market‑driven AI development (Martin Chungong).
Recognizing that AI will both disrupt and create jobs; policy should mitigate short‑term displacement while fostering long‑term job creation in new sectors (Roopa Purushothaman, Sanjeev Bikhchandani).
Encouraging a ‘dial‑down’ of AI rollout speed through policy alignment, while still allowing rapid adoption for those who can leverage it responsibly (Iqbal Dhaliwal).
Thought Provoking Comments
We have systems making decisions about who receives public services, who qualifies for a loan, or who is flagged for surveillance… An automated traffic management system inadvertently routed congestion through low‑income neighborhoods because the algorithm had learned that those communities lacked the political influence to object.
Illustrates concrete, real‑world harm caused by algorithmic bias and links technical design choices directly to democratic inequity, moving the debate from abstract risk to a tangible example.
Set the tone for the entire summit, prompting subsequent speakers to frame their contributions in terms of concrete societal impacts rather than purely technical or economic metrics. It led to later discussions about the need for democratic oversight and the role of parliaments.
Speaker: Martin Chungong
Democracy cannot be automated. It must be shaped by every one of us through our democratic institutions, through open debate, through laws made transparently and enforced fairly, and through international cooperation in which every nation can participate.
Provides a clear, normative principle that frames AI governance as a democratic imperative, not just a policy issue, and calls for inclusive participation.
Served as a rallying point for the panel, influencing speakers like Dr. Fadi Dao and Lord Krish Ravel to emphasize inclusivity and the need for collective action, and it anchored the later shift toward practical solutions from a democratic perspective.
Speaker: Martin Chungong
The speed and the pace of AI is unprecedented… I wish I had a dial which could kind of slow things down… we need to make sure that the policy infrastructure keeps up with it.
Highlights the mismatch between rapid AI diffusion and slower policy development, introducing the idea that regulation may need to act as a ‘speed‑control’ mechanism.
Prompted a turning point where the conversation moved from optimism about AI benefits to caution about labor market disruption, influencing Sanjeev Bikhchandani’s later remarks about personal responsibility and Roopa Purushothaman’s discussion of job creation strategies.
Speaker: Iqbal Dhaliwal
AI is now relentless, the genie’s out of the bottle, you can’t dial it down… If you learn AI platforms you’ll be highly employable. It’s in your hands to protect your employment and your employability. Just learn AI.
Shifts the narrative from systemic regulation to individual agency, emphasizing skill acquisition as a pragmatic response to AI disruption.
Changed the tone to a more actionable, personal‑level discussion, leading other panelists to discuss training, capability overhang, and diffusion of best practices within organizations.
Speaker: Sanjeev Bikhchandani
We estimate tens of millions of jobs – about 30 million – could be created in India through AI‑enabled health, education, financial services and logistics, especially by empowering a new class of workers who mediate technology for underserved populations.
Counters the narrative of job loss with a data‑driven projection of massive job creation, linking AI to inclusive economic development and entrepreneurship.
Introduced a hopeful counter‑point that deepened the debate, prompting Iqbal to discuss skill gaps and prompting the panel to explore how to ensure those new jobs are accessible to marginalized groups.
Speaker: Roopa Purushothaman
We provided AI tools (ChatGPT) to micro‑entrepreneurs in Kenya. Top performers leveraged them and saw gains, while lower‑performing entrepreneurs needed more hand‑holding. AI can level up skills but also amplifies existing inequalities if not paired with support.
Provides empirical evidence on AI’s uneven impact, highlighting both its potential for empowerment and the risk of widening gaps, thereby adding nuance to the discussion of AI democratization.
Spurred further conversation about the need for structured training and support mechanisms, influencing Roopa’s remarks on cross‑company knowledge sharing and the OpenAI team’s mention of certification platforms.
Speaker: Iqbal Dhaliwal
There is a big spread between the power users and the median users in most organizations – we call it capability overhang. We need platforms for sharing best practices across business units and even across companies.
Identifies a concrete organizational challenge (capability overhang) and proposes a collaborative solution, moving the discussion from high‑level policy to operational implementation.
Guided the conversation toward practical steps for diffusion of AI expertise, leading to Roopa’s description of internal platforms and cross‑company learning, and setting the stage for OpenAI’s upcoming certification product.
Speaker: Ronnie Chatterji (question) / Roopa Purushothaman (answer)
AI platforms are easy to use, easy to learn… If you learn them you’ll be highly employable. Because not everybody will learn it. If you learn it and are good at it, you’ll be okay.
Reinforces the earlier point about personal agency with a clear call to action, emphasizing the democratizing potential of low barriers to entry.
Echoed and amplified the earlier individual‑skill narrative, reinforcing the panel’s consensus that upskilling is essential, and prompting the audience to consider concrete educational initiatives.
Speaker: Sanjeev Bikhchandani (reiterated)
Overall Assessment

The discussion was anchored by Martin Chungong’s opening framing of AI as a democratic challenge, which established the need for inclusive governance. The conversation then pivoted around two complementary lenses: systemic regulation (highlighted by Iqbal’s call for a ‘dial‑down’ and policy catch‑up) and individual agency (emphasized by Sanjeev and reiterated by multiple speakers). Roopa’s data‑driven optimism about massive job creation and Iqbal’s empirical example from Kenya added nuance, showing both opportunities and risks. The identification of ‘capability overhang’ and the proposal of cross‑organizational learning platforms provided a concrete pathway from high‑level principles to actionable steps. Collectively, these key comments shifted the dialogue from abstract concerns to a balanced view that combines democratic oversight, policy urgency, skill development, and organizational best‑practice sharing, shaping a multifaceted roadmap for responsible AI adoption.

Follow-up Questions
How do you see the way forward from here?
Seeks guidance on the next steps for AI democratization and inclusive governance after hearing the diverse perspectives at the summit.
Speaker: Dr. Chinmay Pandeya (asking Dr. Fadi Dao)
What should India do now for democracy?
Requests concrete recommendations on how India can shape its democratic institutions and policies in response to AI challenges.
Speaker: Dr. Chinmay Pandeya (asking Lord Krish Ravel)
How are you using data, perhaps like signals and other kinds of data sets, to understand how AI is affecting the economy?
Aims to learn about data‑driven methods for tracking AI’s economic impact, especially on the ground.
Speaker: Ronnie Chatterji (to Iqbal Dhaliwal)
What does Tata want with a chief economist? What are the most interesting AI‑related enterprise observations?
Seeks clarification of the chief economist role and insight into AI use cases across Tata’s diverse businesses.
Speaker: Ronnie Chatterji (to Roopa Purushothaman)
How are you thinking about the most important uses of AI? What are you tracking?
Looks for concrete examples of AI applications in a job‑search platform and how impact is being measured.
Speaker: Ronnie Chatterji (to Sanjeev Bikhchandani)
How do you help power users diffuse best practices and learning to others? (Capability overhang)
Addresses the need to spread AI expertise within large organisations and reduce skill gaps.
Speaker: Ronnie Chatterji (to Roopa Purushothaman)
What parallels exist in development literature for teaching AI? Can global institutions teach AI? Can J‑PAL work in that area?
Explores whether existing development‑education frameworks can be adapted to AI skill‑building at scale.
Speaker: Ronnie Chatterji (to Iqbal Dhaliwal)
Investigate the impact of AI‑driven traffic‑management systems on low‑income neighbourhoods and political influence
Calls for research into algorithmic bias that routes congestion to disadvantaged areas, highlighting equity concerns.
Speaker: Martin Chungong
Examine the speed of AI adoption and its effects on labour markets, and develop policy infrastructure to keep pace
Warns that rapid AI diffusion may outstrip labour‑market adjustment, urging study of policy tools to protect workers.
Speaker: Iqbal Dhaliwal
Research potential job creation in healthcare, education, financial services, and logistics through AI‑enabled worker classes in India
Identifies a need to quantify how AI can free specialist time and generate tens of millions of new jobs.
Speaker: Roopa Purushothaman
Study effectiveness of AI tools for micro‑entrepreneurs and BPO workers, including skill‑leveling and productivity gains
References Kenya pilot showing mixed effects; suggests deeper evaluation of AI‑assisted entrepreneurship.
Speaker: Iqbal Dhaliwal
Assess how AI can be integrated into parliamentary processes for metadata search and public access to debates
Describes a national platform for digitising debates; proposes research on usability, transparency, and democratic impact.
Speaker: Om Birla
Develop international AI governance frameworks with binding commitments to avoid fragmented governance
Highlights current fragmentation and geopolitical competition; calls for coordinated, enforceable global standards.
Speaker: Martin Chungong
Explore mechanisms for inclusive participation of the Global South in AI governance and setting red lines
Emphasises the need for equitable voice and concrete limits on AI applications to prevent power concentration.
Speaker: Martin Chungong
Investigate how spiritual and cultural values (e.g., Vedic, Vasudhaiva Kutumbakam) can inform AI ethics and governance
Suggests that India’s moral and spiritual traditions could shape responsible AI frameworks, warranting scholarly study.
Speaker: Om Birla

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Inclusive AI Starts with People Not Just Algorithms


Session at a glance
Summary, keypoints, and speakers overview

Summary

The panel opened by framing the discussion around “scaling human potential” and the rapid growth of the AI Kiran community, which has expanded to about 10,000 members and is expected to multiply further through new partnerships [11]. Kirthiga emphasized a core philosophy of “what would you do if you weren’t afraid,” encouraging risk-taking and starting over in AI even if it means abandoning an existing trajectory [12-20]. She also noted that she had often been the only woman in senior tech rooms, relying on male allies, and that she sees this as a privilege and a responsibility to give back [25-28].


Lakshmi recounted her career from Intel and venture capital to bringing TED to India and explained that AI Kiran’s purpose is to surface untold stories of innovators and make the AI revolution inclusive from the outset [33-50]. She noted that within six months the initiative had moved from a modest start to an “add a zero” scale of impact, reflecting rapid progress [53-54]. Kirthiga contrasted early ChatGPT results that listed only ten Indian women in AI with AI Kiran’s launch of 250 named women and the current community of 10,000 self-organizing members, illustrating the power of intentional community building [58-64].


Radha described her early work establishing HP’s software operations in Bangalore, celebrating India’s first million-dollar software export in 1989, and later pivoting to AI with centers of excellence in cities such as Kolkata, Vizag, Coimbatore and Shillong covering autonomous mobility, healthcare, automotive and generative AI [102-110][132-149]. She stressed that 53 % of iMerit’s workforce are women and that gender parity is both possible and essential for scaling AI impact [175-179].


When asked about the investments needed for AI, Radha framed them as a triangle of technology, infrastructure and human intelligence, arguing that making data, people and compute “AI-ready” together is the key to scaling [259-267][269-277]. Mihir added that India should focus on applied AI that solves sectoral problems rather than chasing model races, and announced a partnership to train one million women and youth in AI and automation over the next five years [284-296].


The discussion then turned to education, with Anurag emphasizing “heart intelligence” and teaching children intellectual property through music, while other speakers advocated resilience, curiosity and interdisciplinary learning as essential skills for the AI-driven future [235-248][312-324][334-343]. Radha reinforced the importance of curiosity and critical questioning, noting that young learners must be encouraged to explore beyond screens to apply AI in fields like precision agriculture and medical screening [326-332][278-283].


Closing remarks highlighted the speed at which AI tools can be learned, citing a hackathon where 1,200 women from diverse backgrounds built solutions in four hours, and expressed optimism that focused effort can amplify positive outcomes while mitigating risks [411-425]. The panel concluded that coordinated community building, inclusive training, and strategic investment across technology, infrastructure and human talent are essential to realize AI’s potential for scaling human capability worldwide [259-267][284-296][411-425].


Keypoints


Major discussion points


Building an inclusive AI community and scaling human potential – Kirthiga highlighted the rapid growth of the AI Kiran network to 10,000 women and the “add a zero” strategy that expanded the initial 250-woman list to a ten-thousand-member community [61-64]. Lakshmi emphasized programs that bring diverse talent together, such as the Fellows Programme that selects 20 young innovators each year and now counts over 250 alumni [220-224]. Radha noted that women already make up 53 % of her organization, underscoring the goal of gender parity in AI [175-176].


Embracing risk and “starting over” to ride the AI wave – Kirthiga’s recurring mantra “what would you do if you weren’t afraid?” framed the panel’s mindset about taking bold risks and resetting one’s career for AI [12-16]. She advised anyone facing a shift to AI to “start over all over again” rather than settle for a safe, incremental path [17-20]. Radha echoed this by describing how AI is reshaping the IT industry and why early-stage experimentation is essential [115-119].


Investing in AI infrastructure, talent, and a “triangle” of resources – Radha described AI development as a triangle of technology, infrastructure, and human intelligence, arguing that all three must grow together [259-264]. She detailed the creation of AI Centers of Excellence across Tier-2 cities (Kolkata, Vizag, Coimbatore, Shillong) to avoid a geographic divide [133-140]. Mihir added that India’s comparative advantage lies in applying AI to existing industries rather than chasing model-size races, and called for investment in applied AI and inclusive training [284-288].


Redefining education and skill-building for the AI era – Multiple speakers stressed the need to equip youth and under-served groups with AI-ready skills. Lakshmi’s youth-focused Fellows, Anurag’s mobile music school that teaches IP and creativity, and the audience’s question about parenting for an automated future all pointed to a curriculum that blends curiosity, resilience, and interdisciplinary learning [220-226][308-311][316-324]. Radha and Mihir cited concrete up-skilling successes (e.g., 700 women in Africa, 500 women landing jobs after six-week courses) as proof that rapid, inclusive training is possible [433-437][456-459].


Ethical safeguards, AI-divide mitigation, and trust – The panel repeatedly warned that AI must be built “from the ground up” to be inclusive, citing the risk of a future AI divide and the need for early-stage governance [135-136]. Audience members raised concerns about guardrails, provenance, and protecting children from AI-amplified harms, prompting calls for responsible development and transparent trust-building mechanisms [382-401].


Overall purpose / goal


The discussion was designed to showcase how the AI Kiran ecosystem is scaling human potential by creating an inclusive, women- and youth-centric AI community, accelerating infrastructure and talent development in India, and shaping a responsible, equitable AI future. Speakers shared personal journeys, concrete programmatic achievements, and strategic recommendations to inspire participants to “start over” with AI, invest wisely, and embed ethical safeguards.


Tone and its evolution


Opening (0:00-3:00): Highly energetic and inspirational, with personal anecdotes and rallying calls (“what would you do if you weren’t afraid?”) [12-16].


Middle (3:00-20:00): Shifted to a practical and informative tone as speakers detailed community growth, infrastructure rollout, and concrete metrics (e.g., 10,000 women, AI centers) [61-64][133-140].


Later (20:00-45:00): Became reflective and collaborative, addressing audience concerns about education, ethics, and societal impact, while maintaining optimism about rapid up-skilling [308-324][382-401].


Closing (45:00-57:00): Returned to a hopeful, call-to-action tone, emphasizing that “the tools are here, the community is ready, and together we can shape the future” [411-426][428-437].


Overall, the conversation moved from motivational storytelling to concrete strategy and finally to collective responsibility, ending on an upbeat, inclusive note.


Speakers

Mihir Shukla – CEO and Chairman of Automation Anywhere; expert in automation, digital workers, and AI-driven process automation [S1][S2]


Anurag Hoon – Inc. Fellow; founder of Manzil Mystics, music-education initiative for children; focuses on heart intelligence (HI) and intellectual-property education [S3][S4]


Speaker 1 – Unnamed moderator/panelist (appears to lead discussions on AI transformation, infrastructure and human-AI coexistence); expertise in AI strategy and technology adoption [S5][S6][S7]


Kirthiga Reddy – Panelist and AI Kiran community leader; former senior executive at SoftBank (partner) and advocate for women in technology [S8]


Radha Basu – Founder & CEO of iMerit (formerly HP India leader); expertise in AI, computer vision, data labeling, and building AI centers across India [S9]


Lakshmi Pratury – Entrepreneur; former Intel executive and venture capitalist; brought TED to India; focuses on scaling human potential and storytelling [S11][S12]


Audience – Various participants (e.g., data scientist Anupama, professor Hemendra, AI-company founder, etc.) asking questions on AI education, ethics, and impact


Additional speakers:


Ashna – Mentioned by Kirthiga Reddy as a panelist representing AMD; expertise in hardware and AI acceleration (implied)


Prerna – Referenced at the close as a leader in AI Kiran’s women-youth initiatives; role not detailed


Neha Vaibhav – Same as above; role not detailed


Komal – Cited for work on Dark.ai helping tailors and fashion designers use AI; AI application specialist


Meha – Briefly referenced in discussion about AI Kiran’s journey; role not detailed


Bina – AI Kiran member, ServiceNow employee, advocate for ethical AI; focus on trust and provenance in AI


Anupama – Audience member, data scientist and technical lead in banking/financial AI solutions


Hemendra – Audience member, professor teaching AI and sustainability at IIM Udaipur


Anjali – Audience member, Tech Mahindra representative, works on connecting left-brain and right-brain creativity in technology


Other unnamed audience members – Various professionals raising questions on AI education, equity, and impact


Full session report – Comprehensive analysis and detailed insights

Opening & framing – The panel opened with Lakshmi Pratury framing the discussion around “scaling human potential” within the AI Kiran ecosystem and asking how personal potential had been scaled to reach AI Kiran [1-3]. Kirthiga Reddy followed by noting that the AI Kiran community, which began with a few hundred members, has already grown to about 10,000 women [11], and by offering “what would you do if you weren’t afraid?” as a mantra for bold, risk-taking career moves [12-16].


Community growth & rapid progress – Kirthiga highlighted that in just six months the initiative “added a zero” to its scale of impact, underscoring the speed of the community’s expansion [53-54]. She later reiterated the roughly 10,000-member figure [61-64] and recounted an early ChatGPT bias anecdote (a query for 100 Indian women in AI returned only ten names), contrasting it with AI Kiran’s launch of 250 named women and its growth to 10,000 self-organising members [58-60][61-64].


Individual journeys


– Lakshmi Pratury recounted her own career reinventions, from Intel and venture capital to bringing TED to India, and described AI Kiran as a platform that makes the AI revolution inclusive from the outset, drawing parallels with past industrial revolutions that left social and environmental scars [37-44][45-50].


– Kirthiga Reddy spoke about often being the only woman in senior tech rooms, relying on male allies, and viewing her position as a privilege and responsibility to give back [25-28]. She also mentioned her startup Optimize Geo, a generative-engine optimisation venture she is showcasing at the summit [300-306].


– Radha Basu provided a historical view of India’s tech evolution, recalling HP’s first million-dollar software export in 1989 and her own pivot to AI [83-110][115-119].


– Mihir Shukla described Automation Anywhere’s platform, which already powers nearly half a billion digital workers across 90 countries and is projected to reach a billion [188-194], and announced a partnership with AI Kiran to train one million women and youth on AI and automation over the next five years [295-296].


– Anurag Hoon shared his mobile music school that teaches children intellectual property, creativity, “heart intelligence” (five senses and nine emotions), and AI concepts [235-246][310-314].


AI Kiran initiatives


– The Fellows Programme selects 20 emerging innovators each year (now over 250 alumni) to foster both creation and consumption of AI knowledge [220-224].


– Community-driven ventures were highlighted, such as Dark.ai, Komal’s fashion-AI startup that helps tailors and fashion designers use AI, with panelists noting the breadth of members’ work and responding with an enthusiastic “Awesome” [61-64].


AI Centres of Excellence – Radha outlined five regional AI centres: Kolkata, Vizag, Coimbatore, Shillong, and Hubli [150-152]. Each centre specialises in one of four AI domains: autonomous mobility & robotics, healthcare-medical AI, automotive AI, and generative AI [133-149]. She noted that 53% of iMerit’s workforce are women, reinforcing her belief that a 50-50 gender split is both possible and essential for scaling AI impact [175-179][180-181].


Investment priorities & ecosystem balance – Radha framed AI development as a triangle of technology, infrastructure, and human intelligence, insisting that all three must grow together to scale AI [259-264]. She warned that we must bridge the AI divide now rather than find ourselves bridging it years later, as happened with the digital divide [135-136]. Mihir offered a complementary view, urging India to focus on applied AI that solves sector-specific problems instead of chasing the global model-size race [284-288]. The discussion revealed a nuanced split: Radha favoured a balanced ecosystem (models, compute, talent) while Mihir stressed applied solutions.


Audience Q&A highlights


Parenting for an AI-automated world: panelists stressed teaching resilience, the ability to fail, curiosity, and critical thinking [307-311][316-324].


K-12 disruption: calls for integrating EQ and arts-based emotional development alongside technical skills [307-311].


Guard-rails for children: the need for safeguards against AI-amplified harms [382-384].


Building trust and provenance: community-driven verification and transparent data practices were recommended [395-401].


Grassroots digital-gap: highlighted the challenge of up-skilling digitally-excluded populations and the importance of localized training [416-421].


Closing remarks – The moderator cited a hackathon where 1,200 women from diverse backgrounds built AI solutions in four hours, illustrating how quickly AI tools can be learned and applied [416-421]. Speakers expressed optimism that focused effort can amplify positive outcomes while mitigating risks, and called on participants to collaborate with smarter, younger, and more diverse partners to find answers [425-432][460-468].


Recurring themes – Throughout the session three recurring themes emerged: (1) inclusive community building that drives gender parity and expands participation; (2) large-scale capacity development through training programmes, regional AI centres, and strategic partnerships; and (3) application-driven AI that tackles societal challenges while embedding ethical safeguards. The panel concluded with a clear call to leverage existing tools, networks, and optimism to shape an AI-enabled future that is inclusive, responsible, and human-centred [411-425][433-440][449-456][461-468].


Session transcript – Complete transcript of the session
Lakshmi Pratury

to being the first woman partner at SoftBank, you know, investing over a billion dollars, to now an entrepreneurial journey at Optimize.io. Today we are talking about scaling human potential, etc. So how have you scaled your own potential through all this that landed you in AI Kiran? Why did you land in AI Kiran now?

Kirthiga Reddy

All right, yeah, we’ll all get seated since we have all our fellow panelists seated here. So I’m just so excited to be here. And a first call-out to all of the AI Kiran community members here because that is really the story. Amazing. And you come from where? Where are you coming from? Mumbai? Gurgaon? Jamshedpur? Himachal Pradesh. All right. Just a microcosm of the community that’s now officially 10,000 but growing very quickly, multiplying by two- or three-fold, and, you know, stay tuned for the announcement with partnerships like the one that we have here. So, yeah, my own journey has been one of, I guess if I had to think about a theme, it’s my favorite Meta poster, which is in offices all across the globe, which says, what would you do if you weren’t afraid?

And it’s a phrase that I want us to think about. You know, what would you do if you weren’t afraid? And think about what comes to your mind. Right. So it’s about, you know, taking risks and not being afraid to start over again. I had someone come up to me yesterday and say, hey, if their business is on a certain trajectory, but now if they have to move over to AI, if it meant starting over all over again, what would I recommend to them? And I said, start over all over again. Right. Because there’s a certain trajectory that if you are in, it is all about projecting where you will be five years from now, 10 years from now.

And if the new trajectory gets you further ahead, you know, by the way, and even if you fail at that, it’s better to shoot for the stars and miss versus doing a path that feels achievable but doesn’t have the stretch in it. And of course, with that comes a lot of assumptions about both, you know, the financial ability, the support that you have from your family to do it. But if you have all of that, certainly go out and stretch to that. So that, I would say, is what has been my journey and the inspiration for AI Kiran as well, in that in all of the different roles that you mentioned, I was often the only woman in the room, certainly the single digit percentage at the max.

And I think all of the women in this room relate. And it has been about incredible male allies who also helped with us getting to the roles that we are in. And so that becomes a position of privilege and responsibility to give it forward. And then that’s when I met Lakshmi, the fearless Lakshmi, who has been a pioneer both in her own reinvention. She was the OG technologist, a connector, a real builder, and someone who’s focused on scaling human potential, just like everyone else here. So Lakshmi, tell us your story, and I can’t wait to hear the stories

Lakshmi Pratury

of everyone else here on this panel. Yeah, so, you know, for me, when I sit here today in 2026, I remember that in 1993, ’94, we were talking about how the internet is going to be big, and people would say, okay, what is this? You know, how will anybody make any money in this, et cetera. So what we are looking at now is nothing new. This is just a reinvention of things we’ve been seeing for the last 50 years, you know. So I’ve been at Intel, I’ve been a venture capitalist, I’ve been, you know, in philanthropy, all kinds of things. But what brought us together to AI Kiran is that for the last 15 years, my work has been about finding amazing people doing amazing work and getting them to tell their stories, because we only hear about the 10 famous people, but innovation is happening everywhere.

So how do you find them, connect them, and get them to tell their stories and teach people how to tell their stories? So I brought TED to India. I mean, I worked at Intel, venture capital, all kinds of stuff. And then I decided for the last 15 years, my journey is going to be how to create a platform to showcase the amazing talent that’s there in India and across the globe that doesn’t get told. So that’s what I’ve been doing. So when I met Kirthiga, and I look at the AI revolution, there is an amazing opportunity for us to do it right from the beginning. In every revolution, the industrial revolution, we messed up the environment, the rivers and everything.

200 years later, we are like, okay, let’s clean it up. Even in the Internet revolution, you know, we have the problems with social media, you know, mental illnesses, all kinds of things, good and bad. But technology is growing. It’s great, actually. You can’t fight it. So how can we be part of this, to make it inclusive from the word get-go? That’s what excites us in this journey. And as she was saying, we have no idea how to do this. You kind of say, we are going to do this. And it’s amazing, like, in six months, the kind of progress we had. As she said, you know, one of the things

Kirthiga Reddy

you must say, Kirthiga, about the ChatGPT thing. Yeah, you know, I shared this when we started AI Kiran. And by the way, this is the most you’re going to hear from both of us because the rest of the session, we are going to hear from our incredible panelists. And when we started, if you went to ChatGPT and said, can you tell me about 100 women in AI in India, it would tell you 10 women, right? And it will tell you that I cannot answer this question, and these are some sources that you can look at. And so over that period of time, so we launched with 250 named women. So right there, we added a zero. Now it’s an incredible community of, you know, 10,000 women who are all taking this on their own, rallying it, self-organizing.

And so we’re going to talk a little bit about that, creating new ventures. We just heard about Dark.ai, which Komal is doing, helping tailors and fashion designers use AI. And you have to say: Awesome, right? So if we can create a platform to even add a little bit of that oomph and make it bigger, faster, bolder, I mean, we have done our jobs. And so, yeah, so we have already added two zeros to the first number that ChatGPT had. It’s just about, you know, as they say in a startup, add a zero, add a zero, and we’ll be at that million. So with that, let’s jump in.

Lakshmi Pratury

So talking about scaling human potential, I have to start the conversation with Radha. I’ve known Radha… I mean, I first knew Radha from reading about her in Silicon Valley. She was one of the first people who brought HP to India in 1987, before anybody thought of the technology corridor. So we used to read about her. And then she started iSupport, which is a software company, one of the first unicorns in the Bay Area. I was still reading about her. And then through a common friend, Chitra, I met her, and she’s been a great friend for the last 25 years. And talking about somebody who reinvents herself all the time to benefit the community she’s been in, whether it is HP or whether it’s iSupport.

And now she does something called iMerit. And Radha, before, instead of me saying what you do, the kind of work that you do in scaling humanity, the humans in the loop, has been amazing. So tell us about how many people you have, what you’re doing, and what does scaling really mean for

Radha Basu

you? Actually, indeed. I said, indeed. You know, we’ve had quite a journey together. And sorry, by the way, I have to say, when you say Hewlett and Packard, she actually worked with them. Yes, so Lakshmi really dates me as well, but that’s okay; I worked with Andy Grove, so let’s really date ourselves. I grew up in HP. I’m originally from Madras, and that’s where I did my engineering. And you talk about being a woman in the room: we were 17 girls and 2,800 boys in engineering. And I really had the opportunity, and I went to get my masters in the US and just kind of fell into working at HP Labs, which was one of the most prestigious places then. And the beauty of HP was David Packard particularly, who really, literally, did the management by wandering around.

You would run into him everywhere, open offices, and he ended up being a mentor. I was so fortunate. The two of them created Silicon Valley, the Silicon Valley we talk about today. And then, by sheer chance, I ended up in Europe and ran the medical products group for HP in Europe. And then at that point, I was like, okay, I’m going back to the U.S. What am I going to be doing? And there was this whole thing about, what is happening in that country of yours? It’s so behind in electronics (the country of mine, India). And I was so kind of enraged by that comment. I said, that’s rubbish. We have the best mathematicians, all of which is true.

So David Packard said, okay, I’ll give you three months. Go and figure out what you can do in India. And I tell you, it was the greatest opportunity, because I ended up in this beautiful, sleepy garden town called Bangalore. Growing up in Chennai, of course, I’d gone to Bangalore for my holidays. And the talent, you know, when you’re working with computers... at that time, I was actually working on multi-threaded Unix. And the kind of talent and what you could develop... anyway, it wasn’t three months. That extended to about five and a half years. And I set up Hewlett Packard in Bangalore. And the first two multinationals of anybody doing software in India were Texas Instruments and HP.

The other most amazing thing at that time, I think even more amazing, was that those were the years that Infosys started, HP started, Wipro started, TCS was growing. And I was just a little bit more experienced. And so we celebrated together the first million dollars of software export from India jointly. And I remember doing that million in 1989. I bring this up because within a lifetime you can see an industry, technologies, completely transform a large country with a growing middle class. And it’s just the creation. I mean, there is no question that India is the global leader in IT. There is no question about that. So now you fast forward and you come to AI.

AI in turn is changing IT. It’s changing IT in ways that we never believed were even possible. And so we started, and we’re still doing it. It will be 10 years in AI in April. So thinking, and that’s what’s fun about being in the IT industry early on, then you say, well, what’s happening to this industry? Multi, multi, multi billion dollars. And we started in AI. So we’ve kind of had a ringside seat. Also, when you start something early enough, you go through its issues. We have done a lot of work in computer vision, not just the language model side, but the computer vision side.

And I’ll talk a little bit about it. But at this point we have, and many of them are AI Kiran folks, a little over 10,000 people working in AI, 3,500 or so in India, and most of our talent in India working in AI is in-house. And we also decided to set up AI centers, not in the metros, because remember, what transformed India was the work in Bangalore, Noida, Gurgaon, Chennai, et cetera. How do you then take AI, the new technologies, and use it, but not have a divide? The last thing I want is, you know, five or ten years from now, we’re all discussing how do we bridge the AI divide, as we’ve been discussing how do you bridge the digital divide.

If we could bridge it now, which is what I love about AI Kiran, then you are in charge and you grow and you scale. So we started setting up centers in Calcutta, Vizag, Coimbatore, Hubli, Shillong. And so those are our centers. And each center now has become a center of excellence in a particular area. We work in four areas, actually; it was wonderful to hear this yesterday. We work in autonomous mobility and robotics, which is our largest business, and there we have people in Kolkata and Meche, Bruce. And then Vizag is the center of excellence for healthcare medical AI. Coimbatore, and not because I’m Tamil (this is the first thing I’ve done in Tamil Nadu), but Coimbatore is our first automotive AI center of excellence in Asia.

And the way that center has grown, you know. And then our generative AI work is primarily in Kolkata and Shillong; it’s in multiple places. And we work with the large foundation model companies. So then what do you do? You can focus and focus and focus on the large models. How do you take that into applications for precision agriculture, breast cancer screening, healthcare AI, into the different areas that are so critical for societal applications? So we work with the foundation model companies to create what are called small models, small vision models, small language models, and then fine-tuning those models and working in reinforcement learning with human feedback, red teaming, that means challenging the models, what we call tormenting the models, because how do you find out whether this model works or not?

You torment the darn thing. You torment it till you break it. That’s an actual technical term, let me tell you. You torment it till you break it. Okay? And then you do the data set creation to make it right. So then we will bring in experts. We have what we call scholars globally, and you can’t just do this in English: PhDs in mathematics, cardiologists, radiologists. They’re called scholars. Interventional, something, something; I found out more about this. Agronomists: agronomists in Germany, agronomists in the U.S. And this is where the beauty of AI and this potential comes in. It’s not potential for me anymore. It’s real. It’s a 10-year-old company. We run a fairly large business.

It’s cash positive, earnings positive. It’s got all of that stuff. But it’s business that drives the inclusion. And bringing in these experts means you not only are inventing technology in Silicon Valley, Bangalore, et cetera; you have a global set of people and experts who are contributing to AI. So for me, let me end with one thing, which I know is most important. It’s 53% women. 53% women. And I’ll say one thing. I believe in this, really. If anybody asks you, how do you run a company with 50-50 women, look them straight in the eye and say, have you seen the world lately? It’s about

Kirthiga Reddy

50-50. And there should be no reason why AI technology cannot be 50-50. Thank you. Beautiful. Well, that sets the stage beautifully for maybe the next question to Mihir here, where you’re a new author, and this is a topic that you spend a lot of

Mihir Shukla

I applaud the vision, because as Radha rightly said, the idea of getting it right at the beginning is the right idea, and I’m a big fan. The book that you referred to is called The Five-Year Century. It isn’t available yet; it will be available soon for pre-order. Because we are going to see, within the next five years, the change that normally happens in a hundred years. Now, fortunately, we have seen this kind of change at least once before in the recent past, around 1900, when electricity, radio, the automobile, and planes came in about the same 10-15 year time frame. Imagine you went to sleep and woke up 10, 15 years later; it would look like a different planet in every way possible.

It worked out for the most part. So we are about to see all of that now in five years. And in my role as CEO and chairman of Automation Anywhere, we see the world through a very unique lens. We have nearly half a billion digital workers powered by AI today running on our platform. It will reach a billion soon. The human worker to digital worker ratio is 1 to 20: there are 20 digital workers for every human worker. And it is happening across 90 countries; we have customers in 90 countries across every industry. So when we saw all of this, we decided to write a book and say, it is time to tell the world what is happening, what is coming, and what the leader playbook looks like.

Kirthiga Reddy

Incredible. And maybe, Ashna, we’ll go to you. You represent another iconic company, AMD, that has been at the heart of this revolution. As we think about scaling human potential and as we think about the opportunity globally and in India, what do you feel are the limiting factors or the enablers? Is it talent? Is it capital? Is it compute? How do they interact?

Speaker 1

You know, just to build off what both Radha and Meha said, there is no debate about the transformational nature of what we’re all experiencing. But change inherently is always personal. And what happens when you see either people who are super positive or super negative about something, it’s because they’re internalizing what they think it means to them or what it means around them. And so when you talk about… infrastructure or assets or how the world is shifting, these are all going to happen. I mean, we have, as a generation, relied exclusively and maybe I would say not exclusively, but at least extensively on human intelligence. Human intelligence has done the most brilliant things, but human intelligence has also made way for artificial intelligence.

And it’s about a model of coexistence as this develops that has to be evolved. And so from an infrastructure perspective, when we think about it, our goal is to make sure that the innovation that we as humans have ambition for is fully supported in what we build and what we deliver for the world. That’s simply put. How that ambition gets realized ultimately is in our hands, right? And so I think that’s where we have the ability to shape it early. We have the ability to drive success. and we have the ability to learn from history. Now, I will say that having been a history buff growing up, you always want people to learn from history, but people never do.

And so you should just expect that this is going to be the most interesting time that we are living and experiencing. And I would encourage and challenge everyone to make the most of that challenge of what it means to them personally and how you drive it. I mean, that’s my personal view. We will continue as a company to go build the best technology out there to support and drive and be the best partner to the businesses we work with. But ultimately, it comes down to the ambition each of us sets, each of the corporations set, each of the organizations set on the kind of change you want to drive.

Lakshmi Pratury

No, I think it’s beautifully put. I think it’s very different people coming together, people in hardware, software, services, training, all kinds of things coming together. And that’s why for us, at AI Kiran bringing very diverse forces is important and one of the biggest things for us is youth. When we look at AI Kiran, we say what can we do to get women into the fold and what do we do with youth and how do we make sure it’s safe for youth and it furthers the knowledge. It’s not just consumption but creation. So I think we have a program called Fellows Programs. Every year we pick 20 amazing people from different disciplines and put them together and I always say that that’s a great way to adopt children without carrying them.

So we have over 250 of them, and Anurag is one of our Inc. fellows, and he runs something called Manzil Mystics. And the reason we wanted to bring this perspective is that working with youth and creativity is extremely important. So Anurag, you’ve been in the music field for a long time, and right now you’re working with over 60,000 children across 900 schools, teaching music. They have taken on one thing. They said, we’re going to teach music. And you actually have a van that you take to different schools to teach them. And tell me a little bit about, you teach things like intellectual property and human rights, all kinds of these things through music.

So tell us about that journey a little bit.

Anurag Hoon

Yeah, and thanks, Lakshmi. Lakshmi, I remember eight years ago, like, seeing this dream with you. And she used to see my eyes glittering when I used to talk about this mobile music school. And it’s a reality now. And what I would say, for me, it’s less about AI; it’s more about HI, heart intelligence, because we’re talking about humans. And what makes us human is a heart pumping and making us alive. And the heart is learning all the time. And so, my story: I grew up in a low-income family in Delhi, studied in a government school, got 52%, hence no college. I started learning music, and within a year I started my band. I was in the U.S., in Seattle, learning marketing and sales. And how did it happen? It happened because music helped us learn that. But because I create my original songs based on ideas of Kabir ji and Gandhi ji, the first thing was: what if someone steals my work? This thing was there, like, what if someone steals my idea? And it was very easy for anyone to just take your idea, or maybe take your song and sing it in a movie or on a stage. So for us, one of the first things that we teach, beyond how to sing, write, compose, and perform a song, is that they don’t lose the property they create.

Also, that they don’t steal anyone’s property. And I was doing that, translating some English song, putting in some Hindi lyrics and making myself cool. But I was like, no. AI is there to help us learn things. And that’s what we made sure: every time we go into a classroom and a child learns to write a song or compose a song, they must know that intellectual property is a thing. And it is a big career opportunity right now. All the streaming platforms have made sure that people like me who create a song get the royalty. But if I create a song through AI, they don’t get money. So we always say: if it’s for fun, then sure, create a song through AI,

but on all streaming platforms you cannot earn money. So we always say, if you want

Kirthiga Reddy

to earn money, create a song on your own. Absolutely. Well, I mean, one, it’s so amazing to see the diversity that’s represented on this panel. And, you know, start thinking of your questions; we’re going to do one more prepared question, but then we want to hear what’s on your mind as well. And so let’s bring us back maybe to this historic India AI Summit. I know there are many announcements being made by the panelists, and by our attendees here as well. On my end, certainly, you’ll see a number of AI announcements; it has already started, and we’ll turn to that in a few minutes as well. So there’s AI announcements; there’s also my startup Optimize Geo, where, having helped brands with the move to mobile and social and being relevant there, Optimize Geo (and I have to say, in India, it’s generative engine optimization; it is not JIO, it’s Geo) is helping brands with being relevant, because business decisions and consumer decisions are being made off of questions being asked of ChatGPT, Perplexity, and the like. So that’s the platform, and we have a bunch of announcements there. But Ashna, maybe coming to you: we are sitting here at this historic India AI Summit, and if access to advanced chips is going to determine who can build powerful models, are we at

Speaker 1

service intelligence and complement it with artificial intelligence. And that complementing of artificial intelligence is about how you then have a thoughtful strategy, as a country, as a company, as a startup, to build the compute layer and the compute investment structure that gives you the outcomes you need, the ones that complement the human ambition and the human scale you want to achieve. So I think it’s both. I don’t believe it will be a limiting factor for those that want to move fast; you just have to be creative with the resources you have. And believe me, we’re building as fast and as quickly as we can to meet all the demand that we have. So Radha, continuing on that theme of being faster here than anywhere else: you’ve been doing this for 10 years, before AI was a fashionable word.

What kind of investments do you think are needed to move people up the value chain of AI in India? And maybe we’ll have to comment on the same because I think it’s a really important question.

Radha Basu

Right. So if you look at the three parts of investment in AI, think of it as a triangle, right? We’ve heard a lot of this. There’s the AI itself, call it the technologies, the models: OpenAI, Anthropic, Google DeepMind, et cetera. Then there’s all the infrastructure, and you’ve heard the announcements, multi-billion dollars of infrastructure. And then there is AI intelligence, and that is the human intelligence. It’s the nexus of the technology intelligence, the infrastructure to run it, and the human intelligence that really scales AI. So yes, should we be worried about AI taking away jobs? I think we should. But to me, it’s not taking away jobs.

It’s how the jobs are evolving. So you ask me, what are the investments needed? This is where, really, I feel very hopeful with young people. The average age of our company is 24.5. You would never know that looking at me, and the sassiest people in my company would say that if it were not for me, it would be 23. And these are sassy young people from all over India, not necessarily from the cities, not the IITians. So what is changing in AI? You can take young people from a variety of different backgrounds. We had some young people come into the iMerit booth and say, we’re in commerce, or we’re in something else.

How do we become AI people? That transformation is like an equation: how do you take a large number of young people and, I don’t want to use the word skilling, but make them AI-ready? You want to make the data AI-ready, you want to make young people AI-ready, and you want to make the infra AI-ready. When you do all those three things, it scales, however daunting the scale. The second thing I would say, and this came out quite a bit in the discussions yesterday, whether it was Dario speaking from Anthropic or Sundar from Google, is: what are the applications of AI? That is where we are today. We’ve got the large models, and of course they are scaling. How do you apply them to the big picture, and how do you apply them to precision agriculture? Because if you do, you can actually catch crop failure. We work with people like John Deere, and you catch the crop failure in an area like this.

You’ve saved the entire field, and they have seen an immense increase in production because of that. Or look at breast cancer screening for women: you can screen people all over, and breast cancer in India for Indian women versus Asian versus Caucasian versus Black women is different; the smaller models are very different because the parameters are different. If you can use that, then you’re starting to get AI into societal applications, and then into enterprise AI, because that’s where the big business is. Any technology scales and gets adopted when enterprises start to use it: accounting, legal, et cetera. So that is the investment that’s needed. Handing it over to you, Mihir.

Mihir Shukla

I think I’ll cover it in two dimensions. The first is a focus on applied AI. It is good, especially for India, to develop models, but not to blindly chase the model race that’s happening in the Western world. Look at history: the printing press was invented in Germany, but the Dutch used it and became a superpower for a few hundred years. France had all the ingredients to lead the Industrial Revolution, but the small island of England applied the Industrial Revolution to every aspect of its economy and became Great Britain. The point is that you don’t have to invent a technology; the success lies in applying that technology to every aspect of the economy, and that is India’s superpower, if it focuses on it.

You have 18 or so different industrial hubs, and in each of them you can very specifically pursue applied AI; I heard automotive AI mentioned, for instance. Can you create global competitiveness with those models? That’s where the primary investment has to go, and it can completely change the economic outlook. The second place investment needs to go is inclusion. I think this is a remarkable technology that can include a vast number of people who were previously not included in the digital economy. Think about the first computers with an English keyboard: you can’t include 90% of the world’s population with that interface. Then came the mobile phone, which made it a little easier. Now you have a technology where anybody can talk and easily participate. So this is the time to include everybody.

And one of the things AI Kiran and we are doing together: we announced a partnership under which, over the next five years, we will train a million women and youth on AI and automation. And I think both sides have to happen: we need to create economic growth, and we have to make sure we

Kirthiga Reddy

And tell us a little bit about the wonderful work you do as you also ask your question.

Audience

Hi, my name is Anupama. I am one of the AI Kiran members. Professionally, I’m a data scientist, now moved into a technical lead role where I’m helping a lot of banking and financial institutions come up with AI and automation solutions. So I work a lot on building POCs and use cases and, at the end of them, strategies for enterprise solutions. My question is basically to you, Anurag, and it is a little bit of a personal question. We’re talking about AI, we’re talking about scaling, we’re talking about a new world. My question is: what is it that we as parents should be teaching our kids for the next 15 years, when the world is fully automated?

They are already in an AI age, because they’re growing up in one. They see AI; they hear it, use it, and in fact they’re doing everything with it. So what are the skills that we as parents should be teaching our kids to be ready for the next 15 years in the AI-automated age? What should we be thinking through at this point? Of course, specific skills will come at a later stage, but what are the blind spots that we don’t see, as parents and as professionals who are busy building solutions and automating things? There’s a human factor out there, right? After 15 years, what is it that is going to make them stay

Anurag Hoon

That’s a brilliant question. Our OG mama is here, so she can answer better. When I became an INK Fellow, I also became a father; my son is eight years old, and I have been an INK Fellow for eight years. For two years before that, I learned to be a father, and then I became one. So I feel the onus is more on us. This is my very personal view: the five senses, and then the nine emotions, the Navarasas; India has a lot of literature on this. As a parent, I just made sure that my son understands the five senses and the nine emotions, and then I plan everything according to that. It’s not that I don’t want to give a phone to my son, or that I don’t want him to do this or that; my son does all those things. One of the things he started doing was celebrating his birthday out on the street, or going somewhere, and I make him sense all these things. So I guess: five senses, nine emotions. If we know them, we can

Speaker 1

So basically you’re saying that we need to be consistent with the five senses and the nine emotions, and ensure this is there in parallel, even as we automate and bring in AI. And you should send us that list of five senses and nine emotions, and we’ll share it with the community as well. I feel fairly strongly about this: I think we need to teach all our kids resilience. They need to learn to fail. They need to know it’s okay to fail. They need to know life is not easy. They need to know life is unfair. And they need to learn to survive, to thrive, and to be happy in it.

If they have resilience, they will survive all these changes. And I think that’s where you see kids struggle: when kids don’t know how to be resilient, that’s when they struggle. When you teach them early, and you’re there as a support mechanism as they go through those experiences, they will learn. And especially with the change we’re about to experience: even we don’t know. We can sit here as a panel and speculate about what the world is going to look like in 15 years, but we don’t know.

Radha Basu

That’s really great. Can I add one thing to this? She asked as a parent; I’m going to talk as a grandparent. Age comes with this, right? And it wasn’t so much them asking me; it was me asking them. This is my nephew, and I said, “Nikki, what is it you think you’ve learned at school?” He’s a junior, in 11th grade, with just one more year to go. I said, “What do you think we should be teaching kids at school?” And he says, “Party.” I asked again, and he says, “Look, I don’t think there is anything you can really teach us at this point, because AI is beyond all the parents,” which I actually agree with. We started talking about it, and the thing that came out is being a curious learner. He said, and I’m repeating to you not what I said to him but what he said to me, “Whatever I learned last year, it’s not going to get me a job. I wanted to go into computer science at Stanford, it was my biggest dream. I have to do different things. I have to know what’s going on.” Resilience, I totally agree; I loved your answer, because it came completely from the heart, from the brain, and from nature. But this thing of knowing what’s around you, learning about it, being curious about it: that is really important in AI.

You keep pinging it to make it better, right? And this is part of the intelligence. So: thinking critically, being curious learners. None of this is going to happen by keeping phones away from them. They’re going to learn to be curious because they want to be, and go out in nature and learn it, or wherever. Go out into the field: if you want to work in precision ag and you’re a kid from the city who knows nothing about it, or in medicine, whatever the thing is. So to me, those are the answers.

Mihir Shukla

I was going to quickly say there are three elements, in my opinion. The first: in my generation, we only studied one subject, like computer engineering. I think today people are going to do multiple things and combine them, and when you combine, the possibilities are limitless. Someone who studies rock climbing and video game design, when that person creates a rock-climbing video game, it will be the most authentic experience you could ever get. Things like that, neuroscience and medicine: there are just unimaginable possibilities. So that is the career side, how you progress. The second thing: I asked my daughter, just as you asked them, and what she said stayed with me. She said that in the future there are no worker bees.

There are only queen bees. I loved it. I loved the spirit of it. You know, the empowerment it embodies, the ambition it embodies and I think if everybody had that mindset, amazing things are possible.

Kirthiga Reddy

You know, I think what we’re going to do is combine the questions. We’re going to hear a bunch of questions, and then the panelists can pick whichever ones they want to answer. Okay, question.

Mihir Shukla

Sorry, I remembered the third one. The third thing is the power of the question. This amazing AI that we have knows many answers, but it doesn’t know what the right questions are, and it is not likely to know anytime soon. So having the right questions matters. I’ll share an example from our family dining table. I do this experiment to learn from the younger generation, because they instinctively know what the future is; they know better than us. We made a rule once and said, we’re going to talk about a subject, and you can’t tell me anything that Alexa, Siri, or ChatGPT would tell me. And they said, okay, bring it on, Dad.

I said, okay. So we talked about some bird with the longest wingspan or something, and they said, Dad, that’s not fair; the question itself is biased towards ChatGPT. Ask me something that is, and they said it more nicely than this, ask me something that is worthy of a human. So I said, okay: Patagonia has an ecological imbalance after all the industrial development. Restoring it is a complex problem: how do you introduce new species, and how do you bring it back to its original state? That dinner conversation went on for three days. Various tools were used, and together, as a family, we came up with a plan for what it would look like if you were to do it.

Now imagine those conversations happening on every dining table, if we have the right questions.

Kirthiga Reddy

Amazing. All right, just a quick smattering of questions, and then we’ll have the panel take them. Yes, question.

Audience

Hi, I’m the founder of an AI company. We work with global higher-education institutions. I actually led my life very differently: when I went to IIT, I never studied, and I look back and say I did the best thing. So my question is this: with the disruption we are seeing, IQ is pretty much covered by AI, and the only thing we are left with is EQ. Yet if you look at the education system here, we hold on to things: some exam, some college, then some company, and eventually we grow. So I want to hear from you how you would disrupt at the fundamental levels, K-12 going into higher education, in order to support what is happening.

Disrupting education at the fundamental levels. Hi, I’m Hemendra. I teach AI and sustainability at IIM Udaipur. A quick question: a lot of countries are now banning social media because the harm is obvious, because we didn’t have the guardrails and things came along. AI is only going to amplify the harms, just as it is going to amplify the power. We heard about the power. So, as a follow-up connected to the earlier question: what guardrails do we need to protect our kids from the harm while we give them the power? Thank you so much. This is Anjali. I represent Tech Mahindra.

Throughout my career I have been connecting the left brain with the right brain, the creative with the technical, so AI is very close to my heart as well. Ma’am, the question I have for you: today people are getting overwhelmed and confused. There are so many platforms and so many methods coming along. If somebody has to ideate and strategize what to learn first and what to learn next, how should they go about it? Because so much is happening, and there’s a lot of panic around. Last question: Bina, AI Kiran member, part of ServiceNow, and also part of a women-for-ethical-AI movement. How do we build trust in the internet?

Earlier you had provenance, with people curating the internet. Now people are turning to ChatGPT and trusting those answers over human answers. How do you build provenance? That’s the main question. One last question, sorry. I do grassroots AI training with people, but their digital gap is still very high; I have to go back and teach them tech before I can teach them AI. How do you solve this? And because we’ve had a lot of conversations around the youth and the power of the question: all of you are fermented with wisdom in the application of AI and in the lens you bring. How do you crunch that wisdom cycle for the youth? Because at the end of the day, this is the information age.

There’s so much information coming. Facts, figures, information: you’re brilliant at those. But how do you build the wisdom to infer, to ask the right questions, and to interpret? Can you crunch that wisdom?

Speaker 1

So we’re going to give about 30 seconds to each of the panelists as they close. I think on learning, you just start. There are incredible tools; it’s amazing how quickly the tools and the capabilities to learn in this space have developed, and how fast you can learn. If any of you were paying attention to the series of events, TCS did a hackathon where they brought in women from all over the country, 1,200 women, non-native English speakers from different walks of life, and in four hours they were able to do brilliant things with the training that they received. These are the kinds of things that are possible.

So the potential is massive. And that’s the beauty of what we have in AI, the positive side: anybody can just take it, run with it, and learn. I’m an optimist. You can always find what doesn’t work, but I think if more of us focus on what works, we can do a lot more good a lot faster than what can go wrong. This is the only panel where nobody has asked us about the impact on jobs, so I’ll leave it at that.

Mihir Shukla

I’ll take the aspect of how you teach digital skills and then AI skills. I think there is an opportunity here to skip some of the old digital skills, because they weren’t very friendly. In the new world, we have seen, and I’ll give you a few examples: we trained 700 women in Africa for six weeks, and 500 found a job within a week. We trained people in the Mississippi Delta, a very poor part of the U.S. There was somebody flipping burgers at $12 an hour; six weeks later, they had a $120,000 job in AI. This is a technology that doesn’t require two- or four-year training courses, which the people who need them most normally can’t afford.

So the fact that you can provide this kind of mobility on this technology is arguably the best thing about this technology.

Anurag Hoon

For EQ and the disruption of education: the way to work on EQ is the arts, and that is the disruption. I’m glad AI came. Invest India just shared that the market size for the media and entertainment sector is going to be 3 trillion Indian rupees, because the world has recognized that if you want EQ to be developed, you invest in the arts. Live music classes are increasing drastically, and that’s why I’m here. We can talk more about this.

Radha Basu

For me, this is not a question about the future. 72% of the people who work at iMerit come from very low-income backgrounds, from the hinterland of India. So it is absolutely possible. You start as though they haven’t been skilled before at all. AI is new; there’s no baggage. It’s wonderful. As Mihir said, there’s a whole new way of doing it. They use AI to learn, and what they learn, they teach AI. It’s really interesting to watch. You have young women in a place called Metiabruz who came from tailoring and those kinds of backgrounds. Why are they a center of excellence for computer vision?

Because that focus on single-pixel-level accuracy is something they have, and they learn the other skills. I would finish by saying that we also have a foundation called Anudip, and it is really working across the country: 630,000 young men and women, 50% of them women, have been skilled in AI literacy, meaning just knowing what AI is and how to use it. I know that doesn’t address the IRM part, but I’m talking about the grassroots level. And when they learn how to work with AI, it’s not an enigma. The last thing I would say, and I love my industry colleagues: it’s harder to take somebody from industry and train them in AI, because they have to unlearn first, than to take a young person from a second-tier town or even a village in Odisha and skill them in how to work with AI. So it’s the nexus of AI technology and the human, and I’m such a believer that we can do this here.

Kirthiga Reddy

Awesome. So with that, one, let’s give a huge round of applause for our panelists. Two, if you want to get involved in the AI movement around women, youth, and the differently abled, we have incredible leaders here: Prerna, Neha, Vaibhav. They are truly the heart and soul of making this happen, so let’s hear it for them. Please ask them your questions, and feel free to come grab any one of us if your question didn’t get answered. Thank you. I just want to end by saying: if you can work with people smarter than you, if you can work with people half your age and twice as smart, I think we’ll find all the answers. Thank you so much for giving your time.

Related Resources: Knowledge base sources related to the discussion topics (34)
Factual Notes: Claims verified against the Diplo knowledge base (3)
Confirmed (high confidence)

“AI Kiran community has grown to about 10 000 women who are self‑organising and creating new ventures.”

The knowledge base states that there is an “incredible community of … 10,000 women who are all taking this on their own, rallying it, self-organizing” confirming the reported size and nature of the community [S102].

Additional Context (medium confidence)

“The AI Kiran community is self‑organising and actively launching new ventures such as Dark.ai, a fashion‑AI startup by Komal.”

Beyond confirming the size, the source adds that members are “creating new ventures” and specifically mentions Dark.ai and Komal’s work helping tailors and fashion designers, providing additional detail on community activity [S102].

Confirmed (medium confidence)

“Dark.ai, a fashion‑AI startup founded by Komal, was highlighted as a community‑driven venture.”

The knowledge base explicitly references Dark.ai and Komal’s fashion-AI initiative, confirming its mention in the discussion [S102].

External Sources (109)
S1
Comprehensive Report: “Factories That Think” Panel Discussion — – Mihir Shukla- Thani Ahmed Al Zeyoudi
S2
Inclusive AI Starts with People Not Just Algorithms — – Mihir Shukla- Anurag Hoon – Radha Basu- Mihir Shukla
S3
Inclusive AI Starts with People Not Just Algorithms — Agreed with:Anurag Hoon, Radha Basu — Children need fundamental life skills and emotional intelligence rather than just …
S4
Inclusive AI Starts with People Not Just Algorithms — – Mihir Shukla- Anurag Hoon
S5
Keynote-Martin Schroeter — -Speaker 1: Role/Title: Not specified, Area of expertise: Not specified (appears to be an event moderator or host introd…
S6
Responsible AI for Children Safe Playful and Empowering Learning — -Speaker 1: Role/title not specified – appears to be a student or child participant in educational videos/demonstrations…
S7
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Vijay Shekar Sharma Paytm — -Speaker 1: Role/Title: Not mentioned, Area of expertise: Not mentioned (appears to be an event host or moderator introd…
S8
Inclusive AI Starts with People Not Just Algorithms — – Kirthiga Reddy- Lakshmi Pratury
S9
Inclusive AI Starts with People Not Just Algorithms — – Lakshmi Pratury- Kirthiga Reddy- Radha Basu- Mihir Shukla – Mihir Shukla- Radha Basu – Mihir Shukla- Radha Basu- Spe…
S10
Inclusive AI Starts with People Not Just Algorithms — Speakers:Anurag Hoon, Speaker 1, Radha Basu Speakers:Kirthiga Reddy, Lakshmi Pratury, Radha Basu Speakers:Kirthiga Red…
S11
Inclusive AI Starts with People Not Just Algorithms — – Kirthiga Reddy- Lakshmi Pratury
S12
Inclusive AI Starts with People Not Just Algorithms — Lakshmi Pratury complemented this perspective by drawing parallels to the early internet era of 1993-94, when people que…
S13
WS #280 the DNS Trust Horizon Safeguarding Digital Identity — – **Audience** – Individual from Senegal named Yuv (role/title not specified)
S14
Building the Workforce_ AI for Viksit Bharat 2047 — -Audience- Role/Title: Professor Charu from Indian Institute of Public Administration (one identified audience member), …
S15
Nri Collaborative Session Navigating Global Cyber Threats Via Local Practices — – **Audience** – Dr. Nazar (specific role/title not clearly mentioned)
S16
Scaling AI for Billions_ Building Digital Public Infrastructure — First of all, thank you. I’ll probably add some context to whatever I’ve heard so far. So first of all, my views is any …
S17
https://dig.watch/event/india-ai-impact-summit-2026/inclusive-ai-starts-with-people-not-just-algorithms — 50 -50. And there should be no reason why AI technology cannot be 50 -50. Thank you. Beautiful. Well, that sets the stag…
S18
Skilling and Education in AI — Thank you so much. Thank you for inviting me. I’m a former executive member of NCBT. One minute about NCBT. NCBT is a re…
S19
AI Technology-a source of empowerment in consumer protection | IGF 2023 Open Forum #82 — Our session’s aim is to demonstrate the positive uses of AI policy and technology from an institutional perspective. Las…
S20
Multistakeholder Partnerships for Thriving AI Ecosystems — Well, thank you for mentioning the concrete action because that’s actually what really it is all about. We were coming u…
S21
AI for equality: Bridging the innovation gap — Blair provides a concrete example of how partnerships with governments can amplify impact beyond what individual organiz…
S22
Automation Anywhere introduces advanced AI solutions — The automation industry isevolvingbeyond Robotic Process Automation (RPA) to embrace multi-agentic AI systems capable of…
S23
Smaller Footprint Bigger Impact Building Sustainable AI for the Future — Singh explains that India’s AI strategy deliberately avoids the race for ever-larger models and instead focuses on pract…
S24
Smaller Footprint Bigger Impact Building Sustainable AI for the Future — India focuses on smaller models for specific use cases rather than chasing trillion-parameter models
S25
AUDA-NEPAD White Paper: Regulation and Responsible Adoption of AI in Africa Towards Achievement of AU Agenda 2063 — A well-structured redistribution of unpaid childcare and housework could help strengthen women’s participation in the di…
S26
For the record: AI, creativity, and the future of music — ## Audience Engagement ## Ongoing Challenges ## Opening Context and Personal Perspectives ### Copyright Protection as…
S27
Building Inclusive Societies with AI — Discussion point:Startup Ecosystem and Innovation
S28
AI and Global Power Dynamics: A Comprehensive Analysis of Economic Transformation and Geopolitical Implications — – Kristalina Georgieva – Kristalina Georgieva- Khalid Al-Falih Economic | Development | Infrastructure Five layers id…
S30
The Innovation Beneath AI: The US-India Partnership powering the AI Era — Evidence:The participant mentions ‘closing the gap between commitments and capacity’ and notes that ‘this is where the r…
S31
Driving Indias AI Future Growth Innovation and Impact — Dr. Vivek Mohindra from Dell Technologies presented a comprehensive AI blueprint built upon three foundational pillars d…
S32
Future-Ready Education: Enhancing Accessibility & Building | IGF 2023 — In the analysis, the speakers highlight the importance of future education being skills-oriented to prepare students for…
S33
AI Transformation in Practice_ Insights from India’s Consulting Leaders — Capacity development | Artificial intelligence Talent development, education and future skills
S34
Upskilling for the AI era: Education’s next revolution — ## Overview and Context Doreen Bogdan Martin: Good afternoon, ladies and gentlemen. Yesterday morning on this very stag…
S35
World Economic Forum Town Hall on AI Ethics and Trust — Both speakers emphasize that risk assessment and mitigation must precede trust discussions. Botsman argues society jumpe…
S36
Shaping AI’s Story Trust Responsibility & Real-World Outcomes — “as we deploy AI through a platform -centric approach where you’ve built the necessary guardrails, I think that those ri…
S37
Open Forum #33 Building an International AI Cooperation Ecosystem — Children rights | Privacy and data protection | Online education Ethical Considerations and Inclusivity
S38
Leaders’ Plenary | Global Vision for AI Impact and Governance Morning Session Part 2 — The UN High Commissioner for Human Rights argues that AI systems should advance human rights by design, requiring alloca…
S39
Open Forum #54 Advancing Lesothos Digital Transformation Policies — Both speakers identified digital skills development as a fundamental challenge requiring targeted interventions. They ag…
S40
WS #65 Gender Prioritization through Responsible Digital Governance — Capacity building and skills training programs are effective in promoting digital inclusion for women. These programs ca…
S41
Contents — Beyond school and university-level education, a range of opportunities are currently available to workers looking to ite…
S42
Bridging the Digital Skills Gap: Strategies for Reskilling and Upskilling in a Changing World — Development | Infrastructure | Economic Development | Infrastructure | Sociocultural We need to have national digital …
S43
Shaping the Future AI Strategies for Jobs and Economic Development — The discussion reveals moderate disagreements primarily around implementation approaches rather than fundamental goals. …
S44
How AI Drives Innovation and Economic Growth — Appropriate technology solutions for developing countries Zutt advocates for a focus on ‘small AI’ rather than large-sc…
S45
How AI Drives Innovation and Economic Growth — Summary:The speakers show broad agreement on AI’s transformative potential for development but significant disagreements…
S46
Panel Discussion: 01 — Minister Lawson offered a nuanced perspective, noting that whilst Africa represents less than 1% of global AI talent and…
S47
Inclusive AI Starts with People Not Just Algorithms — Evidence:He explains his approach with his 8-year-old son: ‘I feel the five senses and then nine emotions Navras and Ind…
S48
Inclusive AI Starts with People Not Just Algorithms — A significant portion of the discussion addressed preparing the next generation for an AI-driven world. Panelists stress…
S49
NextGen AI Skills Safety and Social Value – technical mastery aligned with ethical standards — Summary:Dr. Sarabjot emphasizes critical thinking, questioning AI outputs, and understanding AI limitations as the prima…
S50
WSIS Action Line Facilitators Meeting: 20-Year Progress Report — UNESCO is providing policy guidance on AI in education, focusing on frameworks that emphasize ethical use of AI in educa…
S51
AI (and) education: Convergences between Chinese and European pedagogical practices — This observation deepened the conversation by introducing the concept that educational struggle has intrinsic value. It …
S52
The National Education Association approves AI policy to guide educators — The US National Education Association (NEA) Representative Assembly (RA) delegates have approved the NEA’s first policy st…
S53
AI adoption reshapes UK scale-up hiring policy framework — AI adoption is prompting UK scale-ups to recalibrate workforce policies. Survey data indicates that 33% of founders antici…
S54
AI-generated content and IP rights: Challenges and policy considerations — Ownership of IP rights for AI-generated content is a complex issue. Traditional IP laws typically attribute inventorship…
S55
What is it about AI that we need to regulate? — A key distinction emerged around technical versus broader governance issues. In Workshop 344 on WSIS+20 Technical Layer, …
S56
The Battle for Chips — Additionally, the need for careful and strategic actions is emphasized. Overall, the analysis provides insights into the…
S57
State of Play: Chips / DAVOS 2025 — 4. Balance Between Mature and Advanced Chips | 9. Access to Advanced Manufacturing Capacity | Amandeep Singh Gill: but al…
S58
Biology as Consumer Technology — However, it is important to acknowledge that there are risks and downsides associated with AI. The scarcity of technolog…
S59
Searching for Standards: The Global Competition to Govern AI | IGF 2023 — Audience: Thank you so much, Dr. Ali Mahmood. I’m from Pakistan. I’m heading a provincial government entity that is invol…
S60
AI Algorithms and the Future of Global Diplomacy — Krishnakumar advocates for middle powers to focus on solving problems through applications rather than competing in expe…
S61
Technology Regulation and AI Governance Panel Discussion — Competition policy should prioritize enabling market entry rather than punishing market dominance, with government avoid…
S62
Agentic AI in Focus Opportunities Risks and Governance — Arguments: Policy should focus on preventing harm to humans, emphasizing ‘humans before models’ Practical standards and o…
S63
Inclusive AI Starts with People Not Just Algorithms — 1339 words | 176 words per minute | Duration: 455 seconds | …50-50. And there should be no reason why AI technology cann…
S64
Inclusive AI Starts with People Not Just Algorithms — The AI Kiran initiative exemplifies this proactive approach to inclusion through a powerful demonstration of how human a…
S65
Building Inclusive Societies with AI — Discussion point: Startup Ecosystem and Innovation
S66
The Future of Innovation and Entrepreneurship in the AI Era: A World Economic Forum Panel Discussion — Salman bin Khalifa Al Khalifa This advice was particularly powerful because it directly addressed the paralysis that ca…
S67
AI and Global Power Dynamics: A Comprehensive Analysis of Economic Transformation and Geopolitical Implications — Kristalina Georgieva, Khalid Al-Falih | Economic | Development | Infrastructure | Five layers id…
S68
The Innovation Beneath AI: The US-India Partnership powering the AI Era — Evidence:She details Google’s expo featuring AI for education, healthcare, and agriculture, plus the physical infrastruc…
S69
Discussion Report: AI as Foundational Infrastructure – A Conversation Between Laurence Fink and Satya Nadella — Analogy to electricity grids, need for token factories deployed everywhere, investments in chips and infrastructure Inf…
S70
A Framework for Developing a National Artificial Intelligence Strategy Centre for Fourth Industrial Revolution — Any national response to AI technology development must anticipate its potential impact on the current workforce, future…
S71
India’s AI Future Sovereign Infrastructure and Innovation at Scale — Summary: The discussion revealed surprisingly few fundamental disagreements among speakers, with most differences being c…
S72
Upskilling for the AI era: Education’s next revolution — Overview and Context | Doreen Bogdan-Martin: Good afternoon, ladies and gentlemen. Yesterday morning on this very stag…
S73
AI (and) education: Convergences between Chinese and European pedagogical practices — Norman Sze: Thank you for introduction. It’s my honor to join this forum and share insight from perspective of professio…
S74
Future-Ready Education: Enhancing Accessibility & Building | IGF 2023 — By doing so, an inclusive, equitable, and technologically proficient education system can be fostered, preparing student…
S75
AI Transformation in Practice_ Insights from India’s Consulting Leaders — Capacity development | Artificial intelligence | Talent development, education and future skills
S76
World Economic Forum Town Hall on AI Ethics and Trust — Both speakers emphasize that risk assessment and mitigation must precede trust discussions. Botsman argues society jumpe…
S77
Safeguarding Children with Responsible AI — Balancing individual agency development with necessary safety guardrails
S78
Pre 10: Regulation of Autonomous Weapon Systems: Navigating the Legal and Ethical Imperative — Aloisia Wörgette: Thank you. Yes, that works. Thank you, Professor Kleinwächter. Dear colleagues, ladies and gentlemen, …
S79
Leaders’ Plenary | Global Vision for AI Impact and Governance Morning Session Part 2 — The UN High Commissioner for Human Rights argues that AI systems should advance human rights by design, requiring alloca…
S80
Open Forum #64 Women in Games and Apps: Innovation, Creativity and IP — The tone was largely inspirational and optimistic, with speakers sharing personal success stories and emphasizing the pr…
S81
Friday Opening Ceremony: Summit of the Future Action Days — The overall tone was inspirational, hopeful and energetic. Speakers aimed to motivate and empower youth attendees while …
S82
IN CONVERSATION WITH BIRAME SOCK — The tone of the discussion was largely inspirational and encouraging. It began with a focus on Birame’s personal journey…
S83
Friday Closing Ceremony: Summit of the Future Action Days — The tone was energetic, passionate and optimistic throughout. Speakers conveyed a sense of urgency about the need for yo…
S84
AI as critical infrastructure for continuity in public services — The discussion maintained a collaborative and constructive tone throughout, with participants building on each other’s p…
S85
Building Population-Scale Digital Public Infrastructure for AI — The tone is optimistic and collaborative throughout, with speakers sharing concrete examples of successful implementatio…
S86
Transforming Health Systems with AI From Lab to Last Mile — The discussion maintained a cautiously optimistic and collaborative tone throughout. It began with enthusiasm about AI’s…
S87
AI, Data Governance, and Innovation for Development — The overall tone was optimistic and solution-oriented, with speakers focusing on practical ways to overcome obstacles th…
S88
AI and Data Driving India’s Energy Transformation for Climate Solutions — The tone was collaborative and solution-oriented throughout, with speakers building on each other’s insights rather than…
S89
Strengthening Corporate Accountability on Inclusive, Trustworthy, and Rights-based Approach to Ethical Digital Transformation — The discussion maintained a professional, collaborative tone throughout, with speakers demonstrating expertise while ack…
S90
Multigenerational Collaboration: Rethinking Work, Learning and Inclusion in the Digital Age — The discussion maintained a professional yet urgent tone throughout, with speakers expressing both optimism about collab…
S91
Emerging Markets: Resilience, Innovation, and the Future of Global Development — The tone was notably optimistic and forward-looking throughout the conversation. Panelists consistently emphasized oppor…
S92
AI for Safer Workplaces & Smarter Industries Transforming Risk into Real-Time Intelligence — The discussion maintained an optimistic and collaborative tone throughout, with speakers consistently emphasizing human …
S93
AI: Lifting All Boats / DAVOS 2025 — The tone was largely optimistic and solution-oriented, with speakers acknowledging challenges but focusing on opportunit…
S94
(Interactive Dialogue 4) Summit of the Future – General Assembly, 79th session — The overall tone was one of urgency and determination. Many speakers emphasized that “the future starts now” and stresse…
S95
WS #302 Upgrading Digital Governance at the Local Level — The discussion maintained a consistently professional and collaborative tone throughout. It began with formal introducti…
S96
Closing Ceremony — The discussion maintains a consistently positive and collaborative tone throughout, characterized by gratitude, celebrat…
S97
Announcement of New Delhi Frontier AI Commitments — Opening remarks and framing of the event
S98
Harnessing Collective AI for India’s Social and Economic Development — Moderator: sci-fi movies that we grew up watching and what it primarily also reminds me of is in speci…
S99
From India to the Global South_ Advancing Social Impact with AI — This comment captures the explosive energy and scale of India’s grassroots innovation movement. The specific timeframe (…
S100
From India to the Global South_ Advancing Social Impact with AI — Thank you very much. Thank you. I’d like to welcome everyone to t…
S101
AI Collaboration Across Borders_ India–Israel Innovation Roundtable — The unexpected speed of formal government collaboration, with agreements being reached within six months of initial disc…
S102
https://app.faicon.ai/ai-impact-summit-2026/inclusive-ai-starts-with-people-not-just-algorithms — Now it’s an incredible community of, you know, 10,000 women who are all taking this on their own, rallying it, self-or…
S103
AI 2.0 The Future of Learning in India — Thank you, sir, thank you so much for giving me the opportunity. I would like to ask a few of the… I’m seeing a lot of student…
S104
Cyberspace Needs You: Attracting Women to Cybersecurity Careers — Flexibility: flexible working arrangements are important to make women feel valued and that they belong. Forums and plat…
S105
Empowering Women Entrepreneurs through Digital Trade and Training ( Global Innovation Forum) — Dami, the founder of Shutlers, emphasises the daily challenges faced as an entrepreneur, highlighting the necessity to f…
S106
Digital democracy and future realities | IGF 2023 WS #476 — Nima Iyer: Thank you so much, Anna-Christina. Thank you for breaking that down, I think that was really helpful. All righ…
S107
Main Session | Dynamic Coalitions — Irina Soeffky: It’s a pleasure to first introduce June Paris, who unfortunately cannot be here today, but luckily she’s …
S108
Hype Cycles and Start-ups — May Habib, the founder of a generative AI company, acknowledges the challenges of competing with established giants such…
S109
Welfare for All Ensuring Equitable AI in the Worlds Democracies — Amit argues that India has evolved from being merely a cost-effective service provider to becoming a center for AI innov…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
K
Kirthiga Reddy
3 arguments | 176 words per minute | 1339 words | 455 seconds
Argument 1
Start over in AI despite fear
EXPLANATION
Kirthiga encourages people to abandon fear and restart their careers in AI, emphasizing that taking risks and starting anew can lead to greater future impact.
EVIDENCE
She asks the audience to consider “what would you do if you weren’t afraid?” and shares a recent conversation where she advised someone to “start over all over again” when moving to AI, highlighting the importance of taking bold steps despite uncertainty [12-18].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The panel notes AI’s dual emotions of hope and fear, highlighting the fear construct in AI discussions, which aligns with this argument [S16].
MAJOR DISCUSSION POINT
Embracing risk and career reinvention in AI
AGREED WITH
Lakshmi Pratury
Argument 2
Growth from 250 to 10,000 women, self‑organizing network
EXPLANATION
Kirthiga describes how the AI Kiran community expanded from a modest list of 250 women identified by ChatGPT to a vibrant network of 10,000 self‑organizing members.
EVIDENCE
She notes that an initial ChatGPT query returned only ten women, prompting them to launch with 250 named women, and now the community has grown to 10,000 women who are rallying and self-organizing [58-64].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The AI Kiran community’s expansion from an initial 250 women to a self-organizing network of 10,000 members is documented in the source material [S3].
MAJOR DISCUSSION POINT
Scaling a women‑focused AI community
AGREED WITH
Mihir Shukla, Radha Basu, Anurag Hoon
Argument 3
Dark.ai enabling tailors and fashion designers with AI
EXPLANATION
Kirthiga highlights Dark.ai as an example of AI applications that empower small‑scale artisans such as tailors and fashion designers.
EVIDENCE
She references the recent work of Dark.ai, which helps tailors and fashion designers leverage AI in their craft [65-66].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Dark.ai’s work empowering tailors and fashion designers with AI tools is specifically highlighted in the external source [S3].
MAJOR DISCUSSION POINT
AI for grassroots industry empowerment
AGREED WITH
Mihir Shukla, Radha Basu
L
Lakshmi Pratury
2 arguments | 172 words per minute | 903 words | 314 seconds
Argument 1
Reinvention as a path to scaling human potential
EXPLANATION
Lakshmi argues that continual personal reinvention—moving across roles in technology, venture capital, philanthropy, and storytelling—drives the scaling of human potential.
EVIDENCE
She recounts her career trajectory from Intel to venture capital to philanthropy, and explains that for the past 15 years she has focused on finding and showcasing untold talent, bringing TED to India, and building platforms for storytellers [37-44].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Lakshmi’s 15-year journey of continual reinvention to showcase hidden talent is described in the source [S2].
MAJOR DISCUSSION POINT
Personal reinvention for broader impact
AGREED WITH
Kirthiga Reddy
Argument 2
Platform to surface untold talent across India and globally
EXPLANATION
Lakshmi describes AI Kiran’s mission to discover and amplify the stories of innovators who are not widely known, creating a platform that connects and showcases this hidden talent.
EVIDENCE
She explains that her work over the last 15 years has been about finding amazing people, telling their stories, and teaching others to do the same, positioning AI Kiran as a conduit for untold talent [38-44].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The creation of a platform to discover and amplify untold innovators across India and worldwide is outlined in [S2].
MAJOR DISCUSSION POINT
Creating visibility for under‑represented innovators
M
Mihir Shukla
4 arguments | 155 words per minute | 1199 words | 463 seconds
Argument 1
Partnership to train a million women and youth on AI
EXPLANATION
Mihir announces a collaboration with AI Kiran to educate a massive cohort of women and young people in AI and automation over the next five years.
EVIDENCE
He states that the partnership will train a million women and youth on AI and automation within five years, underscoring a large-scale skill-building effort [295-296].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Multistakeholder partnership models for large-scale AI training are discussed in [S20]; AI-for-equality partnership examples are provided in [S21]; rapid upskilling of 1,200 women in short-duration programs is reported in [S3].
MAJOR DISCUSSION POINT
Massive AI education partnership
AGREED WITH
Kirthiga Reddy, Radha Basu
Argument 2
Automation Anywhere’s billion‑scale digital workers transforming enterprises
EXPLANATION
Mihir highlights the scale of Automation Anywhere’s digital workforce, noting its rapid growth and impact on enterprise productivity worldwide.
EVIDENCE
He reports that Automation Anywhere currently powers nearly half a billion digital workers, expects to reach a billion soon, with a 1:20 human-to-digital worker ratio across 90 countries and industries [188-194].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Automation Anywhere’s current half-billion digital workers and its trajectory toward a billion are detailed in [S22]; the panel summary referencing Mihir Shukla also notes this scale [S1].
MAJOR DISCUSSION POINT
Digital worker proliferation in the enterprise
Argument 3
Focus on applied AI rather than chasing model races
EXPLANATION
Mihir advises that India should prioritize applying AI to solve real problems instead of competing in a global race to develop large foundational models.
EVIDENCE
He argues that India’s strength lies in applying AI across industrial hubs, such as automotive AI, to create global competitiveness, rather than chasing model development for its own sake [284-288].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
India’s strategic emphasis on applied, smaller-footprint AI models over large foundational models is articulated in [S23] and reinforced in [S24].
MAJOR DISCUSSION POINT
Strategic emphasis on applied AI
AGREED WITH
Radha Basu, Kirthiga Reddy
DISAGREED WITH
Radha Basu
Argument 4
Rapid upskilling of women in Africa and the U.S. leading to high‑pay AI jobs
EXPLANATION
Mihir shares success stories of short‑duration training programs that quickly placed women in well‑paid AI positions, demonstrating AI’s potential for rapid economic mobility.
EVIDENCE
He cites training 700 women in Africa (six weeks, 500 employed within a week) and a similar program in the U.S. Mississippi Delta that led to a $120,000 AI job after six weeks of training [433-436].
MAJOR DISCUSSION POINT
Fast‑track AI skill development for underserved women
AGREED WITH
Kirthiga Reddy, Radha Basu, Anurag Hoon
DISAGREED WITH
Anurag Hoon, Speaker 1
A
Anurag Hoon
3 arguments | 142 words per minute | 686 words | 288 seconds
Argument 1
Fellows program and mobile music school teaching IP, AI, and creativity
EXPLANATION
Anurag describes his mobile music school initiative, which combines artistic training with lessons on intellectual property and AI, aiming to foster creativity among youth.
EVIDENCE
He explains that his program teaches children to write, compose, and perform songs while emphasizing intellectual property rights, noting that AI-generated songs currently do not earn royalties, so they encourage original creation [235-244]; the royalty question and AI’s limitations are discussed further [245-248].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Anurag’s mobile music school that integrates IP, AI and creativity is mentioned in [S2]; broader discussion of AI’s role in music and creativity appears in [S26].
MAJOR DISCUSSION POINT
Integrating arts, IP awareness, and AI education
Argument 2
Teaching the five senses and nine emotions as “heart intelligence”
EXPLANATION
Anurag proposes that nurturing ‘heart intelligence’—the five senses and nine emotions—should be a core part of parenting and education alongside AI adoption.
EVIDENCE
He shares his personal parenting approach of ensuring his son experiences the five senses and nine emotions, using these as a framework for holistic development [310-314].
MAJOR DISCUSSION POINT
Holistic emotional development alongside technology
DISAGREED WITH
Mihir Shukla, Speaker 1
Argument 3
Developing EQ through arts and music as a disruption to education
EXPLANATION
Anurag argues that arts and music are essential for cultivating emotional intelligence (EQ), positioning this as a disruptive force in traditional education systems.
EVIDENCE
He notes that investing in arts and music is recognized as a way to develop EQ, referencing a market size of 3 trillion Indian rupees for the media and entertainment sector and the growing demand for live music classes [438-441].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The disruptive potential of arts-driven EQ development is highlighted in remarks about EQ and arts in [S3]; additional context on AI and music creativity is provided in [S26].
MAJOR DISCUSSION POINT
Arts‑driven EQ development as educational disruption
R
Radha Basu
5 arguments | 139 words per minute | 2565 words | 1099 seconds
Argument 1
Goal of 50% gender parity in AI companies
EXPLANATION
Radha emphasizes the importance of achieving gender balance in AI organizations, citing current statistics that already approach parity.
EVIDENCE
She states that her company has 53% women and asserts that AI technology should be 50-50, urging leaders to look at the world and aim for gender parity [175-179], and reinforces the point with a concluding remark that 50-50 is achievable [180-181].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The 50-50 gender parity aspiration is explicitly stated in [S3]; supporting gender-participation data for women in AI initiatives is presented in [S25].
MAJOR DISCUSSION POINT
Gender parity in AI workforce
AGREED WITH
Kirthiga Reddy, Mihir Shukla
Argument 2
AI centers of excellence across Indian cities targeting specific domains
EXPLANATION
Radha outlines the establishment of multiple AI centers of excellence in tier‑2 Indian cities, each focusing on a distinct application area such as autonomous mobility, healthcare, automotive, and generative AI.
EVIDENCE
She lists centers in Calcutta, Vizag, Coimbatore, Hubli, and Shillong, describing each center’s domain focus (autonomous mobility and robotics, healthcare AI, automotive AI, and generative AI) and highlighting the geographic spread and specialization [138-149].
MAJOR DISCUSSION POINT
Regional AI hubs for domain‑specific innovation
Argument 3
Precision agriculture, breast‑cancer screening, and other high‑impact use cases
EXPLANATION
Radha provides concrete examples of AI applications that deliver societal benefits, including precision agriculture with John Deere and AI‑assisted breast‑cancer screening.
EVIDENCE
She explains how small, fine-tuned models are applied to precision agriculture (e.g., detecting crop failure with John Deere) and to breast-cancer screening across diverse populations, illustrating AI’s real-world impact [276-283].
MAJOR DISCUSSION POINT
High‑impact AI applications in health and agriculture
AGREED WITH
Mihir Shukla, Kirthiga Reddy
Argument 4
Three‑part investment triangle: technology, infrastructure, human intelligence
EXPLANATION
Radha proposes that sustainable AI growth requires balanced investment across three pillars: the technology/models, the supporting infrastructure, and human expertise.
EVIDENCE
She describes the investment triangle, stating that technology, infrastructure, and human intelligence together form the nexus that scales AI, and stresses the need for all three components [259-265].
MAJOR DISCUSSION POINT
Holistic AI investment framework
DISAGREED WITH
Mihir Shukla
Argument 5
iMerit’s AI literacy initiative reaching 630k youth, 50% women
EXPLANATION
Radha highlights iMerit’s partnership with the Anudip foundation to deliver AI literacy to a large, gender‑balanced youth audience, demonstrating grassroots capacity building.
EVIDENCE
She mentions that the Anudip foundation has trained 630,000 young men and women in AI literacy, 50% of whom are women, emphasizing the scale and inclusivity of the effort [454-456].
MAJOR DISCUSSION POINT
Large‑scale AI literacy for youth and women
AGREED WITH
Kirthiga Reddy, Mihir Shukla, Anurag Hoon
S
Speaker 1
3 arguments | 170 words per minute | 987 words | 346 seconds
Argument 1
Need to teach resilience, curiosity, and critical thinking to future generations
EXPLANATION
Speaker 1 stresses that children must be equipped with resilience, the ability to fail, curiosity, and critical thinking to thrive in an AI‑driven future.
EVIDENCE
He lists specific traits (resilience, learning to fail, understanding life’s unfairness, and thriving despite challenges) as essential skills for the next generation [316-324].
MAJOR DISCUSSION POINT
Essential soft skills for AI era youth
AGREED WITH
Audience, Anurag Hoon
DISAGREED WITH
Mihir Shukla, Anurag Hoon
Argument 2
Advanced chips are not a limiting factor if resources are used creatively
EXPLANATION
Speaker 1 argues that access to advanced compute chips will not hinder AI progress, provided organizations are inventive with existing resources.
EVIDENCE
He states that advanced chips are not a limiting factor for fast movers, emphasizing creativity with available resources and noting rapid building to meet demand [252-255].
MAJOR DISCUSSION POINT
Resourceful compute strategy
DISAGREED WITH
Kirthiga Reddy
Argument 3
Resilience and the ability to fail as essential skills for kids
EXPLANATION
Reiterating his earlier point, Speaker 1 underscores that teaching resilience and embracing failure are crucial for children to navigate rapid technological change.
EVIDENCE
He repeats the importance of resilience, learning to fail, and surviving unfair circumstances as foundational traits for future success [316-324].
MAJOR DISCUSSION POINT
Resilience as a core competency
A
Audience
5 arguments | 164 words per minute | 825 words | 300 seconds
Argument 1
What should parents teach kids for the next 15 years in an AI‑automated world?
EXPLANATION
An audience member asks what knowledge and skills parents should impart to children to prepare them for a fully automated, AI‑rich future over the next fifteen years.
EVIDENCE
The question is posed directly, seeking guidance on parental teaching priorities for the AI age [307-311].
MAJOR DISCUSSION POINT
Parental guidance for AI‑era education
AGREED WITH
Speaker 1, Anurag Hoon
Argument 2
How to disrupt K‑12 and higher education to support AI and EQ development?
EXPLANATION
The audience inquires about strategies to transform primary, secondary, and tertiary education systems to better foster AI competencies and emotional intelligence.
EVIDENCE
The question references the need to overhaul K-12 and higher education to support AI and EQ, highlighting systemic disruption [374-379].
MAJOR DISCUSSION POINT
Educational system reform for AI and EQ
Argument 3
How to protect children from AI‑amplified harms while empowering them?
EXPLANATION
An audience member raises concerns about safeguarding children from potential negative impacts of AI while still providing them with empowering opportunities.
EVIDENCE
The question mentions bans on multimedia, the risk of AI amplifying harms, and asks for guardrails to protect younger generations [382-386].
MAJOR DISCUSSION POINT
Child safety and AI guardrails
Argument 4
How to build trust and provenance in an AI‑driven internet?
EXPLANATION
The audience asks how to re‑establish trust and provenance online when users increasingly rely on AI‑generated answers over human‑curated information.
EVIDENCE
The question references the shift from human-curated provenance to reliance on ChatGPT and seeks ways to rebuild trust [397-401].
MAJOR DISCUSSION POINT
Establishing provenance in AI‑mediated information
Argument 5
How to bridge the digital gap before teaching AI at grassroots level?
EXPLANATION
The audience seeks solutions for addressing the digital divide that hinders basic technology adoption, which must be resolved before AI education can be effective.
EVIDENCE
The question highlights the challenge of high digital gaps that require teaching basic tech skills before AI concepts can be introduced [402-405].
MAJOR DISCUSSION POINT
Closing the digital divide prior to AI training
Agreements
Agreement Points
Embracing risk and personal reinvention as a catalyst for scaling AI potential
Speakers: Kirthiga Reddy, Lakshmi Pratury
Start over in AI despite fear | Reinvention as a path to scaling human potential
Both speakers argue that overcoming fear and continuously reinventing oneself are essential to scale impact in AI. Kirthiga advises people to “start over all over again” when moving to AI, emphasizing risk-taking [12-18]. Lakshmi describes her career as a series of reinventions that enable scaling of human potential [30-33].
Gender parity and women empowerment in AI
Speakers: Kirthiga Reddy, Radha Basu, Mihir Shukla
Growth from 250 to 10,000 women, self‑organizing network | Goal of 50% gender parity in AI companies | Partnership to train a million women and youth on AI
All three emphasize the need to increase women’s participation in AI. Kirthiga notes the community grew from a handful to 10,000 women [58-64]. Radha reports 53% women in her company and calls for 50-50 parity [175-179][180-181]. Mihir announces a partnership with AI Kiran to train a million women and youth over five years [295-296].
POLICY CONTEXT (KNOWLEDGE BASE)
Aligns with digital inclusion policies emphasizing gender-focused capacity building, as highlighted in Open Forum #54 and WS #65 which call for targeted skills programs for women in AI [S39][S40].
Large‑scale capacity development and upskilling of women and youth
Speakers: Kirthiga Reddy, Mihir Shukla, Radha Basu, Anurag Hoon
Growth from 250 to 10,000 women, self‑organizing network | Rapid upskilling of women in Africa and the U.S. leading to high‑pay AI jobs | iMerit’s AI literacy initiative reaching 630k youth, 50% women | Fellows program and mobile music school teaching IP, AI, creativity
There is a shared view that scaling training programmes for women and youth is crucial. Kirthiga describes the rapid expansion of the AI Kiran community [58-64]. Mihir shares success stories of six-week training programmes that placed women in well-paid AI roles [433-436]. Radha highlights a partnership that has delivered AI literacy to 630,000 young people, half of whom are women [454-456]. Anurag mentions a Fellows programme that has already supported over 250 participants [222-224].
POLICY CONTEXT (KNOWLEDGE BASE)
Reflects calls for national digital literacy missions and lifelong learning investments targeting women and youth, noted in reports on digital skills gaps and reskilling strategies [S41][S42][S39].
Prioritising applied AI and concrete societal use cases over model‑centric competition
Speakers: Mihir Shukla, Radha Basu, Kirthiga Reddy
Focus on applied AI rather than chasing model races | Precision agriculture, breast‑cancer screening, and other high‑impact use cases | Dark.ai enabling tailors and fashion designers with AI
All three stress that AI should be directed toward real-world problems. Mihir argues India should focus on applying AI in industry hubs instead of chasing large model races [284-288]. Radha provides examples such as precision agriculture with John Deere and AI-assisted breast-cancer screening [276-283]. Kirthiga cites Dark.ai as an example of AI empowering small-scale artisans [65-66].
POLICY CONTEXT (KNOWLEDGE BASE)
Mirrors the ‘small AI’ approach advocated for developing contexts and the emphasis on application-driven AI over frontier model races discussed at policy forums [S44][S60][S43].
Importance of resilience, curiosity and emotional skills for the next generation
Speakers: Speaker 1, Audience, Anurag Hoon
Need to teach resilience, curiosity, and critical thinking to future generations | What should parents teach kids for the next 15 years in an AI‑automated world? | Teaching the five senses and nine emotions as ‘heart intelligence’
Beyond technical expertise, speakers agree that emotional and soft skills are vital. Speaker 1 lists resilience, the ability to fail, curiosity and critical thinking as essential traits [316-324]. An audience member asks what parents should teach children to thrive in an AI-driven future [307-311]. Anurag proposes nurturing ‘heart intelligence’ through the five senses and nine emotions as a foundational approach [310-314].
POLICY CONTEXT (KNOWLEDGE BASE)
Consistent with UNESCO and panel recommendations to embed resilience, curiosity, and emotional intelligence in AI education curricula [S48][S50][S51].
Similar Viewpoints
Both highlight partnership‑driven scaling of AI education for women and youth. Kirthiga describes the community’s rapid expansion as a result of collective effort [58-64], while Mihir announces a formal partnership aimed at training a million women and youth over five years [295-296].
Speakers: Kirthiga Reddy, Mihir Shukla
Growth from 250 to 10,000 women, self‑organizing network | Partnership to train a million women and youth on AI
Both see storytelling and creative expression as vehicles for empowerment and for communicating values. Lakshmi speaks about building a platform that discovers and tells the stories of hidden innovators [38-44], while Anurag runs a mobile music school that teaches children to create original songs and understand intellectual property, integrating AI concepts [235-244].
Speakers: Lakshmi Pratury, Anurag Hoon
Platform to surface untold talent across India and globally · Fellows program and mobile music school teaching IP, AI, creativity
Unexpected Consensus
Emphasis on emotional intelligence (EQ) and the arts as essential components of AI education
Speakers: Audience, Anurag Hoon
How to disrupt K‑12 and higher education to support AI and EQ development? · Developing EQ through arts and music as a disruption to education
While most panelists focused on technical AI deployment, both the audience member and Anurag converged on the need to embed EQ and artistic learning within the education system. The audience explicitly asks how to redesign K-12 and higher education to foster both AI skills and EQ [374-379], and Anurag argues that arts and music are the primary drivers for EQ development, citing a 3-trillion-rupee market for media and entertainment [438-441]. This alignment of a technical AI forum with a strong arts-centric EQ perspective was not anticipated.
POLICY CONTEXT (KNOWLEDGE BASE)
Supported by evidence that arts-based and EQ-focused pedagogy are integral to human-centric AI learning frameworks, as discussed in inclusive AI sessions and educational policy briefs [S48][S51].
Overall Assessment

The panel displayed strong consensus on three pillars: (1) gender inclusion and the scaling of women‑focused AI communities; (2) large‑scale capacity building for women and youth through training, fellows programmes and community networks; (3) directing AI toward concrete societal applications rather than pure model competition. There was also a clear, cross‑cutting agreement that soft skills—resilience, curiosity, and emotional intelligence—must accompany technical training. These convergences suggest a unified vision that AI development in India should be inclusive, application‑driven, and human‑centred.

High consensus on inclusion and capacity development, moderate consensus on applied‑AI focus, and emerging consensus on the importance of EQ and soft skills, indicating a cohesive but evolving agenda for scaling human potential through AI.

Differences
Different Viewpoints
Strategic focus of AI development – applied AI for sectoral impact vs. building large‑scale models, infrastructure and human expertise
Speakers: Mihir Shukla, Radha Basu
Focus on applied AI rather than chasing model races · Three‑part investment triangle: technology, infrastructure, human intelligence
Mihir argues that India should prioritize applying AI to solve real problems in industrial hubs and avoid a costly race to develop large foundational models, emphasizing applied AI as the main investment priority [284-288]. Radha, by contrast, stresses a balanced investment across technology/models, supporting infrastructure and human expertise, presenting a triangle model that underpins AI scaling and suggests building AI centers of excellence and developing models as part of the strategy [259-265]. The two positions share the goal of advancing AI in India but diverge on whether the primary focus should be on applied sectoral solutions or on developing the broader technology-infrastructure-human ecosystem.
POLICY CONTEXT (KNOWLEDGE BASE)
Reflects the tension between application-first strategies and large-model investments highlighted in debates on AI implementation and competition policy [S44][S60][S43].
Approaches to youth and women capacity building – short intensive up‑skilling programs vs. holistic, arts‑based and emotional development
Speakers: Mihir Shukla, Anurag Hoon, Speaker 1
Rapid upskilling of women in Africa and the U.S. leading to high‑pay AI jobs · Teaching the five senses and nine emotions as “heart intelligence” · Need to teach resilience, curiosity, and critical thinking to future generations
Mihir highlights fast-track training programmes that can place women in well-paid AI roles within weeks, focusing on technical skill acquisition [433-436]. Anurag proposes a different model centred on ‘heart intelligence’, teaching children five senses and nine emotions alongside creativity and IP awareness, arguing for a more holistic, arts-driven education [310-314]. Speaker 1 stresses the importance of resilience, the ability to fail, curiosity and critical thinking as essential soft skills for the AI era [316-324]. All three agree on the need to prepare the next generation but disagree on the balance between rapid technical up-skilling and broader emotional or resilience-focused development.
POLICY CONTEXT (KNOWLEDGE BASE)
Mirrors divergent views in capacity-building literature, with some reports advocating rapid technical upskilling and others emphasizing holistic, arts-infused curricula for inclusive digital empowerment [S39][S40][S42][S48].
Perceived importance of advanced compute resources – creative use of existing chips vs. need for advanced chip access
Speakers: Speaker 1, Kirthiga Reddy
Advanced chips are not a limiting factor if resources are used creatively · Implicit emphasis on partnerships and scaling AI community (e.g., AI Kiran growth) suggesting need for broader infrastructure
Speaker 1 claims that lack of advanced chips will not hinder fast-moving AI actors if they are creative with existing resources, downplaying compute as a bottleneck [252-255]. Kirthiga, while not stating it directly, emphasizes scaling the AI Kiran community and mentions partnerships and announcements that imply the importance of building infrastructure and access to technology for growth [11][64], suggesting a more pronounced need for advanced compute resources. This reflects a subtle disagreement on how critical advanced hardware is for AI progress.
POLICY CONTEXT (KNOWLEDGE BASE)
Aligns with ongoing policy discussions on chip scarcity, the battle for advanced manufacturing, and strategies to maximize existing hardware versus securing next-generation chips [S56][S57][S58].
Unexpected Differences
Emphasis on ‘heart intelligence’ (five senses and nine emotions) versus conventional AI skill‑centric education
Speakers: Anurag Hoon, Speaker 1
Teaching the five senses and nine emotions as “heart intelligence” · Need to teach resilience, curiosity, and critical thinking to future generations
Anurag introduces a novel, culturally rooted framework of ‘heart intelligence’ that prioritises sensory and emotional development alongside creativity and IP awareness [310-314]. This perspective is unexpected in a panel largely focused on technical AI capacity, where Speaker 1 instead stresses resilience, the ability to fail, and critical thinking as the core soft skills for navigating AI-driven change [316-324]. The divergence highlights an unanticipated debate between a holistic, arts-based developmental model and a more conventional resilience-and-critical-thinking approach.
POLICY CONTEXT (KNOWLEDGE BASE)
Directly referenced in the Inclusive AI discussion highlighting heart intelligence as a pedagogical concept for AI education [S47].
Discussion of AI‑generated content royalty rights versus broader AI governance concerns
Speakers: Anurag Hoon, Mihir Shukla
AI‑generated songs do not earn royalties, emphasizing IP protection for creators · Focus on large‑scale AI education partnerships and digital worker proliferation
Anurag raises a specific, unexpected issue about intellectual property rights for AI-generated music, noting that such works currently do not earn royalties and stressing the need to teach children about IP [245-248]. Mihir, by contrast, concentrates on macro-level AI education initiatives and the expansion of digital workers, without addressing IP or royalty concerns [188-194][295-296]. This creates an unforeseen point of divergence between micro-level content rights and macro-level AI ecosystem development.
POLICY CONTEXT (KNOWLEDGE BASE)
Relates to emerging policy analyses on IP rights for AI-generated works and broader governance debates on regulation balance [S54][S55][S59][S62].
Overall Assessment

The panel largely converges on the importance of scaling AI participation, gender parity, and youth empowerment, but diverges on the strategic pathways to achieve these goals. Key disagreements revolve around whether to prioritize applied, sector‑specific AI solutions versus building a comprehensive technology‑infrastructure‑human expertise ecosystem, and on the optimal educational model—rapid technical up‑skilling versus holistic, arts‑based or resilience‑focused development. A subtle tension also exists regarding the necessity of advanced compute resources. These differences suggest that while there is strong consensus on the end‑state (inclusive, scaled AI capacity), the route to get there is contested, which may affect coordination of policies, funding allocations, and program designs.

Moderate – the speakers share overarching objectives but propose distinct, sometimes conflicting, implementation strategies. This could lead to fragmented efforts unless a coordinated framework reconciles applied‑AI focus with broader ecosystem building and aligns technical training with holistic youth development.

Partial Agreements
All three speakers share the goal of expanding AI participation and capacity across India, but differ on the mechanisms: Mihir proposes a large‑scale partnership to train a million women and youth on AI and automation [295-296]; Radha focuses on establishing regional AI centers of excellence that combine technology, infrastructure and human expertise to drive sectoral innovation [138-149]; Lakshmi emphasizes creating a storytelling platform to surface hidden talent and connect innovators [38-44]. The consensus is on scaling inclusion, yet the pathways—mass training, regional hubs, or narrative platforms—are distinct.
Speakers: Mihir Shukla, Radha Basu, Lakshmi Pratury
Partnership to train a million women and youth on AI · AI centers of excellence across Indian cities targeting specific domains · Platform to surface untold talent across India and globally
Both speakers aim to scale women’s participation and AI capability in India. Kirthiga describes the rapid expansion of a women‑focused AI community from 250 to 10,000 members, highlighting self‑organization and networking [58-64]. Radha outlines the creation of multiple AI centers of excellence across tier‑2 cities to foster domain‑specific innovation and inclusion [138-149]. While both target scaling, Kirthiga focuses on community networking, whereas Radha emphasizes physical regional hubs and domain specialization.
Speakers: Kirthiga Reddy, Radha Basu
Growth from 250 to 10,000 women, self‑organizing network · AI centers of excellence across Indian cities targeting specific domains
Takeaways
Key takeaways
Reinventing careers and embracing risk is essential for scaling human potential in AI (Kirthiga Reddy, Lakshmi Pratury).
AI Kiran has grown from 250 to 10,000 women, emphasizing self‑organization, gender parity, and a platform for untold talent (Kirthiga Reddy, Lakshmi Pratury, Radha Basu).
Targeted AI centers of excellence across Indian cities are driving high‑impact applications such as precision agriculture, breast‑cancer screening, autonomous mobility, and generative AI (Radha Basu).
Education and skill‑development programs (Fellows, mobile music school, iMerit AI literacy, Automation Anywhere digital workers) are crucial for inclusion of youth, women, and underserved communities (Anurag Hoon, Mihir Shukla, Radha Basu).
Investment in AI should be viewed as a triangle: technology/models, infrastructure/compute, and human intelligence/skill (Radha Basu).
Applied AI—using existing models to solve domain‑specific problems—should be prioritized over chasing the global model race (Mihir Shukla).
Human qualities such as EQ, resilience, curiosity, and “heart intelligence” (five senses, nine emotions) are seen as complementary to AI and vital for future generations (Anurag Hoon, Speaker 1).
Partnerships (e.g., AI Kiran’s pledge to train a million women and youth) and collaborative ecosystems (hardware, software, services, academia) are key enablers for scaling AI responsibly.
Resolutions and action items
AI Kiran will partner with other organizations to train one million women and youth on AI and automation over the next five years (Mihir Shukla).
iMerit (Radha Basu) will continue expanding AI centers of excellence in non‑metro cities (Kolkata, Vizag, Coimbatore, Shillong, etc.) to address domain‑specific challenges.
AI Kiran’s Fellows program will select ~20 emerging talent each year, now totaling over 250 fellows, to foster creation as well as consumption of AI knowledge (Lakshmi Pratury).
Automation Anywhere will scale its digital‑worker platform toward a billion active agents across 90 countries (Mihir Shukla).
Commitment to achieve 50 % gender parity in AI companies and teams, with ongoing advocacy and mentorship from panelists (Radha Basu, Kirthiga Reddy).
Unresolved issues
What specific curriculum or set of skills should parents teach their children to thrive in a fully automated, AI‑driven world? (Audience question to Anurag Hoon).
How should K‑12 and higher‑education systems be fundamentally re‑designed to support AI literacy, EQ development, and resilience? (Audience question).
What concrete guardrails and protective measures are needed to shield children from AI‑amplified harms while still empowering them? (Audience question).
How can trust and provenance be established on the internet when users increasingly rely on AI‑generated answers over human‑curated content? (Audience question).
What scalable approaches can bridge the digital gap for grassroots learners before introducing AI concepts? (Audience question).
Long‑term impact of AI on employment and the exact nature of job transformation remain debated, with no consensus reached.
Suggested compromises
If advanced chip access becomes a bottleneck, innovators should use creative resource‑allocation strategies rather than waiting for ideal compute (Speaker 1).
Balancing rapid AI advancement with inclusive growth: pursue aggressive scaling while ensuring gender parity and outreach to underserved communities (Radha Basu, Mihir Shukla).
Emphasize applied AI solutions for local industries (e.g., agriculture, healthcare) instead of competing in the global model‑building race, thereby aligning economic goals with societal benefit (Mihir Shukla).
Thought Provoking Comments
What would you do if you weren’t afraid?
Frames the entire conversation around a growth‑mindset and risk‑taking, encouraging participants to imagine possibilities beyond fear‑based constraints.
Set the tone for the panel’s emphasis on starting over, taking bold moves into AI, and later justified the recommendation to ‘start over all over again’ when businesses need to pivot to AI.
Speaker: Kirthiga Reddy
When we asked ChatGPT for 100 women in AI in India it gave us 10. We launched AI Kiran with 250 women and now we have 10,000 – we added two zeros to the first number ChatGPT had.
Highlights a concrete data bias in widely‑used AI tools and demonstrates how the community turned a limitation into a catalyst for building a large, self‑organising network of women in AI.
Shifted the discussion from abstract ideas to a tangible success story, prompting other speakers to talk about scaling, community building, and the importance of representation.
Speaker: Kirthiga Reddy
Every revolution has left an environmental or social scar; we must make AI inclusive from the get‑go rather than trying to fix it later.
Draws a historical parallel that frames AI as a societal responsibility, not just a technological race, urging proactive inclusion.
Steered the conversation toward ethical considerations, prompting Radha and others to discuss decentralised AI centres and gender parity goals.
Speaker: Lakshmi Pratury
We set up AI centres in places like Calcutta, Vizag, Coimbatore, Hubli, Shillong – not just the metros – to avoid an AI divide.
Introduces the strategic idea of geographic decentralisation as a means to democratise AI access, moving the dialogue from urban‑centric narratives.
Led to a deeper discussion on how to scale AI talent across tier‑2/3 cities and the importance of building local centres of excellence.
Speaker: Radha Basu
Investments in AI are a triangle: technology/models, infrastructure, and human intelligence. All three must grow together.
Provides a clear framework for thinking about AI ecosystem development, emphasizing that technology alone is insufficient.
Guided subsequent speakers (Mihir, others) to address skill‑building, infrastructure, and application layers, shaping the later “three‑part investment” theme.
Speaker: Radha Basu
Focus on applied AI, not the model race. Use AI to solve problems in India’s existing industrial hubs – automotive, precision agriculture, healthcare.
Challenges the prevailing hype of building ever‑larger models and redirects attention to practical, impact‑driven AI deployment.
Prompted panelists to cite concrete use‑cases (e.g., precision agriculture, breast‑cancer screening) and reinforced the message of relevance over vanity metrics.
Speaker: Mihir Shukla
AI can include people who were previously excluded from the digital economy – it’s a technology that anyone can talk to.
Frames AI as a social equaliser, expanding the conversation from economic growth to social inclusion.
Supported the narrative of AI Kiran’s mission to train a million women and youth, and resonated with audience questions about education and equity.
Speaker: Mihir Shukla
It’s less about AI and more about HI – heart intelligence. Teach children the five senses and nine emotions so they stay human.
Shifts the focus from technical skill to emotional and sensory development, reminding the audience that human qualities are essential in an AI‑driven world.
Triggered a series of responses on resilience, curiosity, and emotional education, deepening the discussion on what parents should teach their kids.
Speaker: Anurag Hoon
The power of the question: AI can give many answers, but it doesn’t know the right questions. Teaching people to ask better questions is crucial.
Elevates the conversation to a meta‑level, emphasizing critical thinking over tool proficiency.
Inspired the audience and panelists to consider education that cultivates inquiry, linking back to earlier points about curiosity and resilience.
Speaker: Mihir Shukla
72 % of iMerit employees come from low‑income backgrounds; AI literacy can be taught to people with no prior digital skills, turning them into AI practitioners.
Provides empirical evidence that AI democratization works in practice, countering skepticism about skill barriers.
Reinforced the inclusion narrative, gave credibility to the claim that AI can be a vehicle for social mobility, and closed the discussion on grassroots training.
Speaker: Radha Basu
Overall Assessment

The discussion was driven forward by a handful of pivotal remarks that moved the panel from abstract optimism to concrete, actionable ideas. Kirthiga’s opening mindset question and the ChatGPT bias anecdote sparked a shift toward empowerment and representation. Lakshmi’s historical framing and Radha’s decentralisation strategy introduced a responsibility narrative, while the triangular investment model gave the conversation structure. Mihir’s challenges to the model‑race hype and his emphasis on inclusion and questioning redirected focus to practical impact and critical thinking. Anurag’s reminder of ‘heart intelligence’ broadened the scope to emotional education, prompting audience‑driven dialogue on parenting and resilience. Collectively, these comments reframed AI not just as a technology race but as a societal project that requires inclusive talent pipelines, geographic spread, ethical foresight, and a renewed emphasis on human curiosity and questioning. This shaped the panel into a multi‑dimensional conversation about scaling human potential alongside AI.

Follow-up Questions
What are the limiting factors or enablers (talent, capital, compute) that affect scaling human potential in AI, and how do they interact?
Understanding constraints is crucial for shaping strategies and investments in AI ecosystems.
Speaker: Kirthiga Reddy (directed to Ashna)
Will access to advanced chips determine who can build powerful AI models in India?
Chip access could create a competitive advantage and shape the future AI leadership landscape in the country.
Speaker: Kirthiga Reddy (directed to Ashna)
What investments are needed to move people up the AI value chain in India?
Identifying priority investments in talent, infrastructure, and applications will accelerate AI adoption and impact.
Speaker: Kirthiga Reddy (directed to Radha Basu)
What should parents teach their children over the next 15 years to thrive in an AI‑automated world?
Guidance for parenting will help the next generation develop skills and mindsets needed for an AI‑driven future.
Speaker: Anupama (audience member, to Anurag Hoon)
How can K‑12 and higher education be disrupted to support the AI revolution?
Education reform is essential to equip students with relevant AI knowledge and competencies.
Speaker: Founder of AI company (audience)
What guardrails are needed to protect children from AI‑amplified harms while empowering them?
Safety measures are required to mitigate risks of AI misuse on young users.
Speaker: Hemendra (audience)
How can we build trust and provenance on the internet when AI answers are increasingly trusted over human sources?
Addressing misinformation and establishing reliable provenance is vital for informed decision‑making.
Speaker: Bina (audience)
How can grassroots training overcome the digital gap to teach AI effectively?
Finding scalable methods to bridge the digital divide will expand AI literacy among underserved populations.
Speaker: Bina (audience)
How can we create a comprehensive, up‑to‑date map of women AI professionals in India beyond the initial 250 identified?
Accurate data on women in AI will improve representation, networking, and targeted support initiatives.
Speaker: Kirthiga Reddy (implied)
What is the effectiveness of AI centers in tier‑2 cities (e.g., Vizag, Coimbatore, Shillong) as hubs of excellence?
Evaluating regional centers will inform replication strategies and regional development policies.
Speaker: Radha Basu
How do small, domain‑specific AI models (e.g., for precision agriculture, breast cancer screening) compare to large foundation models in terms of performance and impact?
Understanding trade‑offs will guide model selection for localized, high‑impact applications.
Speaker: Radha Basu
What are best practices for AI red‑teaming methodologies to ensure model robustness?
Robust testing frameworks are essential for safe and reliable AI deployment.
Speaker: Radha Basu
What are the long‑term societal effects of AI on job structures and the evolution of work (e.g., shift from worker bees to queen bees)?
Insights will help policymakers and businesses prepare for workforce transformations.
Speaker: Mihir Shukla
How does multidisciplinary education (combining arts, neuroscience, etc.) influence AI innovation?
Cross‑domain expertise may unlock novel AI solutions and creative problem‑solving.
Speaker: Mihir Shukla
What role do resilience, emotional intelligence, and the five senses/emotions framework play in preparing children for an AI future?
Holistic development complements technical skills, fostering adaptable future citizens.
Speaker: Anurag Hoon (and supporting comments from others)
What scalable models for AI literacy training can skip traditional digital‑skill steps, as shown in Africa and U.S. case studies?
Accelerated pathways can rapidly upskill underserved populations for AI jobs.
Speaker: Mihir Shukla
What strategies enable gender parity (50‑50) in AI companies, and how does this impact performance and culture?
Achieving gender balance may enhance innovation, decision‑making, and inclusivity.
Speaker: Radha Basu
How effective is the AI Kiran Fellows Program in fostering youth innovation and entrepreneurship?
Assessing program outcomes will guide future investments in youth talent pipelines.
Speaker: Lakshmi Pratury

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Capacity Building in Digital Health

Session at a glance: summary, keypoints, and speakers overview

Summary

The panel examined how AI and digital health can reshape the healthcare workforce, emphasizing that a mindset shift is as vital as technology adoption [8][9]. Dr. Rajiv argued that pharmacists, positioned across the retail supply chain, could reach the last mile of care if professional attitudes evolve [8]. Dr. Sarvajit Kaur explained that the nursing regulator integrated AI into the BSc curriculum in 2021, mandating five simulation labs and providing VR and mannequins for skill development [11-14]. Two national simulation centres have trained about 2,000 faculty, and each district is being urged to establish competency centres for hands-on practice [24-27]. These digital-health modules are linked to continuing-education credits and nurse registration renewal through an online platform [31-34]. Dr. Suresh Yadav estimated the global health-worker shortage at 10-12 million jobs, costing roughly 15 % of world GDP, with climate change adding further strain [53-55][57-58]. He suggested low-cost digital and AI tools, such as health-ERP systems, to multiply clinician reach and reduce fragmentation, enabling India’s 1.5 billion patients and 20 million diaspora members to be served through a global health ecosystem [78-95]. A tech entrepreneur stressed that health-tech solutions must scale in complexity, citing their EISU platform that adapts from basic monitoring to advanced decision support [112-118][119-120]. He urged tech firms to co-design curricula with bodies like the Academy of Digital Health Sciences to embed practical AI training early in education [122-124]. Dr. Rajiv noted that regulators set minimum curriculum standards but allow added innovation and programming, while Dr. Gupta observed that age does not hinder upskilling, as most trainees are over 20 years old [128-141][219-220]. The session concluded with the launch of a Global AI Academy aimed at fostering continuous learning and a culture of innovation across health professions [226-233].


Keypoints

Major discussion points


Mindset and cultural change are essential for digital health adoption – Dr Rajiv stresses that “the change is happening but… it would take some more time because it’s a professional and mindset change” for pharmacists [6-8]; Dr Gupta echoes that it is “more about mindset change than just technology” [9]; Dr Sarvajit notes that “this has to be a change of mindset” even when expensive equipment is available [22-24]; Dr Suresh adds that the “obvious solution… is the digital solutions” to overcome workforce shortages, but only if professionals adopt them [78-81].


Regulatory and educational reforms are being introduced to embed AI and digital health into health-professional training – The Indian Nursing Council revised the BSc nursing curriculum in 2021, made five simulation labs mandatory, and equipped labs with VR and mannequins [11-14]; it also set computer-to-student ratios and launched faculty-preparedness programmes, training ~2,000 faculty [18-20][24-27]; Dr Rajiv points out that the Pharmacy Council (PCI) allows institutions to go “above the minimum” and add innovation or programming courses [133-140]; Dr Sarvajit describes district-level simulation centres and linking 150 CNE hours to licence renewal [124-130]; Dr Gupta raises the difficulty of constantly updating curricula and suggests CME/CNE as a flexible alternative [125-128].


Digital and AI technologies are proposed as a “quick-fix” to address severe health-workforce shortages and improve access – Dr Suresh quantifies the global cost of shortages at ~15 % of world GDP [53-55] and highlights climate-health links [57-58]; he envisions AI-driven “one doctor serve 10 people” models, health-ERP integration, and a unified ecosystem to break silos [78-87]; he also describes tele-medicine networks that could connect Indian patients with diaspora doctors and vice-versa, leveraging a “huge opportunity” for India [89-92]; Dr Rajiv cites remote-surgery training as an example of rapid up-skilling [208-212].


Policy, funding, and political engagement are critical bottlenecks; an “innovation pipeline” is needed – Anish argues that politicians must be educated on new technology outcomes and proposes “innovation pipeline management” with stage-gate testing, akin to DARPA, to move ideas from pilots to scalable policies [153-184]; Dr Gupta asks how to “crash-course” politicians [148-152]; Dr Suresh notes visa preferences for nurses as a symptom of global workforce mismatches [59-62].


Technology firms and entrepreneurs must design scalable, capacity-aware solutions and co-create training programmes – Speaker 1 stresses that AI tools should be “scalable in terms of complexity” to match varied digital maturity of institutions, citing their EISU platform that adapts from basic monitoring to advanced decision support [112-118][120-124]; Dr Gupta announces the launch of a Global AI Academy to train the workforce [226-234]; Speaker 2 raises pricing challenges for Indian markets, prompting discussion on free-to-use platforms linked to regulatory credits [188-191][192-193].


Overall purpose / goal of the discussion


The panel aimed to map the current gaps in India’s health-workforce capacity, explore how AI and digital health can bridge those gaps, and outline concrete regulatory, educational, and ecosystem-building actions needed to create a scalable, future-ready healthcare system both nationally and globally.


Tone of the discussion


Opening (0:00-2:00): Cautiously reflective, highlighting structural and mindset barriers.


Mid-session (2:00-13:00): Optimistic and solution-focused, with regulators sharing concrete curriculum reforms and technologists proposing large-scale digital interventions.


Later segment (13:00-25:00): Becomes more urgent and pragmatic, confronting policy inertia, funding constraints, and the need for political education.


Closing (25:00-34:38): Energetic and celebratory, culminating in the launch of the Global AI Academy and a reaffirmation that mindset, not just technology, will drive change.


Overall, the tone shifts from concern to optimism, then to a call-to-action, ending on a high-energy, forward-looking note.


Speakers

Speaker 1


– Role/Title: (not specified; appears to be a moderator or panel participant)


Dr. Rajiv


– Role/Title: Panelist discussing pharmaceutical education and drug inspection; likely a senior pharmacist/industry leader (inferred from transcript)


Speaker 3


– Role/Title: (not specified; contributed a brief comment on a consortium of innovative healthcare universities)


Dr. Gupta


– Role/Title: Moderator, Advisor to the Health Minister, instrumental in defining the ABDM white paper and National Health Policy; involved with Mayo Clinic strategy in India [S10][S11][S12]


Dr. Suresh Yadav


– Role/Title: Executive Director, Commonwealth Secretariat; former advisor to the President of India; expertise in finance, technology innovation, and educational policy [S13][S14][S15]


Speaker 2


– Role/Title: Audience member / questioner (asked detailed questions to Anish)


Anish


– Role/Title: Participant in the discussion; involved in Digital Health Parliament and global leadership on digital health initiatives (likely Anish Shah) [S19][S20]


Dr. Freddy


– Role/Title: Panel participant; raised a question about AI training for senior faculty (no further details)


Dr. Sarvajit Kaur


– Role/Title: Secretary, Indian Nursing Council; represents ~2.2 million nurses; regulator overseeing nursing curriculum and digital health integration [S25]


Additional speakers:


(None identified beyond the listed speakers)


Full session report: comprehensive analysis and detailed insights

The panel opened with Dr Rajiv emphasizing that transforming India’s health-workforce depends more on a professional mindset shift than on simply adding new technologies. He noted that only a small minority of workers voluntarily take on new roles, while most are attracted to manufacturing or R&D for better pay and career prospects, and that the community-pharmacy model has stalled because of entrenched social structures; change, he argued, will only come through “professional and mindset change” among pharmacists [4-8]. Dr Gupta reinforced this view, describing the discussion as centred on “mindset change rather than just technology” [9].


Dr Sarvajit Kaur then outlined the Indian Nursing Council’s integration of AI and digital health into the basic nursing curriculum. The BSc nursing syllabus was revised in 2021 to embed explicit digital-health competencies, mandate five simulation laboratories equipped with mannequins and VR kits, and introduce a ratio of one computer per five learners together with dedicated computer labs [11-14][18-20][24-27]. To address the shortage of clinical training sites, a faculty-preparedness programme has already trained roughly 2,000 teachers at the Gurgaon National Reference Simulation Centre [24-27].


To extend these competencies to the existing workforce, the regulator has linked 150 CNE (Continuing Nursing Education) hours to nursing-licence renewal, making digital-health courses a prerequisite every five years. An online registration system now integrates these requirements, and districts are being urged to establish competency centres, with pilots already running in Uttar Pradesh and Bihar [31-34][124-130][192-194]. Kaur also highlighted a six-month professional digital nursing course that is being tied to CNE credit requirements, encouraging nurses to acquire digital skills as part of their continuing education [31-34]. Moreover, the Council is developing a one- to two-year specialised digital-health programme under the Digital Health Academy to deepen nurses’ expertise [24-27]. She further pointed to the emerging role of chief technical nurses and called for policies that empower auxiliary nurse midwives (ANMs), community health officers and Arogya Mandir staff to improve patient outcomes across rural and tertiary settings [24-27].
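The renewal rule described above (150 CNE hours per five-year licence cycle) is simple enough to express as a small check. A minimal sketch, assuming the rule as stated in the session; the function and parameter names are illustrative, not from any INC system:

```python
# Illustrative sketch of the CNE-linked renewal rule described above:
# a licence comes up for renewal every five years and renews only if
# the nurse has logged at least 150 CNE hours in that cycle.
# All names here are hypothetical.

RENEWAL_CYCLE_YEARS = 5
REQUIRED_CNE_HOURS = 150

def renewal_due(years_since_last_renewal: int) -> bool:
    """True once a full five-year cycle has elapsed."""
    return years_since_last_renewal >= RENEWAL_CYCLE_YEARS

def eligible_for_renewal(cne_hours_logged: int) -> bool:
    """A licence renews only with the full 150 CNE hours logged."""
    return cne_hours_logged >= REQUIRED_CNE_HOURS

# Example: a nurse who is due for renewal but is 30 hours short.
print(renewal_due(5), eligible_for_renewal(120))
```

The point of tying the requirement to renewal, as Kaur notes, is that the check runs on every cycle rather than depending on one-off curriculum revisions.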


Dr Gupta then asked how drug inspectors and pharmacists are being prepared for AI. Dr Rajiv answered that the Pharmacy Council of India (PCI) prescribes only a minimum set of topics, allowing colleges to freely augment curricula with modules on innovation, management, computer programming and AI, thereby enabling up-skilling without waiting for formal curriculum revisions [133-140].


Dr Gupta raised a practical concern about the speed of formal curriculum revisions, noting that such changes typically occur once a decade and take three years to implement. He argued that continuous professional development, through CME/CNE linked to licence renewal, offers a more agile route to keep health workers current [125-128][142-148].


Dr Suresh Yadav placed India’s challenges in a global context, estimating that the worldwide shortage of roughly 10-12 million health workers costs about 15% of global GDP, some $18 trillion of a $120 trillion economy [53-55]. He also highlighted that the Commonwealth’s agenda gives special attention to the health-system challenges of its small-island member states [133-140]. Yadav linked the shortage to climate change, citing a Lancet report that identifies health systems as major carbon emitters [57-58]. He argued that low-cost digital and AI tools, such as health-ERP platforms that integrate doctors, pharmacists, nurses, volunteers and other health-care workers, can multiply the reach of each clinician (“one doctor serves ten people”) and dissolve the fragmentation that currently plagues India’s health ecosystem [78-87]. He also envisaged a cross-border telemedicine network enabling Indian patients to consult diaspora doctors and vice versa, leveraging India’s 1.5 billion domestic population and 20 million overseas Indians [89-92][95-101].
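Yadav’s figures can be sanity-checked with back-of-the-envelope arithmetic, taking the 10-12 million worker shortage and the 15% of a $120 trillion economy from the transcript [53-55]; these are the speaker’s estimates, not independent data:

```python
# Back-of-the-envelope check of the figures quoted above.
global_gdp = 120e12          # $120 trillion world economy (speaker's figure)
cost_share = 0.15            # ~15% of global GDP (speaker's estimate)
shortage = (10e6, 12e6)      # 10-12 million missing health workers

cost = cost_share * global_gdp               # total claimed cost
per_worker = [cost / n for n in shortage]    # implied cost per missing worker

print(f"total cost = ${cost / 1e12:.0f} trillion")
print(f"implied cost per missing worker: "
      f"${per_worker[1] / 1e6:.1f}-${per_worker[0] / 1e6:.1f} million")
```

The implied cost of well over a million dollars per missing worker underlines why the speaker calls the multiplier and cascading effects “huge”.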


A technology entrepreneur (Speaker 1) echoed the need for products that “scale in complexity, not just volume.” Their EISU platform ranges from basic remote vital-sign monitoring to sophisticated clinical-decision support, adapting to an institution’s digital maturity [112-120]. The speaker urged health-tech firms to co-design hands-on curricula with academies such as the Academy of Digital Health Sciences, thereby shaping the next generation of health workers [122-124].


Dr Freddy countered with a human-centric capacity-building view, stressing that senior faculty, many of whom lack AI expertise, must be equipped as “ambassadors” to mentor younger generations, and that mentorship networks are essential alongside scalable products [195-196].


All participants agreed that mindset change is the cornerstone of digital-health adoption; without a growth-oriented attitude, even the most advanced tools will remain under-used [8][9][108-111]. Both Dr Gupta and Dr Freddy concurred that age is not a barrier: success depends on a growth mindset, so training programmes should be age-agnostic [219-220][195-196].


Pricing of digital-health solutions surfaced as an open question. Speaker 2 reported that two US-origin ventures failed to replicate their pricing models in India despite two years of effort and asked for guidance on India-specific pricing strategies [187-190]. No panelist offered a concrete answer; the discussion stayed focused on ecosystem integration rather than Indian-market pricing [187-190][89-92].


On the policy side, Anish described an “innovation-pipeline management” model for government, modelled on DARPA’s stage-gate process: set ambitious targets, solicit entrepreneurial proposals, pilot and validate ideas, then have policymakers scale the solutions that work [153-184][148-152].
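The stage-gate flow described above amounts to a funnel that filters many ideas down to a few funded ones. A toy sketch of that filtering logic; the idea names, scores, and gate criteria are invented for illustration and do not come from the session:

```python
# Toy illustration of a stage-gate innovation pipeline: many ideas
# enter, each gate filters, and only validated ideas reach scaling.
# Idea names and pass criteria are hypothetical.

def stage_gate(ideas, gates):
    """Run ideas through successive gates; record survivors per stage."""
    survivors = {"proposed": list(ideas)}
    current = list(ideas)
    for stage_name, passes in gates:
        current = [idea for idea in current if passes(idea)]
        survivors[stage_name] = current
    return survivors

# Each idea is (name, pilot_score, outcome_validated).
ideas = [
    ("voice-based TB screen", 0.9, True),
    ("drone vaccine drop", 0.7, False),
    ("paper form digitiser", 0.4, True),
]

gates = [
    ("piloted",   lambda i: i[1] >= 0.6),  # pilot met its threshold
    ("validated", lambda i: i[2]),         # outcome held up in validation
    ("scaled",    lambda i: True),         # policymakers fund all survivors
]

result = stage_gate(ideas, gates)
print([i[0] for i in result["scaled"]])   # ['voice-based TB screen']
```

The design point Anish makes is that the gates, not the initial selection, do the quality control: the pipeline can afford to admit many ideas because most are filtered out cheaply at the pilot stage.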


The session concluded with the launch of the Global AI Academy, positioned as a continuous-learning platform aimed at cultivating a growth mindset across all health-care professions rather than merely delivering a technology solution [226-233].


Key take-aways and actions


– Mindset change is more decisive than technology alone for advancing health-care professions [8][9][108-111].


– Regulators have embedded AI into curricula (e.g., BSc nursing 2021) and mandated simulation labs, VR, and faculty training [11-14][24-27].


– Continuous professional development linked to licence renewal (CME/CNE) is preferred over frequent formal curriculum revisions [125-128][142-148].


– The worldwide shortage of roughly 10-12 million health workers costs about 15% of global GDP, some $18 trillion of a $120 trillion economy [53-55]; AI and health-ERP systems can multiply individual capacity and reduce fragmentation [78-87].


– Health-tech products must be “scalable in complexity”, offering progressive functionality as institutions mature [112-120].


– Co-creation of hands-on training between tech firms and academies such as the Academy of Digital Health Sciences is essential [122-124].


– The Indian entrepreneurial ecosystem has many ideators but few executors; mentorship “ambassadors” and age-agnostic programmes are needed [195-196].


– Pricing models successful in the US often fail in India; solutions must be tailored to local market dynamics, a question the panel left open [187-190].


– Politicians require outcome-focused education and an innovation-pipeline framework to adopt and fund new health technologies [153-184][148-152].


– Age is not a barrier; a growth mindset is the critical factor for upskilling senior and junior professionals alike [219-220][195-196].


Unresolved issues


– Defining a scalable platform for mental-health support targeted at Indian health-care professionals.


– Developing India-specific pricing strategies that reconcile profitability with widespread accessibility.


– Implementing a systematic training programme for drug inspectors and pharmacists on AI and digital health.


– Operationalising the health-ERP ecosystem and measuring its impact on fragmentation.


– Establishing a concrete roadmap for training politicians and policymakers in emerging technologies.


These conclusions reflect a consensus that cultural and mindset shifts, flexible continuous learning, and ecosystem-aware technology design are the three pillars required to realise AI-enabled health-system transformation in India and beyond.


Session transcript: complete transcript of the session
Dr. Rajiv

Just by choice, very small fraction would probably take it by choice. Still people want to do jobs in manufacturing or R&D in the pharma companies. So that’s a big factor which we have to solve, which ultimately falls into the remunerations which people get, the future potential of your profession and all that. The community pharmacy in reality has not picked up in this country because of the social structure which we have. Otherwise, the capacity building for anything to do with healthcare, these pharmacists, community pharmacists have to play a very strong role. If you see doctors, nurses, other health technicians, you will find them concerned. They are concentrated in hospitals. But in the society, if you see the spread, the most…

basically the the biggest possibility is for any profession in health care it is for pharmacists through the whole retail chain distribution supply chain management and they are the people who can actually contribute up to the last mile of the value chain so this this needs a strong change management the the change is happening but i think it would take some more time because it’s a professional and mindset change and thinking change for pharmacists

Dr. Gupta

thank you so much i think very important point that it’s more about mindset change than just technology uh dr sarvajit kaur we are very fortunate to have you with us as the secretary of the indian nursing council you represent 2.2 million nurses and more probably if we account for every registration is three so which is like 10 percent of the world’s nurses how are nurses coping up with the changes in technology with regards to health care and what are you doing at inc

Dr. Sarvajit Kaur

Thank you, Dr. Gupta, for this question and for this opportunity to be here in this esteemed panel. So to answer your question from the regulatory point of view, we have tried to integrate the AI and the digital health into the basic nursing curriculum. We had a change of the BSc nursing curriculum in 2021, and we have started by putting the emphasis on building competencies through the digital health and AI. So five simulation labs have now become mandatory. We have given lab equipments, the list of mannequins, VR, etc., that can be used to build up competencies, because we are also seeing that the clinical facilities that are out there for the nursing students to build up those competencies is becoming limited.

We are having almost 2.5 lakh nursing students getting passed out for GNM and BSc, like both getting registered as registered nurses, registered midwife. So we have started from scratch, if I can say so. We have started with computer education. We have given guidelines like for every five students, there should be one computer. We have given computer labs right out there. And we have also worked towards faculty preparedness. So there is, you know, complete adoption, like, you know, the panelists brought out. This has to be a change of mindset. So even if you have these expensive equipments out there, how do you use them and not just keep them in the cupboards, you know, safe as an inventory articles?

So we have started with two national reference simulation centers, one in Gurgaon and the other one just recently opened last two months back in the south, Bhagalkot. And we started with. Faculty preparedness. For the Gurgaon NRSC, we have trained around 2000 faculty on how to use these simulators for each and every nursing student. So what as a regulatory body we are looking is for each and every nursing student to embrace the digital technology as she is working to be a nurse to build up her competencies. And even for in -service, we are linking it up. As you’re aware, with a lot of push from your side, we’ve had this professional digital nursing course of six months, which a lot of takers are there in nursing who are wanting to do this.

But I think we need much more courses like that. We are linking it to CNE hours. We have also brought out our online registration system for the nurses, which again, we are trying to link it with all these kinds of opportunities for them. So more nurses benefit out of it. and in the abroad if you see we are having you know these chief technical nurses also now what you know trying to resolve issues like staffing, prevention falls, policies to improve nursing so I think we here also in India need to do a lot in terms of policies to empower every ANM who is working in the rural or every community health officer who’s working in the Arogya Mandirs or every nurse who is wanting to do better for her patients in the super specialized hospitals there’s a lot more to be done.

Thank you.

Dr. Gupta

Thank you so much it’s very exciting to see how you have moved to bring digital courses to nurses and the offtake for that and I also keep hearing very positive feedback on this opportunity for nurses. Thank you so much. Now I move to Dr. Suresh Yadav who I’ve known as someone who not just ideas the future but creates the future so working with the President of India whether he went to World Bank whether he’s in Commonwealth even in Commonwealth years back you put the agenda of AI as a high priority. What is your work and role today at Commonwealth’s vision for the 56 member nations and more so for the small island states?

Dr. Suresh Yadav

Thank you. Thank you, Professor Gupta, and thank you for your leadership in this very important stage. He has been working in this Digital Health, Digital Health Parliament and global leadership when the world was not thinking. So it’s a great, great contribution by you to the system because digital has taken a frenzy only during the COVID and the post-COVID. Before that, it was just like a digital e-government systems around the world. Now, before I say anything, I’ll be very general in comment on the global level and then touching a little bit on the ground level. What did cost the global ecosystem? Anish described when there was a financial crisis. What global south at that point of time called a crisis triggered by the global north.

I mean, naming that particular country. So there, and he described how beautifully President Obama steered the United States out of that very complicated and complex situation. Now, if you look at the shortages of the healthcare professionals to the global economy, what it costs, shortage is one part, it’s number. Maybe somewhere 100,000 people short, somewhere more number. What are the global implications? So the economic cost of these shortages of the healthcare workers, which is around, in all the categories, around 10 to 12 million, almost costs 15% of the global GDP. And you can imagine that 15% of the global GDP of $120 trillion economy. So it’s a huge, huge cost, just because we don’t have people. It has a multiplier effect, and it’s leading to the cascading effect on the various other segments of the society.

The other thing which is happening is that the healthcare workers are not getting paid. the global temperature rise, if you look at the climate and health, there is a latest Lancet report which brings very beautifully how the climate is driving health and leading to a different kind of a challenging situation. But also on this other side, I wanted to say that how health system is also contributing to the climate because one of the largest emitter on the planet. Now, given this situation, we know that so much is the shortages of the healthcare professionals and the nurses shortage is so much that Anisha will know better than I know that the US has a special visa for the nurses.

You may have a computer science degree but may not get a visa. But if you have a nurse experience certificate, you get a visa. So that is the level of the challenges which the world is facing. Now, we know that this is a challenge. What do we do? How do we do? How do we move forward? The other… Before I go to that, the other challenge is the aging population. If you look at Japan, if you look at the Nordic countries, the aging population number is rising. There are not many people to take care of that. Even if I have to get a health care worker in my village in eastern Uttar Pradesh, it’s so difficult.

Even if you want to pay the money, there are no people to serve you. So what do you do? One is, of course, the obvious solution that you train more number of people because there are a lot of people who are looking for the job. It’s not that people are not there. So how do you ramp up that capacity? I know in India, for creating a nursing school, you need to have hospitals, hospitals, and there are so many challenges in spite of setting up a lot of hospitals in the country. So one low -hanging fruit is the digital solutions. And on the top of that digital solution now is AI solution. Can I make one doctor serve 10 people?

Can I make one health care worker serve more than 5 times, 10 times more using the technologies management of the… system using the health ERP like multinational enterprises are doing. The whole system is fragmented in the healthcare system. It should be in that ecosystem. The one good thing about the U.S. is that the doctor, the pharmacy, everybody is connected. So that at least fragmentation is not there in the U.S. system, but that fragmentation still exists in the U.K. system. But in India, that silo is very much there. So even if using this health ERP on the lines of corporate ERP, we are able to fix it, I think that will be a transformative approach of creating a very ecosystem approach where the health workers, the doctors, the nurses, those who want to volunteer and contribute, they will be all connected.

So that is one quick fix solution I see. The other I see that in the global market, and this was my pet project that particularly came out from the post-COVID that there are doctors who want to do more, but they have challenges. So how do you connect? a global or doctors without borders how can an Indian doctors so a patient in Kenya rather than Kenyan or Tanzanian patient traveling to India or if they have to travel they should travel only small portion rather than a big big time of two months three months so these technologies offers you that you can have your scans remotely you can upload send to doctor have all the diagnostic except the procedure which you are required to be there so it’s it’s not only a country health ecosystem but also a global health ecosystem which can which can be made available using the technologies and and then and I see that using that approach any best hospital or doctor the United States can be accessible to a patient in India or vice versa because a lot of Indian wants to consult a doctor in India my wife was in the US for 10 years is still believe in the Indian doctor and wants to have a medicine from India and this one so So 2 million, 20 million persons of Indian origin around the world.

So India can connect 1.5 billion people within the country and 20 million people who still believe that I should have the Indian medicine, I should have the Indian doctors. So this is a huge, huge opportunity for India to take the leadership because you have the manpower, you have a lot of young people who enter the job market looking for the job, and you have the digital technology power. The only question is to putting these two together and make the nursing institutes, the hospital administration, the startups be all the part of the thriving ecosystem. I think if we can do it, we will have, we will really rather recreating or reimagining a healthcare system not only for India but for the entire world.

And this 15% GDP, this global temperature rise, the climate health nexus, which I can talk about, these still will be a great enablement for the entire world. And I think that the government of India will be able to do this. there will be universal health access cutting across the boundaries not that

Dr. Gupta

within your boundaries but you can have access to rest of the words of the medicine of the supplies of the doctors of the procedures so I’ll stop here on this positive note and over to you thank you thank you so this is very interesting and you know I I always like optimism over technology even if you’re not optimism technology will move fast coming to use all you can’t you are an entrepreneur in technology while dr. Rajiv approved DTX you make DTX you have made amazing

Speaker 1

AI driven technologies what’s your take on capacity building do we have enough capacity to have more entrepreneurs like you we will have ideators like you but not entrepreneurs because we don’t have executors how do you define this thank you Rajiv ji while of course I will be speaking on the on that part of technology as well how we can create entrepreneurs you But I think more to the point that my fellow panelists talked about, I think technology, when it comes to capacity building, technology companies have a significant role because they influence how the current workforce is practicing. And also they influence how the next generation of workforce will get trained. So that way we have a dual responsibility.

And in that sense, I think there’s a design principle that every technology company should keep in mind or any budding entrepreneur should keep in mind. And that is that the way they design their AI or tech solutions, it should be in a manner that is scalable, not in terms of volume, but scalable in terms of complexity. Because if you’re building something and you’re providing the healthcare industry with something, then you have to, particularly in a country like India where you have a diverse spectrum of digital maturity across various institutes. Some hospitals might be digital native, some of them might be completely analog. So in that sense, you have to have a product that hand-holds the healthcare workers through the digital transformation journey.

So the product is able to scale in complexity as the institutes scale in readiness. That’s how we have been building products. As an example, our EISU solution, its functionality ranges from basic remote vital monitoring to more complex smart alerts and advanced clinical decision support systems based on the readiness of the clinicians. And that’s something that every institution needs. So I think that’s something that techpreneurs should keep in mind: don’t impose AI or technology, rather the technology should adapt to the capacity, or rather it should be able to handhold the capacity and pull it up. One more point that I wanted to add was that just like technologists have been creating or co-creating the next generation of workforce when it comes to programmers and innovators, similarly I feel health tech companies have a responsibility in co-creating the next generation of healthcare workers.

So with the academies like Academy of Digital Health Sciences, I think technology companies or specifically health tech companies should come forward and co-design some hands-on courses as well, like the one ma’am mentioned, the professional nurses course. So that we’re able to expose the students early on to

Dr. Gupta

so i’ll have a few two questions to the you know experts before we move to audience questions uh this is to uh first dr sarrajit to you because you’re a regulator you made an important point that you want to change i mean you have already done that by incorporating digital health as a part of the education you know when i was writing the education policy my biggest worry was technology moves with the pace that you can’t change your curriculum every now and then because by the time you go to the academic council governing board new technology has come so you is there a way you’re looking at to make i think you talked about cme but is that the way we should look at looking at training all professions you know adding cmes rather than changing curriculum every now and then because that’s going to be really tough

Dr. Sarvajit Kaur

um curriculum changes normally occurs say once in a decade and that also is a long process when we brought out the bsc nursing change we took almost three years to bring about a change with all the you know there’s a whole process to it including the public amends and bringing about changes so yes at that point of time whatever is the best for the nursing students we have tried to do that but at the same time we also need to understand that there is this like 40 uh lakhs like you know four million nurses already out there in the country in different states whose competencies also need to be built because they are the ones who are working be it in the rural or be it in the specialized hospitals and for this as a regulator we push upon having simulation centers that’s what we are saying one should be in every district so that you know there are some states who have already started taking this like you know we had Nira Maya in Uttar Pradesh and we had Union in Bihar where they are building up these competency centers integrating the digital technology with it certifying it so that and linking it to the CNE so the nurse carries it forward with her there are incentives there for the nurses to come up for these programs and to better integrate this into the health systems a lot needs to be done in this and as you’re also aware with the digital health academy we are now working towards having a one year or maybe a two -year program we are still working that out so when this also comes as a specialization more takers will be there I think it will again disseminate down it’s a mammoth task no doubt.

Dr. Gupta

Dr. Rajiv I wanted to ask you on that point only that you have drug inspectors across the country who were in the conventional you know world what are you doing for them to understand and of course for pharmacists too I want your point.

Dr. Rajiv

So, yeah, so before moving to that, I just had one comment on this one, the curriculum change, right? So this actually point comes again and again in pharma education also. And colleges and teachers say that we are not allowed to change. It is governed by PCI. But always I say one point. See, PCI or anybody which actually sets the courses, they give you the minimum which should happen. They don’t say that don’t go beyond this. So you have all open at the top. Whatever you want to do, you keep this minimum. Plus you go on adding if you want to. So if pharma is not having a course on innovation or management or any modern technology.

Computer programming, PCI doesn’t say that you can’t do it. PCI says that you keep pharma papers over and above this. If I want to keep innovation paper, I’m free to do that.

Dr. Gupta

Rajiv, I’m sure this message will go viral, but the problem is how many people read it in that manner. You know, when we started courses, we put a line. The contents of this course will change based on the developments in the field. And we had really tough time telling that it can be in the prospectus. I said, we have to do that. The field is changing. And that brings me to Anish, because always the problem comes, what do you do to governments? You know, when you’re talking of technology, we can have regulators change it. We can have, you know, councils change it. But how do politicians get changed? Do we have a crash course for them?

Anish

Well, so here is the, there’s a, that’s a spicy question, but let me, let me, let me handle it. Well, this is in the U.S. It was funny when you saw the senators asking Mark Zuckerberg questions that were not very smart. So there was obviously a push to get education about what the technology means. But let me, let me shift that question in a different way. A lot of this assumes that the job to be done is the same. but you’ve introduced new tools so that you train people on how to do the same job but with the new tools. The politician or the policymaker is often focused on the outcome or the objective, the problem to be solved.

And it may be that we spent 10 years doing it this way, we’ve funded it, organized it, and you should be educated on how technology will influence it. But at some point, there’ll be a flip. Hey, I’ve got an entirely new way of solving that outcome. And why don’t we reorganize this whole thing that takes advantage of new capacity that wasn’t possible but for the technology? Earlier in this conference, we heard from Sunil Wadwani from the Wadwani Foundation. He talked about tuberculosis deaths, half a million deaths. And he said a portion of those deaths come from individuals. Who obviously get later, you know, they’ve been detected later. And then others, they dropped off their medications too early.

So you’ve got these sort of error rates on both sides. And so you have a nurse or someone in the community, ASHA workers, someone helping, engaging. And so you could think about politicians saying, okay, do I have to fund a new program to do this technology? Or it turns out they’ve come up with an entirely new AI-based detection system, and they found 25% more tuberculosis cases, not because they’ve educated, but they’ve introduced a whole new concept that you can change the diagnosis model through voice. You cough into a phone and it tells you, I’m paraphrasing what I heard earlier today. So this is the moment where the more we have flexibility in the political dilemma, dialogue, and some say this is zero-based budgeting that’s changed the way we fund our government.

There are lots of policy debates. but if you start with the principle that there’s a problem to be solved, we have too many people dying from tuberculosis too early. Now, let me say, look, we’ve got programming and funding and staff and people that do things to do this, but now a new technology shows up that allows me to think of this in an entirely new way and only possible to implement the strategies that come from this because it exists. That is a whole level of training that’s not training, oh, here’s how the buttons work. That is connecting the dots on what the capacity is to fundamentally reimagine the way to go about this. And so not to go back to capacity building, but I have coined this term innovation pipeline management in government.

DARPA, very famously, it’s our research arm in the U.S. government, sets ambitious but achievable targets and then lets professors, entrepreneurs, innovators sort of come up with ideas. And so you want to have… You want to have a stage gate to test ideas. You want to test more ideas. Then some of them graduate to the next stage and then you want to sort of validate those successes. And then you want policymakers to scale the ideas that work. and so I think your question was meant to it was sort of funny, the politicians need to be trained but there’s also some seriousness which is it can also be the vehicle by which we fundamentally re-imagine the way to go about it and then that brings a whole new cycle.

So that’s the positive side of it.

Dr. Gupta

Thank you, Anish, so much. And now let’s get to the public questions. Any audience questions? Yeah, you first.

Speaker 2

Hi Anish, thank you for your inputs. As someone who has been an entrepreneur, also coming from a Catholic background, researching brain and AI, and who has spent a lot of time in the US, the last four years between the US and India, I’ll be specific with the question because we have less time here. For context to what you were saying about the need for the digital platforms: if somebody has come up with a solution for mental health for the professionals themselves, like the nurses and the doctors, what would be a good platform? Because right now it’s like you educate them on the need for it, and then the skills and the outcome get measured. What would be a better way to scale this? Because the need is there; we see it. We work with kids doing that, and we see the same need for professionals as well, and it’s contextualized to the Indian context as well. What would be a good platform to take this to scale, when such needs exist across professions? I do have a separate question on pricing in India. Two ventures that I’ve been part of have scaled pretty well in the US; one of them has reached 100 million in revenue, the other has taken the public route in the US. But they failed miserably on the pricing here. Despite spending two years up front here, we couldn’t get the same product to work at the pricing here. So what are your suggestions for how to make pricing work for India, when you have the intent to solve for India as well? Those are my two questions.

Dr. Gupta

Because if you go back and look at the GDHS session on pricing of digital health, you will get a detailed answer from those who build it globally, so that will help you solve that problem. And the other one, does someone want to take an answer?

Dr. Sarvajit Kaur

Answering your question from the regulatory point of view: for the nurses, we have mandated 150 CNE hours and linked them to the renewal of their registration every five years. So now nurses mandatorily have to do these courses; only then will their license get renewed. So there’s a lot of need for these kinds of courses. There are some platforms where these courses are put up free of cost, INC being one of them, and also SWAYAM. So I’m sure there are a lot of opportunities for you to, you know, take up anything that works for the nurses. The technical experts have to take a look at it to see if it’s okay, and then we can take it up.

Speaker 2

Right now it’s developed by doctors for doctors, but it can certainly be adapted. I’d love to take inputs from you on where to take it forward. Thank you.

Dr. Gupta

After this, Dr. Freddy. Yeah.

Dr. Freddy

Thank you very much. My very simple question is that I was born before technology and have suddenly been bombarded with it over the last four, five years. The times are like this, the fate is this: I’m from the best colleges, have been a faculty member, and now new-era medical colleges need faculty in medicine, and suddenly every institution is wholly into AI. Now, people like me worked with MCI, and the curriculum has already been changed, but believe me, nothing has changed, because I actually had an audit also. My question is: how are you planning for the future, when the people who are supposed to implement AI are to be trained by people in Gen Z who themselves have no grounding? So there’s a dilemma between them. Do you have any solution for that, so that at least the people being trained now are not being trained by people who are, in inverted commas, not trained? That’s my worry.

Dr. Gupta

So I will ask around, in one minute.

Speaker 1

Sure, yes. So I think there are still people, few and far between, who can be those ambassadors for change. It’s just a matter of giving them the tools, being able to, you know, get them on the platform of a university or the Digital Health Sciences Academy so that they’re able to train or build capacity at scale. That’s the only way. Otherwise, we don’t have enough people to do it one on one or, you know, in a physical capacity. We have to use virtual tools even for that. And at the same time, I think there shouldn’t be a bar of, you know, a certain experience or a number of years of teaching for these kinds of courses.

So this has to be age agnostic, I feel.

Dr. Gupta

Rajiv, 30 seconds for you and then we have to close.

Dr. Rajiv

No, we have to close because we are running out of time. We have one launch to do as well. So I think the answer lies in the system itself. If I just give one example: remote surgeries. The doctors who were trained 30 years ago were not trained on remote surgeries. Today, they are doing remote surgeries. How did they shift? So, I mean, this is something. I don’t think that these trainings and capacity building should be restricted to within the profession. Whosoever is suitable for those trainings should be engaged. It’s a continuous process in the regulatory system. Our inspectors and drug controllers are now trained in modern methods, basically in approving and reviewing these medical devices also.

It was not there when they were appointed. So everybody is getting upgradation and there are systems.

Dr. Gupta

Yeah, and in our courses, we have seen that 80% of the people are more than 20 years past their MBBS; the highest is 50 years after MBBS. So I think age is not the thing, it’s a mindset thing.

Speaker 3

Rajin, I’d also like to add about the consortium of innovative healthcare universities.

Dr. Gupta

We have to launch this and close this session. We have a very strict timing here, and I see that clock ticking.

Speaker 3

I understand.

Dr. Gupta

So we’re launching the Global AI Academy, which, as you will see, is about training people. You have platforms; it’s not just about that. Together, yes. Here we go, something’s happening. There it is, a screen. Oh, that is coming. It’s coming and going. Yeah. So it’s never about the platform, it’s about the mindset. And start now if you have not. Thank you very much. A big round of applause. Thank you.

Related Resources: Knowledge base sources related to the discussion topics (31)
Factual Notes: Claims verified against the Diplo knowledge base (2)
Confirmed (high)

“The panel opened with Dr Rajiv emphasizing that transforming India’s health‑workforce depends more on a professional mindset shift than on simply adding new technologies.”

The knowledge base records that all three speakers highlighted that successful technology integration requires a fundamental mindset shift rather than just technical solutions [S15] and that the discussion centered on this theme [S27] and [S26].

Confirmed (medium)

“Dr Gupta reinforced this view, describing the discussion as centred on ‘mindset change rather than just technology’.”

Other sources note that the panel framed digital transformation as a mindset issue, emphasizing human-centered change over pure technology [S84] and stressing the need to shift focus from technology metrics to impact on people’s lives [S85].

External Sources (87)
S1
Keynote-Martin Schroeter — -Speaker 1: Role/Title: Not specified, Area of expertise: Not specified (appears to be an event moderator or host introd…
S2
Responsible AI for Children Safe Playful and Empowering Learning — -Speaker 1: Role/title not specified – appears to be a student or child participant in educational videos/demonstrations…
S3
Building Trusted AI at Scale Cities Startups &amp; Digital Sovereignty – Keynote Vijay Shekar Sharma Paytm — -Speaker 1: Role/Title: Not mentioned, Area of expertise: Not mentioned (appears to be an event host or moderator introd…
S4
https://dig.watch/event/india-ai-impact-summit-2026/scaling-trusted-ai_-how-france-and-india-are-building-industrial-innovation-bridges — thank you once again to our moderator and to all our distinguished panelists I would now invite all the speakers to plea…
S6
Fireside Chat The Future of AI & STEM Education in India — Welcome to the panel, sir. Let me now invite Dr. Raj Kumar, Founding Vice -Chancellor at O .P. Jindal University. Dr. Ra…
S7
Building the Workforce_ AI for Viksit Bharat 2047 — -Speaker 1- Role/Title: Not specified, Area of expertise: Not specified -Speaker 3- Role/Title: Not specified, Area of …
S8
S10
AI for Bharat’s Health_ Addressing a Billion Clinical Realities — -Dr. Rajendra Pratap Gupta- Advisor to Health Minister, instrumental in defining ABDM white paper, involved in National …
S11
AI for Bharat’s Health_ Addressing a Billion Clinical Realities — Evidence:Dr. Gupta was instrumental in defining the first white paper around ABDM starting in 2019-2020
S12
Dynamic Coalition Collaborative Session — – Dr. Rajendra Pratap Gupta- Dr. Muhammad Shabbir – Dr. Rajendra Pratap Gupta- Wout de Natris Dr. Gupta emphasizes tha…
S13
AI 2.0 Reimagining Indian education system — -Suresh Yadav- Executive Director at Commonwealth Secretariat, former advisor to President Mukherjee, expertise in finan…
S14
AI 2.0 The Future of Learning in India — Suresh Yadav, Executive Director of the Commonwealth Secretariat, argued that this moment requires complete reimagining …
S15
Capacity Building in Digital Health — Dr. Suresh Yadav provided a global perspective, explaining that healthcare worker shortages cost approximately 15% of gl…
S16
Keynote_ 2030 – The Rise of an AI Storytelling Civilization _ India AI Impact Summit — -Speaker 2: Role appears to be event moderator or host. Area of expertise and specific title not mentioned.
S17
AI Impact Summit 2026: Global Ministerial Discussions on Inclusive AI Development — -Speaker 1- Role/title not specified (appears to be a moderator/participant) -Speaker 2- Role/title not specified (appe…
S18
Policy Network on Artificial Intelligence | IGF 2023 — Moderator 2, Affiliation 2 Speaker 1, Affiliation 1 Speaker 2, Affiliation 2
S19
https://dig.watch/event/india-ai-impact-summit-2026/capacity-building-in-digital-health — Thank you. Thank you, Professor Gupta, and thank you for your leadership in this very important stage. He has been worki…
S20
Keeping up with Smart Factories / DAVOS 2025 — – Padraig McDonnell- Anish Shah
S21
WSIS+20 Open Consultation session with Co-Facilitators — – **Jennifer Chung** – (Role/affiliation not clearly specified)
S23
Global Health Diplomacy — Andrew F. Cooper is professor, Department of Political Science, University of Waterloo and distinguished fello…
S24
https://app.faicon.ai/ai-impact-summit-2026/indias-ai-leap-policy-to-practice-with-aip2 — As Dr. said, this is really the earthquake of AI and we are at the epicenter. And as you can see, after five days, we ar…
S25
Capacity Building in Digital Health — -Dr. Sarvjeet Kaur: Secretary of the Indian Nursing Council, represents 2.2 million nurses, regulatory role in nursing e…
S26
Main Topic 2 –  GovTech Dynamics: Navigating Innovation and Challenges in Public Services — Central to this agenda is the belief that technological adoption should be led by a fundamental change in mindset, focus…
S27
Capacity Building in Digital Health — Agreed with:Dr. Sarvjeet Kaur, Zaw Ali Khan — Technology adoption requires mindset change rather than just technical tra…
S28
The Future of Digital Agriculture: Process for Progress — Technologies must be easily accessible, economically viable for the lowest-income groups, relevant to the context, and s…
S29
The digital economy and enviromental sustainability — The technology should be user-friendly
S30
Leaders TalkX: ICT application to unlock the full potential of digital – Part II — Niraj Verma: Thank you. So in India, we look at connectivity as a great enabler. And we are connecting some 6.4 lakh vil…
S31
Assessing the Promise and Efficacy of Digital Health Tool | IGF 2023 WS #83 — In summary, digital health has achieved technical maturity but lacks organizational maturity. Comprehensive understandin…
S32
MedTech and AI Innovations in Public Health Systems — I would just also like to address the work culture issue that you have mentioned. In fact, if we can just sensitize, if …
S33
Indias Roadmap to an AGI-Enabled Future — Professor Jayadeva identified fundamental challenges in India’s research ecosystem that extend beyond funding to cultura…
S34
Building the Workforce_ AI for Viksit Bharat 2047 — Speaker 3 argued that traditional government systems that simply respond to problems need to evolve into intelligent, ad…
S35
Building Trusted AI at Scale Cities Startups &amp; Digital Sovereignty – Keynote Kiran Mazumdar-Shaw — Academia’s role involves mainstreaming computational biology, neurosymbolic AI, and AI-first life sciences education to …
S36
Leaders TalkX: ICT Applications Unlocking the Full Potential of Digital – Part II — The inadequacy of established models and strategies, often adopted by the most affluent economies, was criticised for ne…
S37
Keynotes — Digital policies are at an inflection point, necessitating adaptability to the fast-paced evolution of emerging technolo…
S38
Keynotes — Technology must serve humanity with human-centric approach The approach must be clearly human centric, and we must avoi…
S39
Capacity Building in Digital Health — Khan emphasized the dual responsibility of technology companies in influencing current workforce practices and training …
S40
Capacity Building in Digital Health — Focus on outcome-based policy making rather than process-based training when introducing new technologies Partner with …
S41
WS #162 Overregulation: Balance Policy and Innovation in Technology — James Nathan Adjartey Amattey: So thank you very much, Nicolas, for that introduction. My name is James Amate. I am f…
S42
WEF Business Engagement Session: Safety in Innovation – Building Digital Trust and Resilience — Legal and regulatory | Development | Human rights Rather than viewing regulation as hampering innovation, new regulator…
S43
How AI Drives Innovation and Economic Growth — Summary:The speakers show broad agreement on AI’s transformative potential for development but significant disagreements…
S44
How AI Drives Innovation and Economic Growth — The speakers show broad agreement on AI’s transformative potential for development but significant disagreements on impl…
S45
Open Forum: A Primer on AI — In summary, the widespread adoption of AI presents opportunities and challenges. While it can boost equality, address cl…
S46
Conversational AI in low income &amp; resource settings | IGF 2023 — Rajendra Pratap Gupta:And I mean, I will add to that, that in the other dynamic coalition on internet and jobs, we had a…
S47
Reskilling for the Intelligent Age / Davos 2025 — Joe Ucuzoglu: Well, this is going to be essential. So a bunch of the conversation this week has been about the agentic …
S48
Bridging the Digital Skills Gap: Strategies for Reskilling and Upskilling in a Changing World — High level of consensus with complementary rather than conflicting viewpoints. The agreement suggests a mature understan…
S49
AI 2.0 Reimagining Indian education system — There’s unexpected strong consensus among speakers from different backgrounds (policy, academia, regulation) that increm…
S50
New Colours of Knowledge — – improving the quality and the relevance of adult education programmes; – building a coherent, high-quality and adaptab…
S51
Effective Governance for Open Digital Ecosystems | IGF 2023 Open Forum #65 — A mindset issue persists which hinders the shift towards digitalization. Changing mindsets while implementing at scale …
S52
Capacity Building in Digital Health — Summary:All three speakers emphasize that successful technology integration in healthcare requires fundamental mindset s…
S53
Capacity Building in Digital Health — All three speakers emphasize that successful technology integration in healthcare requires fundamental mindset shifts an…
S54
Assessing the Promise and Efficacy of Digital Health Tool | IGF 2023 WS #83 — In conclusion, the panel discussion provides an encompassing overview of Microsoft’s initiatives pertaining to health ou…
S55
GermanAsian AI Partnerships Driving Talent Innovation the Future — Mr. Jaiswal outlines comprehensive educational reforms in India designed to prepare students for AI and technology integ…
S56
https://dig.watch/event/india-ai-impact-summit-2026/capacity-building-in-digital-health — Can I make one health care worker serve more than 5 times, 10 times more using the technologies management of the… sys…
S57
MedTech and AI Innovations in Public Health Systems — I would just also like to address the work culture issue that you have mentioned. In fact, if we can just sensitize, if …
S58
Collaborative Innovation Ecosystem and Digital Transformation: Accelerating the Achievement of Global Sustainable Development Goals (SDGs) — This comment identifies a fundamental structural problem – fragmentation of innovation ecosystems that prevents scaling….
S59
https://app.faicon.ai/ai-impact-summit-2026/capacity-building-in-digital-health — There are lots of policy debates. but if you start with the principle that there’s a problem to be solved, we have too m…
S60
Building Trusted AI at Scale Cities Startups &amp; Digital Sovereignty – Keynote Kiran Mazumdar-Shaw — Academia’s role involves mainstreaming computational biology, neurosymbolic AI, and AI-first life sciences education to …
S61
International Cooperation for AI &amp; Digital Governance | IGF 2023 Networking Session #109 — Scalability has always been a challenge for open source initiatives and ICT for development. However, the Indian MOSSIP …
S62
Leaders TalkX: ICT Applications Unlocking the Full Potential of Digital – Part II — The inadequacy of established models and strategies, often adopted by the most affluent economies, was criticised for ne…
S63
Digital divides &amp; Inclusion — Ayita Gangavarpu:Hello, good morning, everyone. Thank you so much for the opportunity. I’m Ayita Gangavarpu, the coordin…
S64
WS #69 Beyond Tokenism Disability Inclusive Leadership in Ig — Nirmita Narasimhan: Sure. Thank you. Thank you, Dr. Shabbir. So let me approach this from a perspective of somebody who …
S65
Opening of the session — Thought Provoking Comments
S66
Knowledge Café: Youth building the digital future – WSIS+20 Review and Beyond 2025 — These key comments fundamentally shaped the discussion by introducing three critical shifts: from technology-centered to…
S67
Agenda item 5: Day 2 Afternoon session — The Australian representative highlighted the utility of the checklist for conducting self-assessments, which can shed l…
S68
Main Session 1: Global Access, Global Progress: Managing the Challenges of Global Digital Adoption — The tone of the discussion was largely optimistic and solution-oriented. Speakers highlighted positive examples of how t…
S69
[Parliamentary Session 4] Fostering Inclusive Digital Innovation and Transformation — The tone of the discussion was largely constructive and solution-oriented. Panelists shared insights and examples from t…
S70
WS #214 Youth-Led Digital Futures: Integrating Perspectives and Governance — The tone of the discussion was largely constructive and solution-oriented, with speakers offering insights from differen…
S71
Revamping Decision-Making in Digital Governance and the WSIS Framework — The discussion maintained a constructive and collaborative tone throughout, with speakers building upon each other’s poi…
S72
High-Level Session 2: Transforming Health: Integrating Innovation and Digital Solutions for Global Well-being — The tone of the discussion was largely optimistic and forward-looking. Panelists acknowledged challenges but focused on …
S73
Opening of the session — Convergence necessary for progress with limited time.
S74
Main Session on Sustainability &amp; Environment | IGF 2023 — Policies in the digital sector and environmental context should not only consider ideal circumstances but also real circ…
S75
Closing Session  — Minister Tijani’s comment solidified the proactive framework as the summit’s core achievement and elevated the discussio…
S76
#IGF2020: The week so far — Even with this clear need for action, many discussions (with some notable exceptions) have not evolved beyond a survey o…
S77
!” — In these circumstances, tailored redistributive policies are likely to be effective for promoting growth – for example, …
S78
AI Innovation in India — The tone was consistently celebratory, inspirational, and optimistic throughout the discussion. Speakers expressed pride…
S79
Scaling Innovation Building a Robust AI Startup Ecosystem — Overall Tone:The tone was consistently celebratory, appreciative, and inspirational throughout. It began formally with t…
S80
AI for food systems — The tone throughout the discussion was consistently formal, optimistic, and collaborative. It maintained a ceremonial qu…
S81
Closing remarks — The tone is consistently celebratory, optimistic, and forward-looking throughout the discussion. It maintains an enthusi…
S82
Friday Closing Ceremony: Summit of the Future Action Days — The tone was energetic, passionate and optimistic throughout. Speakers conveyed a sense of urgency about the need for yo…
S83
Panel Discussion AI in Healthcare India AI Impact Summit — I think that shouldn’t be so, right? And coming back, that is where I think it would be great to introduce Dr. Aditya Ya…
S84
Open Forum #45 Advancing Cyber Resilience of Critical Infrastructure — These key comments fundamentally shaped the discussion by challenging conventional approaches and introducing more nuanc…
S85
Panel Discussion: 01 — Focus should shift from technology metrics to actual impact on people’s lives
S86
29, filed Jan. 22, 2010, at 9-10. — As online learning systems are deployed, research must be designed to measure their effectiveness-including ‘realtime, i…
S87
Digital Entrepreneurship September 2018 — Online business registration systems can lower the cost of entry for new players, increasing competitive pressure for in…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
D
Dr. Rajiv
2 arguments, 141 words per minute, 494 words, 208 seconds
Argument 1
Mindset change is more critical than technology alone (Dr. Rajiv)
EXPLANATION
Dr. Rajiv argues that shifting professional mindsets is more essential than merely introducing new technologies. He believes that without a change in attitudes and thinking, technological adoption will be limited.
EVIDENCE
He notes that the biggest opportunity for pharmacists lies in the retail chain and that a strong change-management effort is needed, emphasizing that it is a professional and mindset change rather than just a technological upgrade [8].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
External sources stress that a shift in mindset is a prerequisite for successful technology adoption in health, highlighting its centrality in digital health capacity building [S26][S27][S31].
MAJOR DISCUSSION POINT
Major discussion point 1
AGREED WITH
Dr. Gupta, Dr. Freddy, Speaker 1
Argument 2
Regulatory bodies set minimum curriculum standards, allowing institutions to add innovation and management modules (Dr. Rajiv)
EXPLANATION
Dr. Rajiv explains that regulatory agencies such as the Pharmacy Council of India (PCI) prescribe only minimum curriculum requirements, leaving room for institutions to introduce additional subjects like innovation, management, and modern technology. This flexibility can enable curricula to evolve beyond the baseline.
EVIDENCE
He states that PCI provides a minimum set of topics but does not forbid adding innovation or computer programming modules, allowing colleges to go beyond the baseline [133-138].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Regulatory frameworks such as the PCI are described as setting only baseline requirements while permitting colleges to introduce additional innovation, management, and technology subjects [S27][S15].
MAJOR DISCUSSION POINT
Major discussion point 2
DISAGREED WITH
Dr. Gupta, Dr. Sarvajit Kaur
D
Dr. Sarvajit Kaur
4 arguments, 171 words per minute, 973 words, 341 seconds
Argument 1
Building digital competencies through mandatory simulation labs and faculty training (Dr. Sarvajit Kaur)
EXPLANATION
Dr. Kaur describes how the nursing regulator has introduced compulsory simulation labs and equipped them with advanced tools to develop digital health competencies. Faculty members have also been trained to effectively use these resources.
EVIDENCE
She reports that five simulation labs have become mandatory, equipped with mannequins and VR, and that around 2,000 faculty members have been trained to operate these simulators for nursing students [13-14][26].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The creation of two national simulation centers and the training of roughly 2,000 faculty members for digital health education are documented in the capacity-building literature [S27][S15].
MAJOR DISCUSSION POINT
Major discussion point 1
AGREED WITH
Speaker 1
Argument 2
Updated BSc nursing curriculum now embeds AI, digital health, and simulation equipment (Dr. Sarvajit Kaur)
EXPLANATION
Dr. Kaur states that the 2021 revision of the BSc nursing curriculum integrates AI, digital health, and simulation-based learning. This change aims to build competencies in emerging technologies for future nurses.
EVIDENCE
She notes that the curriculum was changed in 2021 to emphasize AI and digital health, making five simulation labs mandatory and providing equipment such as VR and mannequins [11-14].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The 2021 revision of the BSc nursing curriculum that integrates AI, digital health and mandates five simulation labs is reported in the nursing council’s curriculum update [S27][S15].
MAJOR DISCUSSION POINT
Major discussion point 2
Argument 3
Ongoing upskilling via CME/CNE linked to registration renewal and district‑level simulation centers (Dr. Sarvajit Kaur)
EXPLANATION
Dr. Kaur outlines a continuous professional development system where continuing education (CME/CNE) hours are tied to license renewal, and simulation centers are being established at the district level. This creates a sustainable pathway for nurses to acquire digital skills throughout their careers.
EVIDENCE
She mentions linking 150 CNE hours to registration renewal every five years, the creation of national reference simulation centers, and state-level initiatives in Uttar Pradesh and Bihar to build competency centers that integrate digital technology [32-34][126-130][192].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Linking 150 CNE hours to license renewal and establishing district-level simulation centers are described as part of a sustainable professional-development framework [S27][S15].
MAJOR DISCUSSION POINT
Major discussion point 2
AGREED WITH
Dr. Gupta
Argument 4
Linking continuing education (CNE) hours to license renewal creates demand for digital health courses (Dr. Sarvajit Kaur)
EXPLANATION
Dr. Kaur explains that making CNE hours mandatory for license renewal incentivizes nurses to enroll in digital health courses, thereby driving uptake of such training. This regulatory lever ensures that digital competencies become a requirement for practice.
EVIDENCE
She states that 150 CNE hours have been linked to the renewal of nurses’ registration, making the completion of digital health courses a prerequisite for licensure [192].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Mandating CNE hours for registration renewal is highlighted as a regulatory lever that drives uptake of digital health training [S27][S15].
MAJOR DISCUSSION POINT
Major discussion point 6
S
Speaker 1
4 arguments, 132 words per minute, 583 words, 263 seconds
Argument 1
Technology solutions must be designed to scale with users’ capacity, hand‑holding them through transformation (Speaker 1)
EXPLANATION
Speaker 1 argues that health‑tech products should be built to adapt to the varying digital maturity of healthcare institutions, providing progressive support as users become more capable. This hand‑holding approach ensures adoption across both digitally native and analog settings.
EVIDENCE
He outlines a design principle that products should scale in complexity with institutional readiness, describing how their EISU solution ranges from basic remote vital monitoring to advanced clinical decision support depending on the user’s digital maturity [112-120].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Scalability and progressive hand-holding designs are recommended to ensure adoption across varied digital maturity levels [S28][S15][S31].
MAJOR DISCUSSION POINT
Major discussion point 1
AGREED WITH
Dr. Suresh Yadav
DISAGREED WITH
Dr. Freddy
Argument 2
Products should adapt to varying digital maturity of institutions, scaling in complexity as readiness grows (Speaker 1)
EXPLANATION
Speaker 1 reiterates that health‑tech offerings need to be flexible, allowing institutions with low digital readiness to start simple and later adopt more sophisticated functionalities. This scalability reduces barriers to entry and supports gradual transformation.
EVIDENCE
He emphasizes that solutions must be able to hand-hold users through the digital transformation journey, scaling in complexity as the institution’s readiness improves [112-118].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Adapting health-tech products to institutions’ digital readiness and providing incremental functionality is advocated in capacity-building guidance [S15][S28][S31].
MAJOR DISCUSSION POINT
Major discussion point 4
Argument 3
Health‑tech firms should co‑create hands‑on training modules with academic bodies to shape the next workforce (Speaker 1)
EXPLANATION
Speaker 1 calls for health‑tech companies to partner with academic institutions to design practical training modules, thereby influencing the skill set of future healthcare workers. Co‑creation ensures that technology and education evolve together.
EVIDENCE
He cites the example of collaborating with the Academy of Digital Health Sciences to co-design hands-on courses similar to the professional nurses course, exposing students early to digital tools [122-124].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Health-tech companies are urged to partner with academia to co-design practical training modules for future healthcare workers [S27][S15].
MAJOR DISCUSSION POINT
Major discussion point 4
AGREED WITH
Dr. Sarvajit Kaur
Argument 4
There are many ideators but few executors; capacity building is needed to nurture true health‑tech entrepreneurs (Speaker 1)
EXPLANATION
Speaker 1 observes that while many individuals generate ideas, there is a shortage of people who can execute them into viable health‑tech ventures. Strengthening capacity building programs is essential to convert ideation into entrepreneurship.
EVIDENCE
He notes that the panel discussed having many ideators but not enough executors, emphasizing the need for capacity building to create true health-tech entrepreneurs [108-111].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The gap between idea generation and execution, and the need for entrepreneurship capacity building, is identified in sector analyses [S15][S31].
MAJOR DISCUSSION POINT
Major discussion point 5
D
Dr. Gupta
2 arguments, 122 words per minute, 848 words, 414 seconds
Argument 1
Frequent curriculum revisions are impractical; continuous professional development is a more feasible route (Dr. Gupta)
EXPLANATION
Dr. Gupta points out that overhauling curricula is a lengthy process, making it difficult to keep pace with rapid technological change. He suggests that ongoing CME/CNE programs offer a more practical way to keep professionals up‑to‑date.
EVIDENCE
He explains that curriculum changes occur only once in a decade, taking three years to implement, and therefore recommends using CME/CNE to continuously update skills instead of repeatedly revising curricula [142-148].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Continuous professional development is presented as a practical alternative to infrequent curriculum overhauls in digital health implementation studies [S31][S27].
MAJOR DISCUSSION POINT
Major discussion point 2
AGREED WITH
Dr. Sarvajit Kaur
DISAGREED WITH
Dr. Rajiv, Dr. Sarvajit Kaur
Argument 2
Age is not the limiting factor; a growth mindset is essential for adopting AI and digital tools (Dr. Gupta)
EXPLANATION
Dr. Gupta argues that age does not determine the ability to adopt AI; rather, a growth mindset is crucial. He emphasizes that upskilling should focus on changing attitudes rather than targeting specific age groups.
EVIDENCE
He cites observations from courses where 80 % of participants were over 20 years old, with the oldest being 50, concluding that mindset, not age, is the key factor [219-220].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Evidence points to mindset, rather than age, as the decisive factor for AI adoption among health professionals [S26][S27][S31].
MAJOR DISCUSSION POINT
Major discussion point 8
AGREED WITH
Dr. Freddy
Dr. Suresh Yadav
3 arguments · 189 words per minute · 1358 words · 429 seconds
Argument 1
Global shortage of health workers costs ~15 % of world GDP; AI can multiply individual worker capacity (Dr. Suresh Yadav)
EXPLANATION
Dr. Yadav quantifies the economic impact of health‑worker shortages, estimating a cost of about 15 % of global GDP, and proposes AI as a multiplier that can significantly increase the productivity of each worker.
EVIDENCE
He states that the shortage of healthcare professionals costs roughly 15 % of the $120 trillion global GDP, and suggests AI solutions could enable one doctor or health worker to serve many more patients [53-55][79-80].
MAJOR DISCUSSION POINT
Major discussion point 3
Argument 2
Implementing health‑ERP ecosystems can connect fragmented providers and reduce silos (Dr. Suresh Yadav)
EXPLANATION
Dr. Yadav highlights the fragmented nature of India’s health system and proposes health‑ERP platforms, modeled on corporate ERP systems, to integrate doctors, pharmacists, nurses, and volunteers into a unified ecosystem.
EVIDENCE
He describes the current siloed structure, contrasts it with the integrated US system, and argues that a health-ERP could transform the fragmented Indian landscape into a connected ecosystem [81-86].
MAJOR DISCUSSION POINT
Major discussion point 3
AGREED WITH
Speaker 1
DISAGREED WITH
Speaker 2
Argument 3
Telemedicine and cross‑border doctor‑patient platforms can extend specialist access worldwide (Dr. Suresh Yadav)
EXPLANATION
Dr. Yadav envisions a global health network where patients can receive specialist consultations remotely, reducing the need for long travel. He sees this as an opportunity for India to serve both its domestic population and the diaspora.
EVIDENCE
He explains that remote scans can be uploaded for diagnosis, allowing Indian doctors to treat patients in Kenya or Indian expatriates abroad, potentially connecting 1.5 billion domestic users with 20 million overseas Indians [88-92].
MAJOR DISCUSSION POINT
Major discussion point 3
Speaker 2
1 argument · 231 words per minute · 310 words · 80 seconds
Argument 1
Pricing models that succeed in the US often fail in India; solutions must be tailored to the Indian market context (Speaker 2)
EXPLANATION
Speaker 2 points out that digital‑health products that achieve high revenues in the United States frequently encounter pricing resistance in India, indicating the need for market‑specific pricing strategies.
EVIDENCE
He shares firsthand experience: two ventures that generated $100 million in the US could not replicate their pricing success in India despite two years of effort, and he asks for suggestions on adapting pricing to the Indian context [187-190].
MAJOR DISCUSSION POINT
Major discussion point 6
DISAGREED WITH
Dr. Suresh Yadav
Speaker 3
1 argument · 144 words per minute · 15 words · 6 seconds
Argument 1
Formation of a consortium of innovative healthcare universities to foster collaboration and entrepreneurship (Speaker 3)
EXPLANATION
Speaker 3 proposes creating a consortium that brings together innovative healthcare universities, aiming to promote collaboration, research, and entrepreneurship in the health‑tech sector.
EVIDENCE
He briefly adds a comment about the consortium of innovative healthcare universities during the closing remarks [221].
MAJOR DISCUSSION POINT
Major discussion point 5
Anish
1 argument · 172 words per minute · 664 words · 231 seconds
Argument 1
Politicians need outcome‑focused education and a structured innovation pipeline (e.g., DARPA‑style) to adopt and scale new technologies (Anish)
EXPLANATION
Anish argues that policymakers should receive training focused on outcomes and be part of a structured innovation pipeline, similar to DARPA’s stage‑gate model, to effectively evaluate and scale emerging technologies.
EVIDENCE
He describes a DARPA-style pipeline with stage-gates for testing ideas, validating successes, and then having policymakers scale effective solutions, emphasizing the need for outcome-focused education for politicians [179-184].
MAJOR DISCUSSION POINT
Major discussion point 7
Dr. Freddy
2 arguments · 149 words per minute · 166 words · 66 seconds
Argument 1
Need for “ambassadors” and age‑agnostic training to bridge gaps between senior faculty and new AI tools (Dr. Freddy)
EXPLANATION
Dr. Freddy highlights the challenge of training senior faculty members who lack exposure to AI tools, calling for age‑agnostic, ambassador‑driven programs that can bridge this gap and promote widespread adoption.
EVIDENCE
He expresses concern that older educators may lack AI training and stresses the need for mindset change irrespective of age, noting that upskilling must be age-agnostic [195-196].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Age-agnostic, ambassador-driven programs are recommended to bridge senior faculty gaps and promote mindset change across all ages [S27][S31][S26].
MAJOR DISCUSSION POINT
Major discussion point 1
AGREED WITH
Dr. Gupta
DISAGREED WITH
Speaker 1
Argument 2
Upskilling must target mindset change across all age groups (Dr. Freddy)
EXPLANATION
Dr. Freddy reiterates that effective upskilling should focus on changing mindsets rather than targeting specific age cohorts, ensuring that both younger and older professionals can adopt AI and digital health tools.
EVIDENCE
He reiterates that the barrier is not age but mindset, emphasizing that training programs should be designed to engage learners of any age [195-196].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Capacity-building literature emphasizes that upskilling should focus on mindset transformation for learners of any age, not on age-based targeting [S27][S31][S26].
MAJOR DISCUSSION POINT
Major discussion point 8
AGREED WITH
Dr. Rajiv, Dr. Gupta, Speaker 1
Agreements
Agreement Points
Mindset change is essential for adoption of digital health and AI technologies
Speakers: Dr. Rajiv, Dr. Gupta, Dr. Freddy, Speaker 1
Mindset change is more critical than technology alone (Dr. Rajiv)
Age is not the limiting factor; a growth mindset is essential for adopting AI and digital tools (Dr. Gupta)
Upskilling must target mindset change across all age groups (Dr. Freddy)
Technology solutions must be designed to scale with users’ capacity, hand‑holding them through transformation (Speaker 1)
All four speakers stress that changing professional attitudes and cultivating a growth mindset are more decisive than merely introducing new tools; without this shift, technology uptake will remain limited [8][219-220][195-196][108-111].
POLICY CONTEXT (KNOWLEDGE BASE)
Recognised as a prerequisite for digital transformation, with the Global Digital Compact highlighting mindset as a barrier and the Davos 2025 discussion emphasizing a growth mindset for reskilling [S51][S47].
Continuous professional development (CME/CNE) linked to licensing is a practical way to keep health workers up‑to‑date
Speakers: Dr. Sarvajit Kaur, Dr. Gupta
Ongoing upskilling via CME/CNE linked to registration renewal and district‑level simulation centers (Dr. Sarvajit Kaur)
Frequent curriculum revisions are impractical; continuous professional development is a more feasible route (Dr. Gupta)
Both speakers argue that, given the long cycles of curriculum change, tying mandatory continuing-education credits to licence renewal provides a flexible, ongoing mechanism for skill refreshment [126-130][192][142-148].
Technology solutions must be scalable and adapt to the varying digital maturity of health institutions
Speakers: Speaker 1, Dr. Suresh Yadav
Technology solutions must be designed to scale with users’ capacity, hand‑holding them through transformation (Speaker 1)
Implementing health‑ERP ecosystems can connect fragmented providers and reduce silos (Dr. Suresh Yadav)
Both emphasize that products should start simple for low-maturity settings and grow in complexity as institutions become more digital, thereby overcoming fragmentation and enabling broader adoption [112-120][81-86].
Collaboration between health‑tech firms and academic bodies is needed to co‑create hands‑on training for the future workforce
Speakers: Speaker 1, Dr. Sarvajit Kaur
Health‑tech firms should co‑create hands‑on training modules with academic bodies to shape the next workforce (Speaker 1)
Building digital competencies through mandatory simulation labs and faculty training (Dr. Sarvajit Kaur)
Both see value in joint development of practical training resources – simulation labs and faculty upskilling on the one hand, and co-designed curricula on the other – to embed digital skills early in health-professional education [122-124][13-14][26].
Age is not a barrier to digital upskilling; a growth mindset matters more
Speakers: Dr. Gupta, Dr. Freddy
Age is not the limiting factor; a growth mindset is essential for adopting AI and digital tools (Dr. Gupta)
Need for “ambassadors” and age‑agnostic training to bridge gaps between senior faculty and new AI tools (Dr. Freddy)
Both speakers underline that older professionals can adopt AI if they embrace a growth mindset, and that training programmes should be age-agnostic and ambassador-driven [219-220][195-196].
POLICY CONTEXT (KNOWLEDGE BASE)
Supported by Davos 2025 insights that a growth mindset enables continuous upskilling regardless of age, and by IGF findings that mindset shifts are critical for digital adoption [S47][S51].
Regulatory frameworks set minimum standards but allow institutions to add innovative content
Speakers: Dr. Rajiv
Regulatory bodies set minimum curriculum standards, allowing institutions to add innovation and management modules (Dr. Rajiv)
Dr. Rajiv points out that bodies like the Pharmacy Council of India prescribe only baseline topics, leaving space for colleges to introduce additional subjects such as innovation, management, or programming [133-138].
POLICY CONTEXT (KNOWLEDGE BASE)
Aligns with calls for human-centric regulation that sets baseline safeguards while preserving creativity and entrepreneurship [S38][S42].
Similar Viewpoints
Both agree that chronological age does not determine the ability to adopt digital health tools; the decisive factor is a willingness to change one’s mindset, calling for age‑agnostic training approaches [219-220][195-196].
Speakers: Dr. Gupta, Dr. Freddy
Age is not the limiting factor; a growth mindset is essential for adopting AI and digital tools (Dr. Gupta)
Upskilling must target mindset change across all age groups (Dr. Freddy)
Both stress that digital health products should be built to accommodate varying institutional readiness and to integrate fragmented actors into a unified ecosystem, thereby enabling scalable impact [112-120][81-86].
Speakers: Speaker 1, Dr. Suresh Yadav
Technology solutions must be designed to scale with users’ capacity, hand‑holding them through transformation (Speaker 1)
Implementing health‑ERP ecosystems can connect fragmented providers and reduce silos (Dr. Suresh Yadav)
Unexpected Consensus
Both policymakers and health educators see formal curriculum overhaul as insufficient and advocate for alternative, outcome‑focused learning pathways
Speakers: Dr. Gupta, Anish
Frequent curriculum revisions are impractical; continuous professional development is a more feasible route (Dr. Gupta)
Politicians need outcome‑focused education and a structured innovation pipeline (e.g., DARPA‑style) to adopt and scale new technologies (Anish)
Although they address different audiences (health professionals vs. politicians), both recognize that traditional curriculum changes are too slow and propose modular, outcome-oriented training mechanisms (CME/CNE for clinicians; innovation-pipeline education for policymakers) as more agile solutions [142-148][179-184].
POLICY CONTEXT (KNOWLEDGE BASE)
Reflects the push for outcome-based policy making and the consensus that incremental curriculum changes are inadequate, as highlighted in capacity-building discussions and the AI 2.0 Indian education re-imagining [S40][S49].
Overall Assessment

The panel shows strong convergence on three pillars: (1) a mindset‑oriented cultural shift, (2) continuous, competency‑linked professional development, and (3) scalable, ecosystem‑aware technology design coupled with academia‑industry co‑creation. Age is widely dismissed as a barrier, and regulatory flexibility is acknowledged as a lever for innovation.

High consensus across diverse stakeholders (regulators, clinicians, technologists, and policy experts) indicates a shared understanding that successful digital health transformation hinges on mindset change, flexible learning pathways, and interoperable, scalable solutions. This alignment suggests that policy and investment efforts can be coordinated around these common priorities to accelerate AI‑enabled health system reforms.

Differences
Different Viewpoints
How to keep health curricula up‑to‑date with rapid digital advances – revise formal curricula versus rely on continuous professional development (CME/CNE)
Speakers: Dr. Rajiv, Dr. Gupta, Dr. Sarvajit Kaur
Regulatory bodies set minimum curriculum standards, allowing institutions to add innovation and management modules (Dr. Rajiv)
Frequent curriculum revisions are impractical; continuous professional development is a more feasible route (Dr. Gupta)
Dr. Rajiv stresses that the Pharmacy Council of India (PCI) only prescribes a minimum set of topics, so colleges can freely introduce innovation, AI and computer-programming modules beyond the baseline [133-138]. Dr. Gupta counters that formal curriculum changes take years (once per decade, three-year implementation) and therefore cannot keep pace with technology; he advocates using CME/CNE programmes to update skills continuously [142-148]. Dr. Sarvajit Kaur supports the CPD route by linking 150 CNE hours to licence renewal and building district-level simulation centres, which she presents as a sustainable way to upskill nurses [192][32-34]. The three speakers thus disagree on whether the primary lever should be flexible curriculum design or systematic CPD mechanisms.
POLICY CONTEXT (KNOWLEDGE BASE)
The tension mirrors the outcome-vs-process debate in capacity-building literature, which advocates continuous professional development and co-created learning over static curriculum revisions [S40].
Primary mechanism for building health‑tech entrepreneurship capacity – product‑centric scalable design by tech firms versus human‑centric ambassador/mentor programmes
Speakers: Speaker 1, Dr. Freddy
Technology solutions must be designed to scale with users’ capacity, hand‑holding them through transformation (Speaker 1)
Need for “ambassadors” and age‑agnostic training to bridge gaps between senior faculty and new AI tools (Dr. Freddy)
Speaker 1 argues that health-tech companies should embed a design principle where platforms adapt to an institution’s digital maturity, providing progressive functionality (e.g., the EISU solution) and co-creating hands-on courses with academia [112-120][122-124]. Dr. Freddy, by contrast, emphasizes the shortage of senior educators who can act as AI ambassadors and calls for age-agnostic, mentor-driven programmes to transfer knowledge, rather than relying on product scalability alone [195-196]. The disagreement lies in whether capacity building should be driven mainly by adaptable technology products or by human mentorship networks.
POLICY CONTEXT (KNOWLEDGE BASE)
Corresponds to discussions on the dual responsibility of tech firms for workforce training and the emphasis on human-centric mentorship models for entrepreneurship ecosystems [S39][S46][S38].
Approach to pricing digital‑health solutions in India – market‑driven adaptation versus scaling through large‑scale public‑sector ecosystems
Speakers: Speaker 2, Dr. Suresh Yadav
Pricing models that succeed in the US often fail in India; solutions must be tailored to the Indian market context (Speaker 2)
Implementing health‑ERP ecosystems can connect fragmented providers and reduce silos (Dr. Suresh Yadav)
Speaker 2 reports that two US-successful ventures could not replicate pricing success in India despite two years of effort, asking for advice on Indian-specific pricing strategies [187-190]. Dr. Suresh Yadav focuses on creating a unified health-ERP ecosystem to overcome fragmentation, implying that scaling through integrated platforms will drive adoption, but does not address pricing challenges directly. Their positions diverge on whether the primary barrier is pricing strategy or systemic integration.
Unexpected Differences
Regulatory flexibility versus perceived rigidity of curriculum change
Speakers: Dr. Rajiv, Dr. Gupta
Regulatory bodies set minimum curriculum standards, allowing institutions to add innovation and management modules (Dr. Rajiv)
Frequent curriculum revisions are impractical; continuous professional development is a more feasible route (Dr. Gupta)
It is unexpected that Dr. Rajiv, who highlights the freedom for institutions to go beyond the PCI minimum, assumes that curriculum change is a viable lever, while Dr. Gupta, aware of the same regulatory framework, stresses that actual curriculum revisions are slow and cumbersome, preferring CPD. The tension between perceived regulatory flexibility and practical rigidity was not anticipated given the shared regulatory context.
POLICY CONTEXT (KNOWLEDGE BASE)
Ties to the broader debate on regulation as an enabler rather than a barrier, and the need for flexible policy frameworks to support curriculum innovation [S38][S42][S51].
Optimism about AI solving workforce shortages versus concern about senior faculty’s lack of AI competence
Speakers: Dr. Suresh Yadav, Dr. Freddy
Global shortage of health workers costs ~15 % of world GDP; AI can multiply individual worker capacity (Dr. Suresh Yadav)
Need for “ambassadors” and age‑agnostic training to bridge gaps between senior faculty and new AI tools (Dr. Freddy)
Dr. Yadav is confident that AI solutions can dramatically increase productivity and offset shortages [53-55][79-80], whereas Dr. Freddy worries that senior educators lack AI training, potentially limiting AI deployment unless targeted ambassador programmes are created [195-196]. The clash between a technology-centric solution and a human-capacity bottleneck was not foreseen.
POLICY CONTEXT (KNOWLEDGE BASE)
Reflects the split between optimistic views of AI’s transformative potential and cautious concerns about skill gaps, bias, and job displacement, as documented in AI innovation forums and AI primer discussions [S43][S44][S45].
Overall Assessment

The discussion revealed three main axes of disagreement: (1) the best mechanism to keep health curricula current – flexible curriculum design versus continuous professional development; (2) the primary driver of health‑tech entrepreneurship capacity – scalable product design versus mentor‑driven ambassador programmes; (3) the dominant barrier to scaling digital health in India – pricing strategy versus systemic integration through health‑ERP ecosystems. While participants largely concurred on the importance of mindset change and the need for capacity building, they diverged on the institutional pathways to achieve these goals.

The level of disagreement is moderate to high. The disagreements are substantive because they concern policy levers (curriculum vs CPD), industry strategy (product vs mentorship), and market dynamics (pricing vs integration). These divergences could affect the speed and effectiveness of digital health rollout, requiring coordinated policy‑industry dialogue to reconcile the differing approaches.

Partial Agreements
Both agree that a shift in professional mindset is essential for digital health adoption. Dr. Rajiv stresses that without a professional and mindset change, technology will not be embraced [8]. Speaker 1 complements this by proposing that technology should be built to accommodate varying readiness levels, effectively supporting the mindset transition through hand‑holding designs [112-118]. They differ on the lever (mindset advocacy vs product design) but share the same ultimate goal of enabling adoption.
Speakers: Dr. Rajiv, Speaker 1
Mindset change is more critical than technology alone (Dr. Rajiv)
Technology solutions must be designed to scale with users’ capacity, hand‑holding them through transformation (Speaker 1)
Both emphasize that mindset, not age, determines successful upskilling. Dr. Gupta notes that 80 % of participants were over 20 years old, concluding that mindset matters more than age [219-220]. Dr. Freddy similarly argues that training should be age‑agnostic and focus on changing attitudes [195-196]. Their agreement lies in the importance of mindset; the slight difference is that Dr. Freddy calls for ambassador‑driven programmes, whereas Dr. Gupta highlights general growth‑mindset training.
Speakers: Dr. Gupta, Dr. Freddy
Age is not the limiting factor; a growth mindset is essential for adopting AI and digital tools (Dr. Gupta)
Upskilling must target mindset change across all age groups (Dr. Freddy)
Takeaways
Key takeaways
Mindset change is more critical than technology alone for advancing healthcare professions.
Regulators are embedding AI and digital health into curricula (e.g., BSc nursing 2021) and mandating simulation labs and faculty training.
Continuous professional development (CME/CNE linked to license renewal) is preferred over frequent curriculum revisions.
Global health workforce shortages represent a massive economic cost; AI can multiply individual worker capacity and reduce fragmentation via health‑ERP ecosystems.
Health‑tech products must be designed to scale with users’ digital maturity, providing hand‑holding and increasing complexity as readiness grows.
Health‑tech firms should co‑create hands‑on training modules with academic bodies to shape the next workforce.
India’s entrepreneurial ecosystem has many ideators but few executors; fostering executors through consortia of innovative universities is needed.
Pricing models successful in the US often fail in India; solutions must be tailored to the Indian market and linked to mandatory continuing education.
Policymakers need outcome‑focused education and an innovation‑pipeline management approach (DARPA‑style) to adopt new technologies.
Age is not the limiting factor; a growth mindset is essential, and training should be age‑agnostic.
Resolutions and action items
Launch of the Global AI Academy for training healthcare professionals.
Establishment of district‑level simulation and competency centers for nurses across India.
Linking 150 CNE hours to nursing registration renewal to drive uptake of digital health courses.
Development of a one‑ to two‑year digital health specialization program for nurses.
Encouragement for institutions to add optional innovation, management, and AI modules beyond the regulatory minimum.
Proposal to develop a health‑ERP ecosystem to connect fragmented providers and reduce silos.
Call for health‑tech companies to co‑design hands‑on digital health curricula with academic institutions.
Formation of a consortium of innovative healthcare universities to foster collaboration and entrepreneurship.
Ongoing upskilling programs for drug inspectors and pharmacists on modern digital tools.
Use of remote‑surgery training as a model for continuous upskilling of senior clinicians.
Unresolved issues
Specific platform and scaling strategy for mental‑health digital solutions aimed at healthcare professionals in the Indian context.
Effective pricing models for digital health products that work in the Indian market.
Concrete plan for training politicians and policymakers on emerging technologies beyond conceptual suggestions.
Concrete mechanisms to increase the number of health‑tech entrepreneurs/executors.
Implementation details and stakeholder roadmap for the proposed health‑ERP ecosystem in India.
Comprehensive strategy to upskill senior faculty lacking AI expertise while maintaining service delivery.
Suggested compromises
Use CME/CNE linked to license renewal as an alternative to frequent curriculum changes.
Allow institutions to supplement the regulatory minimum curriculum with optional innovation and management courses.
Adopt an age‑agnostic training approach, enabling both senior and junior professionals to serve as trainers.
Mandate basic simulation labs while offering advanced optional modules to balance foundational requirements with higher‑level capacity building.
Thought Provoking Comments
The shortage of healthcare professionals amounts to around 10‑12 million unfilled jobs, a gap that translates to roughly 15 % of global GDP, a massive economic drag that also intertwines with climate change impacts. Digital and AI solutions can multiply the reach of each worker, and a health‑ERP that connects doctors, pharmacists, nurses and volunteers could transform fragmented silos into an integrated ecosystem.
He quantifies the crisis in economic terms, links health workforce shortages to climate‑health dynamics, and proposes a systemic, technology‑driven solution rather than isolated interventions.
This comment shifted the conversation from sector‑specific challenges to a global, macro‑economic perspective, prompting others to discuss scalable digital platforms (e.g., health ERP) and the need for ecosystem‑wide integration. It set the stage for later remarks on AI‑enabled capacity multiplication and cross‑border health services.
Speaker: Dr. Suresh Yadav
We have integrated AI and digital health into the BSc nursing curriculum (2021), made five simulation labs mandatory, equipped them with VR and mannequins, and are training faculty (2,000 in Gurgaon) to ensure these tools are actually used, not just stored. We also link 150 CNE hours to licence renewal and are building district‑level competency centres.
Provides a concrete, multi‑layered roadmap for embedding digital competencies in nursing education, addressing both student and in‑service training, and tying learning to regulatory incentives.
Her detailed description introduced the theme of regulatory‑driven curriculum reform, leading to follow‑up questions from Dr. Gupta about curriculum agility versus CME, and inspired discussion on scaling simulation centres and continuous professional development.
Speaker: Dr. Sarvajit Kaur
Technology companies should design AI solutions that are scalable in complexity, not just volume – a product must hand‑hold healthcare workers through their digital transformation journey, adapting to varying digital maturity across institutions.
Shifts focus from technology as a static tool to a dynamic, user‑centric design philosophy that accommodates diverse readiness levels, emphasizing co‑creation of the future workforce.
Prompted a deeper conversation about the role of health‑tech firms in capacity building, influencing later remarks about co‑designing curricula and the need for age‑agnostic, ambassador‑type trainers.
Speaker: Speaker 1 (Tech entrepreneur)
We need an ‘innovation pipeline management’ in government – set ambitious targets, let entrepreneurs propose ideas, stage‑gate them, validate successes, and then have policymakers scale what works. This reframes political training from button‑pressing to re‑imagining problem‑solving with new tech.
Introduces a novel governance model for integrating technology into policy, moving beyond ad‑hoc training to systematic, outcome‑driven innovation adoption.
Created a turning point where the dialogue moved from individual capacity building to systemic change in how governments adopt technology, influencing subsequent discussion on political education and funding models.
Speaker: Anish
PCI sets only the minimum curriculum requirements; it does not forbid adding courses on innovation, management, or modern technology. Institutions can go beyond the baseline and incorporate new subjects as they wish.
Challenges the perception that regulatory bodies are rigid, highlighting flexibility within existing frameworks to innovate educational content.
Encouraged participants to view regulatory constraints as opportunities, reinforcing earlier points about curriculum agility and prompting Dr. Gupta’s reflection on how to communicate such flexibility to stakeholders.
Speaker: Dr. Rajiv
Curriculum changes happen once a decade and take years; therefore, we must rely on continuous professional development (CME/CNE) and district‑level simulation centres to upskill the existing 4 million nurses.
Acknowledges practical limits of formal curriculum revision and proposes a pragmatic, scalable solution for upskilling the current workforce.
Steered the conversation toward immediate, actionable strategies (simulation centres, CME linkage) rather than waiting for long‑term curriculum overhaul, influencing the later focus on modular training and digital platforms.
Speaker: Dr. Sarvajit Kaur
Age is not the issue; mindset is. Whether a 25‑year‑old or a 55‑year‑old, the barrier to adopting AI is the willingness to change, not the number of years of experience.
Distills a recurring theme into a concise insight that reframes the challenge of upskilling senior professionals.
Unified various concerns about older faculty and practitioners, reinforcing the need for mindset‑focused interventions and supporting the launch of the Global AI Academy.
Speaker: Dr. Gupta (paraphrased from discussion with Dr. Freddy and others)
Overall Assessment

The discussion was propelled forward by a series of high‑impact remarks that moved the dialogue from isolated challenges to systemic, macro‑level solutions. Dr. Suresh Yadav’s economic framing of workforce shortages and his vision of an integrated health‑ERP set a global context, while Dr. Sarvajit Kaur’s concrete regulatory actions demonstrated how that vision can be operationalized in education. The tech entrepreneur’s design principle and Anish’s innovation‑pipeline model introduced fresh perspectives on product development and governmental adoption, respectively, prompting participants to reconsider the roles of industry and policy. Dr. Rajiv’s clarification about regulatory flexibility and the repeated emphasis on mindset over age helped re‑orient the conversation toward pragmatic, scalable pathways—continuous professional development, simulation centres, and flexible curricula. Collectively, these comments reshaped the tone from problem‑identification to solution‑orientation, deepened the analysis of capacity‑building mechanisms, and aligned the participants around a shared agenda of mindset change, ecosystem integration, and agile governance.

Follow-up Questions
Do we have enough capacity to have more entrepreneurs like you, and how can we bridge the gap between ideators and entrepreneurs?
Understanding the entrepreneurial ecosystem is crucial to translate health tech ideas into executable solutions and scale impact.
Speaker: Speaker 1
Given the rapid pace of technology, should we rely on CME/continuous training rather than frequent curriculum revisions for health professions?
Curriculum updates are slow; a sustainable model for upskilling the workforce is needed to keep pace with technological change.
Speaker: Dr. Gupta
What training and upskilling are being provided to drug inspectors and pharmacists to understand digital health and AI?
Regulators must stay current with technology to ensure safety, compliance, and effective adoption of digital health tools.
Speaker: Dr. Gupta
How can we educate and train politicians/policymakers about emerging technologies so they can fund and adopt innovative health solutions?
Policy decisions depend on understanding new tech; effective training can enable rapid, evidence‑based health policy reforms.
Speaker: Dr. Gupta
What platform would be best to scale mental‑health solutions for healthcare professionals in India?
A scalable, context‑appropriate platform is needed to support the mental well‑being of clinicians and improve workforce resilience.
Speaker: Speaker 2
How can pricing models for digital‑health products be adapted for the Indian market?
Successful pricing strategies in the US often fail in India; affordable models are essential for widespread adoption.
Speaker: Speaker 2
How can we train current faculty (often older) to implement AI and enable them to train Gen‑Z students, addressing the generational skill gap?
Bridging the generational gap ensures that experienced educators can effectively teach AI to the next generation of health professionals.
Speaker: Dr. Freddy
Investigate the climate‑health nexus and its impact on healthcare delivery and emissions.
Understanding how healthcare contributes to and is affected by climate change can guide policies that reduce carbon footprints while improving health outcomes.
Speaker: Dr. Suresh Yadav
Assess the effectiveness and scalability of simulation centers and digital competency labs in nursing education across districts.
Evaluating these facilities will determine their role in building digital health competencies for the large existing nursing workforce.
Speaker: Dr. Sarvajit Kaur
Study the feasibility and impact of health‑ERP systems to reduce fragmentation in the Indian healthcare ecosystem.
A unified ERP could improve coordination among doctors, pharmacists, and nurses, enhancing efficiency and patient care.
Speaker: Dr. Suresh Yadav
Explore models for integrating AI‑driven remote diagnostics (e.g., AI for TB detection) into national health programs.
AI‑based detection can increase case finding and treatment adherence, addressing major public‑health challenges.
Speaker: Anish
Examine strategies for building a robust community‑pharmacy workforce to strengthen last‑mile healthcare delivery.
Community pharmacists are underutilized; research can identify barriers and solutions to expand their role in the health system.
Speaker: Dr. Rajiv
Research dynamic digital‑health curricula that can be updated continuously without formal curriculum revisions.
A flexible curriculum model would keep education aligned with fast‑evolving technologies.
Speaker: Dr. Sarvajit Kaur
Evaluate the impact of linking Continuing Nursing Education (CNE) hours to license renewal on nurse upskilling and patient outcomes.
Incentivizing mandatory education may improve competency and quality of care across the nursing workforce.
Speaker: Dr. Sarvajit Kaur
Investigate the role of innovation‑pipeline management (DARPA‑like models) in government health‑tech adoption.
Structured pipelines could accelerate translation of innovative ideas into scalable health policies and programs.
Speaker: Anish
Assess the scalability of AI‑driven remote‑surgery training for existing surgeons and its impact on clinical practice.
Understanding how seasoned clinicians adopt remote‑surgery tech can inform broader upskilling strategies.
Speaker: Dr. Rajiv

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.