AI for Safer Workplaces & Smarter Industries Transforming Risk into Real-Time Intelligence


Session at a glance
Summary, keypoints, and speakers overview

Summary

The session opened with Naveen GV explaining that Benchmark Gen Street is moving its 30-year-old EHS SaaS platform to an “AI-first” architecture to improve safety and predictive intelligence [1-6][9]. He introduced “Jenny AI”, an agent that can analyse a photographed hazard, auto-populate the observation form and ask follow-up questions when context is missing, eliminating manual data entry [25-36][40-43]. The same agent can accept spoken descriptions in Hindi, transcribe them and structure the information for validation, demonstrating multilingual support for non-technical users [46-58][59]. For incident investigation, the platform offers a “5Y AI” that iteratively asks why-questions to uncover root causes and then suggests corrective actions using a hierarchy-of-controls model [61-70][80-90]. A separate RISC-AI engine aggregates records from observations and incidents to surface patterns, risk heat-maps and predictive insights across the organization [124-133]. After the demo, Naveen highlighted that the next year will focus on autonomous agents that perform the heavy-lifting previously done by humans, aiming for a fully agentic platform [135-139].


The subsequent panel shifted to a broader perspective, arguing that creativity, cognition and culture remain uniquely human strengths that AI cannot originate, and that fear of AI should be countered by emphasizing originality [144-162]. Participants noted that rapid AI advances are shrinking the shelf-life of hard skills, making “applied intelligence” and the ability to create solutions more important than mere coding knowledge [263-274]. They also stressed that AI can democratise learning by providing low-cost access to knowledge in rural and underserved areas, but effective use requires motivation, reliable data and ethical guidance [384-386][411-418]. Ashish Gupta warned that while AI tools are powerful, education systems must teach responsible and ethical usage, and that hands-on creation, not just consumption, builds confidence in learners [301-311][322-327]. The panel agreed that human-centered skills such as imagination, design thinking and cultural awareness will differentiate people from machines and should be nurtured through inclusive curricula [185-200][263-274].


The session concluded with a product demonstration of ENCODE, an AI-driven learning platform that maps individual growth, offers mentorship and adaptive courses to foster creativity and cognition [446-454]. Overall, the discussion underscored that while AI can automate data capture and analysis in safety and education, its greatest impact will be as a digital co-worker that amplifies human creativity, cognition and cultural insight rather than replacing them [92][140-143].


Keypoints


Major discussion points


AI-first transformation of Benchmark Gen Street’s safety platform – The company is converting its 30-year-old SaaS EHS system into an “AI-first” solution, already having 75 use-cases and moving toward “agentifying” them for autonomous action [5-10]. The new Observation Reporting feature lets workers scan a QR code or upload a photo, which the “Jenny AI” agent analyses, extracts the hazard details and auto-fills the reporting form [22-33]. Similar agents support voice-based reporting in local languages [45-55], 5-Why root-cause analysis [61-70], ergonomics risk detection from video [100-107], regulatory compliance parsing [113-122], and enterprise-wide trend detection via RISC-AI, which aggregates all records to surface precursors and heat-maps of risk [124-133].


AI as a digital co-worker that augments, not replaces, human expertise – The AI agents handle routine data capture (e.g., filling forms from images [36-44] or Hindi speech [58-66]), but they still request clarification when context is missing [40-42] and hand over the structured data for human validation [35-38]. In incident investigations the AI guides users through the 5Y analysis, generating possible causes and corrective actions, while the final decisions remain with supervisors [61-70][92-93]. RISC-AI further provides predictive insights, helping safety teams prioritize interventions [124-133].


Human creativity, cognition and culture as the differentiators in an AI-driven world – Several speakers argue that AI can automate tasks but cannot replace lived experience, intuition and design thinking. The “bumblebee” analogy stresses that creativity remains the uniquely human advantage [148-162]. Panelists highlight that creativity, cognition and culture are the pillars of human capital and will continue to distinguish humans from machines [185-199]. Concerns about job displacement and fear of AI are noted, with the view that quality data and human originality are essential for trustworthy AI outcomes [206-214].


Education, democratization of AI skills and the need for ethical, inclusive learning ecosystems – Participants stress that the shelf-life of hard skills is shrinking, urging a shift from “learning” to “making” and applying knowledge [263-274]. Government-run digital-skilling portals, new education policies, and AI-enabled tools are cited as ways to upskill the massive Indian population, especially in rural areas [301-330][384-389]. The ENCODE platform exemplifies a next-gen, AI-powered learning network that maps individual interests, provides mentorship, and fosters creativity, aiming to make education accessible and future-ready [440-452].


Overall purpose / goal of the discussion


The session aimed to showcase how an AI-first approach can revolutionize workplace safety (through autonomous agents, predictive risk analytics, and integrated compliance) while simultaneously exploring the broader societal impact of AI on jobs, skills, and education. The speakers sought to convince the audience that AI should be positioned as a collaborative partner that amplifies human creativity, cognition and culture, and they announced partnerships and product demos that will extend these capabilities into the education sector.


Overall tone and its evolution


Opening (0:00-20:00): Highly technical and promotional, emphasizing product capabilities, efficiency gains, and the vision of autonomous AI agents.


Mid-section (20:00-45:00): Shifts to a reflective and cautionary tone, with speakers expressing concerns about AI-driven job loss, the need to preserve human originality, and the fear many feel.


Later segment (45:00-80:00): Becomes optimistic and collaborative, focusing on education, democratization, ethical use, and the promise of AI-enabled creativity.


Closing (80:00-end): Returns to an enthusiastic, celebratory tone, highlighting partnerships, product launches, and a collective commitment to “keep humanity intact” while leveraging AI.


Overall, the conversation moves from a product-centric showcase to a broader philosophical dialogue about humanity’s role in an AI-augmented future, ending on a hopeful note about collaborative innovation.


Speakers

Speakers (from the provided list)


Naveen GV – Representative of Benchmark Gen Street, discussing AI-first transformation for environment, health & safety platforms.


Speaker 1 – Product demo presenter who walks through AI use-cases for observation reporting and risk analysis.


Speaker 2 – Keynote speaker on design, creativity and the future of work in the age of AI.


Speaker 3 – Session moderator who introduced Dr. Shweta Chaudhary and the panel.


Shweta Chaudhary – Founder & Director of CodeEDU; host of the session on creativity, cognition & culture.


Piyush Nangru – Founder of a tech school; speaker on “creativity, cognition and culture” and their role in Viksit Bharat. [S1][S18]


Speaker 4 – Panelist discussing AI adoption, fear & opportunities; provides perspectives on government-industry interaction.


Ashish Gupta – Professor at South Asian University; speaker on the “orange economy” and AI in education. [S18]


Speaker 5 – Voice presenting the ENCODE creative learning network platform.


Speaker 6 – Representative of an academic-industry partnership, emphasizing design-oriented coding education.


Audience – Various audience members who asked questions (e.g., Saurav).


Additional speakers (not in the provided list)


Chandan – Colleague of Naveen GV, mentioned as the next presenter but does not have a spoken segment.


Garima – Colleague who was to invite the panelists; appears as a moderator/organiser.


Magma Sree – Introduced herself briefly; role not specified.


Ajay Rivalia – Referred to as “Ajay Rivalia sir,” a partner/guest invited for a group photo.


Viplav – Mentioned as “Viplav sir,” a partner/guest invited for a group photo.


Nandaji – Mentioned as “Nandaji,” a partner/guest invited for a group photo.


Mansi – Referred to as “Mansi,” a partner/guest invited for a group photo.


Vijay – Referred to as “Vijay sir,” a partner/guest invited for a group photo.


Unkar – Referred to as “mentor Unkar sir,” invited to join the session.


Full session report
Comprehensive analysis and detailed insights

AI-first safety platform


The session opened with Naveen GV outlining Benchmark Gen Street’s three-decade history of digitising environment, health, safety and sustainability for roughly 450 global subscribers and eight million daily users. He explained that the company is now re-architecting its long-standing SaaS platform into an “AI-first” solution, having already identified about 75 distinct AI use cases and planning to “agentify” many of them so that autonomous agents can perform the heavy-lifting previously done by humans [3-6][9-10].


Speaker 1 then demonstrated the first of these agents – the observation-reporting tool referred to in the transcript as both “Jenny AI” and “Genie AI.” Workers can scan a QR code or upload a photo of a perceived hazard; the agent analyses the image, extracts relevant safety details and auto-populates the observation form, eliminating the need for manual data entry [22-33][34-36]. When the image lacks context (e.g., the exact working height), the agent prompts the user with follow-up questions to capture missing information before finalising the record [40-44]. The same agent also accepts spoken descriptions in Hindi, transcribes the audio, extracts the hazard information and pre-fills the observation form for the user to review [45-58][59-60].
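The two behaviours described here – auto-filling the form from whatever the image supports, then asking follow-up questions for fields it could not infer – amount to a required-fields check over a structured extraction. A minimal sketch of that pattern follows; the field names, values, and questions are hypothetical illustrations, not the actual Jenny AI schema:

```python
from dataclasses import dataclass, fields
from typing import Optional

@dataclass
class Observation:
    """Hypothetical observation record the agent tries to fill from a photo."""
    hazard_type: Optional[str] = None
    description: Optional[str] = None
    location: Optional[str] = None
    working_height_m: Optional[float] = None  # rarely inferable from a photo alone

def missing_fields(obs: Observation) -> list:
    """Return the names of fields the extraction step left empty."""
    return [f.name for f in fields(obs) if getattr(obs, f.name) is None]

# Stand-in for the vision model's output on the demo photo:
# two workers at height, no fall protection, exact height unknown.
obs = Observation(hazard_type="work_at_height",
                  description="Two workers without fall protection equipment",
                  location="construction site")

# The agent turns each empty field into a follow-up question for the reporter.
followups = [f"Could you provide the {name.replace('_', ' ')}?"
             for name in missing_fields(obs)]
```

In this sketch only the working height is left empty, matching the demo, where the agent fills most of the form but asks about the height it cannot see in the image.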


Building on the reporting capability, the platform offers a “5Y AI” module for incident investigation. After a hazard is logged, the AI iteratively asks “why” questions to uncover root causes, presenting possible causal branches that supervisors can select and refine. It then suggests corrective and preventive actions aligned with the hierarchy-of-controls framework (elimination, substitution, engineering, administrative) and produces a draft action plan that the human reviewer can approve [61-70][80-90][91-92].
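Mechanically, a 5-Why analysis is an iterated question loop: each accepted cause becomes the subject of the next “why”, up to a small depth. A toy sketch of that loop, with a scripted answer function standing in for the model’s suggested branches (the cause statements below are invented for illustration):

```python
def five_whys(problem, answer_why, max_depth=5):
    """Build a causal chain by repeatedly asking 'why?' of the latest cause.

    answer_why(statement) returns the next underlying cause, or None once
    the reviewer accepts the current statement as the root cause.
    """
    chain = [problem]
    for _ in range(max_depth):
        cause = answer_why(chain[-1])
        if cause is None:
            break
        chain.append(cause)
    return chain

# Scripted stand-in for the AI's suggested branches in the demo.
script = {
    "Two workers at height without fall protection": "Harnesses were not issued",
    "Harnesses were not issued": "PPE inventory was not tracked at this site",
}
chain = five_whys("Two workers at height without fall protection", script.get)
```

In the product the supervisor picks among several generated branches at each step; the loop structure is the same, with `answer_why` replaced by model suggestions plus a human choice.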


Further extensions were showcased: an ergonomics analyser (“Ergo AI”) that processes short video clips of manual handling tasks to flag musculoskeletal risk points that would normally require a certified ergonomist and can summarise its findings into a ready-made report with a single click [100-107]; a regulatory-compliance agent that ingests lengthy legal documents, decomposes them into individual requirements (≈35 clauses) and feeds these into a compliance calendar for operational tracking [113-122]; and the RISC-AI engine that aggregates all observation and incident records, identifies patterns and precursors, and visualises risk heat-maps that combine incident volume with weighted severity scores, thereby delivering predictive intelligence for proactive risk mitigation [124-133].
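The heat map described for RISC-AI plots record volume against a weighted risk score per category, so a high-volume but low-severity category (slip and trip) can rank below a lower-volume, high-severity one (fall from height). One plausible way such a score could be computed – the severity weights here are invented, and the actual RISC-AI model is not disclosed:

```python
from collections import Counter

# Illustrative inherent-severity weights per risk category (invented values).
SEVERITY = {"slip_and_trip": 2.0, "fall_from_height": 15.0}

def heatmap_points(records):
    """Map each category to (record count, count x severity) for plotting:
    count on the x-axis, weighted risk score on the y-axis."""
    counts = Counter(records)
    return {cat: (n, n * SEVERITY[cat]) for cat, n in counts.items()}

# 75 slip/trip records (as in the demo) versus fewer fall-from-height records.
records = ["slip_and_trip"] * 75 + ["fall_from_height"] * 12
points = heatmap_points(records)
# slip_and_trip has the higher count, but fall_from_height ends up with the
# higher weighted score because of its inherent severity.
```

With these weights, slip and trip scores 75 × 2.0 = 150 while fall from height scores 12 × 15.0 = 180, reproducing the relationship the demo describes.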


Across these demonstrations the speakers repeatedly stressed that the AI agents act as digital co-workers: they accelerate routine data capture and analysis but still require human validation, especially when broader context is missing or when final decisions about corrective actions must be made. The need for high-quality input data and continuous human-in-the-loop oversight was highlighted as a prerequisite for reliable outcomes [39-43][209-216][5-9].


Naveen closed the demo by thanking Sundar for his support and reiterated the commitment to deliver a fully agentic platform within the next year, inviting attendees to visit the Benchmark Gen Street booth for personalised AI implementation discussions [135-139].


Humanity, creativity and education in the age of AI


After the product demonstration, the floor opened for a broader discussion on the societal impact of AI.


Speaker 2 used a bumblebee metaphor to argue that creativity, cognition and culture are uniquely human pillars that AI cannot originate, warning that traditional resumes may become obsolete by 2030 as AI takes over routine tasks [148-162][158-163]. Piyush Nangru reinforced this view, describing the three pillars of human capital and asserting that coding is now “table-stakes” while the ability to apply knowledge creatively will differentiate future workers [185-199][190-191]. Shweta Chaudhary echoed the sentiment, insisting that preserving originality and humanness is essential even as AI becomes pervasive [173-176][201-204].


Speaker 4 (government-sector participant) stressed that AI outputs are only as reliable as the data fed into them and that continuous human-in-the-loop oversight is essential for responsible deployment [5-9][209-216].


Ashish Gupta highlighted the need for ethical and responsible AI use in curricula, citing the New Education Policy and government digital-skilling portals as mechanisms for scaling AI literacy across the country [301-330][332-337].


The panel examined implications for education. Participants noted that the shelf-life of hard-skill knowledge is now measured in a few years rather than decades, prompting a shift from pure knowledge acquisition to “applied intelligence” – the capacity to create, apply and solve problems [263-274]. The ENCODE platform was presented as an AI-driven learning network that maps individual interests, offers mentorship, delivers adaptive, creativity-focused learning pathways, and operates under the tagline “Create, connect, collaborate” [440-452][446-454].


Additional partnerships were announced: collaborations with MEC Connect, Nimbus, and the Next Gen Academy, accompanied by a group photo and a brief product demo [384-389].


The discussion also touched on broader societal concerns: bridging the urban-rural divide through AI-enabled services, addressing tax-remittance and employment-matching challenges for diaspora workers, and building confidence through creation-focused education [379-383][258-260].


Unresolved challenges / open questions


The audience asked whether a timeline exists for AI surpassing human intelligence. Shweta Chaudhary repeated the question, and Speaker 3 (the moderator) answered that no predictive model exists, underscoring uncertainty [285-290][261]. Divergent views emerged on the speed of AI’s impact: Piyush Nangru suggested immediate democratisation of learning, while Shweta Chaudhary maintained that human intelligence will remain superior [278-279][173-176].


Other open issues included: (i) a clearer timeline and policy framework for AI’s potential supremacy over human cognition; (ii) scalable strategies to train India’s 1.4 billion citizens-including those without internet access-in responsible AI use; (iii) robust mechanisms to ensure data quality and continuous feedback loops for safety predictions; (iv) concrete curriculum reforms that embed AI, creativity and ethics from primary school onward; and (v) practical solutions for diaspora tax-remittance and employment matching [261][285-290][301-330][384-389][258-260].


Action items


– Deliver a fully agentic safety platform within the next year.


– Continue ethical AI training for educators and leverage government AI-readiness programs.


– Formalise MOUs with ENCODE, MEC Connect, Nimbus and Next Gen Academy to embed AI-enabled learning pathways.


– Maintain the invitation to visit the Benchmark Gen Street booth for personalised discussions.


In summary, the session demonstrated how an AI-first transformation can automate and enrich workplace safety workflows while sparking a wider debate about the future of work, education and human identity. The consensus was that AI should be positioned as an augmenting co-worker that amplifies human creativity, cognition and cultural insight rather than replacing them. Realising this vision will require coordinated investment in digital skills, ethical governance, high-quality data and inclusive infrastructure, ensuring that the benefits of AI are broadly shared and that the uniquely human attributes of imagination and design continue to drive progress [5-9][148-162][263-274][301-330][124-133][209-216].


Session transcript
Complete transcript of the session
Naveen GV

out a long, lengthy form of information for that to be processed much later by another human in the loop, per se, to really looking at how do we get an experiential learning and an experiential engagement where the intent is to keep everybody safe and our workplaces safe as well, and then obviously have AI be an enabler in how we get this into the system as well as processed, and that giving us the right signals for predictive intelligence. So that’s a paradigm shift that we are looking at, obviously, in today’s times, and the evolution has taken us to this stage right now. So we as Benchmark Gen Street have been around the business of digitizing environment, health, safety, and transforming workplaces for the last 30 years, and we work across the world with close to 450 global subscribers, and about an 8 million user base using the system day in and day out for various aspects of compliance, assurance, environment, health and safety, to sustainability and ESG management, to obviously looking at supplier engagement and quality and security as well. So it’s a time-tested, active product, and the challenge for us over the last three years has been really to transform the SaaS-based system that we have today into making it AI-first. So that’s been our motto over the last three years: how we convert all of our intelligence and learning experiences into giving our customers an AI-first philosophy and methodology of engaging with the platform. I think I covered some of this aspect. And in the pipe, I think we have.

I think we have already 75 different use cases in AI that we have available. but I think our next gen is about agentifying a lot of that to deliver the right value for engagement. So with that, I think I’ll invite my colleague Chandan to take a shot at helping us walk through the use cases and really talking through the value proposition of how AI is here to change the way we do stuff. So, Chandan.

Speaker 1

Thanks, Naveen. So now we’ll walk you through, I would say, a story of someone who is working at a work site and how they really look at the different risks and hazards at the work site. So let’s say I am just walking into my work site, and this is something that I see the moment I kind of walk into a construction site. I look at this scene. and I know something is wrong here. I am not entirely sure what is wrong here. I am not sure what kind of safety rules that they are violating. But I know, I get a sense that something is wrong here. Traditionally, in a very traditional sense, what Naveen spoke about, you know, earlier what used to happen is that I am supposed to kind of, you know, go find a form, fill it either, you know, manually on a paper or, you know, find a portal, look at a form and fill it, understand what is the type of risk, the type of hazard that I want to report on.

But now let’s look at what the transformative way of looking into these hazards and risks. So I will go online. What you are looking at is one of the programs that we have. We call it observation reporting. And this is something that is used for engaging people in reporting. Reporting any kind of health and safety concerns that they have at their workplace. What a worker can or anybody for that matter they can really do here is look at let me just I think I am not connected to internet so just give me a second to connect it back but essentially what I can do here as somebody who is using this kind of platform and AI technology I can very easily scan a QR code on my phone or scan a direct photo of something that I am seeing and then look at sharing it with the agent that we have we call it Jenny AI and the moment we kind of share it with the Jenny AI the agent can really process and look at all the different health and safety related hazards that we have and kind of fill that form on my behalf.

So let us take an example. We will look at the same photo that I was showing you earlier, and let us see what that overall process will look like. So, assuming I have already captured the photo here, I will locate that photo, which is in my phone or my computer, and let’s see what really happens here. What you will see here is the Genie AI, which is the AI agent. It will parse through the photo which is uploaded, and that’s what is happening right now. It is analyzing the image input, it is reading through the intent of the input which has been provided, and it has filled the entire form on my behalf.

I did not have to go and tell it or describe the hazard. It says that there were a couple of workers, or two workers, who are working at the site, and they do not seem to have any fall protection equipment. Now, of course, we do understand that the AI is just looking at the photo. It does not have broader context at this point in time, which is where it will also ask you about certain things that it is not sure about. So right now, while it can tell that the people are working at height, it’s not certain of what the height is that they are working at. And it will ask you some follow-up questions that you can then answer.

But this is where it will help you, you know, update most part of your form, if not everything. Now let’s, you know, assume that I don’t have a photo. I am there in the site and I just want to kind of go and report something that I saw. And I do not, I am not very fluent in, let’s say, English or the corporate language that we use. So I want to do it in my own, let’s say, language. So I am going to use an example. I am going to speak or describe what I saw in Hindi. And let’s see how the agent will respond to that. So I am going to say, I am going to speak in Hindi.

So what I did just now, I spoke in Hindi and I described that I saw, you know, a bunch of people working at height. And now let’s see what’s happening here. So the AI assistant, it is analyzing the voice, what I spoke in Hindi, and it is kind of, you know, trying to put that into the form, into structured data for me to again go back, validate, and then submit it. So, you know, how it really helps is, let’s say I do not have the safety inspector’s lens or competences, but I still want to contribute and I want to report things.

This AI can really help you, you know, put things in, so you can get the perspective in the right structure and get the data into the system. Now let’s say, you know, I have reported this: I saw two people, they were doing something which was not really safe, and it was reported into the system. What’s next? The next step is for us to really understand why they were doing it, and that’s where the incident investigation comes into the picture. It is a process for the industry to look into what really happened and then understand the root cause behind it. And that’s something that we do here using the other AI that I want to talk about, which we call the 5Y AI analysis. 5Y is nothing but a way of looking into what exactly happened and why it really happened, and we keep asking the question, you know, as to why it happened. So in this example, you know, two people were working at height, they were not using any safety equipment; then the question would be why they were doing that. You know, were they not trained about it, or were they not really, you know, given that safety equipment, right?

So, this is how you look at all the different reasons which really contributed to that particular incident. Now, in this case, typically when we do it in a very traditional manner, what it needs is, you know, multiple people who have years of experience, and they collaborate. These are, you know, cross-collaboration teams with experience. And then they look at all these reasons. But in the absence of that kind of experience, this is where, again, AI can be used as a digital co-worker. So, in this case, the AI is helping me kind of articulate what really happened here, and then it will support the entire process of conducting a 5-Why analysis. So, the moment I click on suggest, it kind of opens up a separate form, takes into account everything that has been reported here from a context standpoint, and the moment I click on…

generate why statement, it will give me different branches, different options, which I, as a practitioner, as a supervisor, can really pick from and then conduct this analysis. And this is the process that I will kind of, you know, go and repeat until I reach that final why as well. So this is again, like I mentioned, the idea here is that even if someone does not have that kind of experience, they can use the LLM, the large language model, which is, you know, trained on the latest datasets. And that’s something that you can really use to, I would say, substitute for the experience part of it. Now, let’s say we have investigated this.

And now we need to also figure out what we do to really, I would say, prevent the recurrence of a similar incident, right? Two people were standing on a drum, they were doing something they were not supposed to do. We investigated it, we understood that, you know, maybe they were not trained, maybe they were not given the right kind of equipment. So, now we need to look at what should be done to really, I would say, correct that. Typically, when we talk about corrective preventive actions, there are different controls that we talk about, right? Not all controls are the same. There are certain controls which are more structured, more powerful. We call them, you know, we identify them as the hierarchy of controls.

So, in this example, when someone is working at height, the first type of control that someone would look at is elimination. Is there a way we can eliminate this risk altogether? If not, can we substitute it with a less hazardous risk, right? Instead of having two people climb to the height, can we do it through, you know, maybe something else? Maybe we bring in a forklift, or maybe we bring in a scissor lift, and we do that activity accordingly. And then we talk about the engineering control, and then the other administrative controls that we have. So many times what happens is, you know, when people are thinking about these controls, they don’t really have a very structured thinking in identifying these controls.

That’s where we have this option or this AI agent which looks into the details and then across the hierarchy of controls which should be applied, we can look at generating those different type of controls. And that’s what you are seeing here. It is giving me a very good first draft on what are the things that I should be doing for preventing the recurrence of similar observations, similar incidents here. So just to kind of recap, this is how AI can really help people in not just understanding the context of what they are seeing at the site from a risk perspective, but also look at understanding the root causes behind it and also come up with the corrective preventive.

actions without, I would say. you know, of course, it’s not again, a replacement of human, but it is a digital co worker that you can have in your pocket and which can really guide you through the entire process that we have. Now, let’s look at the other example here. I think we spoke about fall from height and the risk that you saw there is very, very evident, right? You saw two people who are standing at, you know, maybe three meter height, and there is a risk of them falling and you know, sustaining a fracture. But there are other risks when you work in industry, which are not so visible. And one of those risks is ergonomics risk, risk, right?

It depends on, you know, what kind of activity you’re performing, right? What type of movement, the body movement, the manual material handling that you’re doing, and it creates a strain on your shoulder, on your backbone, and so on and so forth. Typically, when industries, you know, run these programs, they need people who are actually trained on these guidelines; some of these are called the REBA and NIOSH guidelines, and that’s where you need someone who is a certified ergonomist to really look at and identify those hazards. If you are at a remote site, if you do not have a certified or trained ergonomist, this is where the AI can be really helpful and powerful. All you need to do is take a video clip of that particular activity which is being done, and then you run it through this AI agent, and it can really help you identify all those risk points that you have.

So in this case, what you will notice here in this video is a person who is standing next to this conveyor, and his job here is to pick these boxes manually and place them back on this conveyor. So it might look, you know, like a very, very simple activity, but if you keep doing this for one hour, two hours, six hours, eight hours a day, there are a lot of risks that you are exposed to from an ergonomic standpoint. So if I just kind of run this video, you will notice that the Ergo AI agent is looking at all those pressure points and trying to kind of identify those risks which you are not able to identify unless you have gone through that rigorous training of being an ergonomist.

Once it is done, you can also look at converting it and generating a quick report here. So the moment I click on summarize, it takes all those learnings, those analyses, and it creates a ready-made output for me to kind of go and share with the relevant people. So this was an example of, I would say, ergonomics. Now, let’s also look at another example. And I think Naveen spoke about how we are kind of transitioning from having AI as a standalone functionality or feature to now looking at the concept where the AI functionality works in the entire ecosystem and focuses on autonomous action as well.

So it’s not just about the inside, but it is also about taking action on behalf of human, of course, within the certain defined guardrails that we have. The example that I’m showing you here is of a legal compliance. Typically, when you are in an industry, you need to go through multiple type of regulatory compliances that you need to report on. I’m taking one such example of a regulatory requirement from one of the steel industry and feeding this information to this particular AI. What it will do is look at consuming this entire information and it will then deconstruct it into different requirements that we have. And this is where you will see that the agent here has deconstructed it into almost 35.

Individual requirements that the industry is supposed to comply with. At a click of button, we can also take all of these requirements into a tool called compliance calendar, which is where these requirements can really be operationalized. Right. You can, of course, interact with this agent and, you know, ask specific questions or give specific, I would say, directions. Also, in this case, I’m asking you to do a quick synopsis also, as well as taking a quick way of auditing this entire activity. So this is where the single agent is kind of, you know, connected and working with multiple of the programs that you have within that defined ecosystem that we have. Now, the last piece and, you know, one of the most important piece that we wanted to share with you is all of these individuals.

Individual AI components that we saw, they tell you, they process a single record and they tell you a story about that particular record. but what if we want to understand the overall trend and the story that all of these data points together they are telling us that is where the RISC -AI comes into picture it looks at processing all of the records that you have each and every record which is logged into the system across different programs whether it is observation whether it is incident, it is kind of processed and that is where it helps you identify the patterns the trends of precursors things which can go wrong so I think again going back to the example that Naveen used of Bhopal there were of course many many precursors before that incident happened in terms of maintenance in terms of safety culture but all of them probably went unnoticed so this is where a system like RISC -AI is extremely powerful extremely helpful which really helps you see kind of trend and help you take a preventive action also The other aspect that it can also do is help you visualize the different kind of risk that you have in your organization.

So using this chart, and let me just refresh it for a second. What you can do here is use a mathematical model to assign a severity to the different types of risk in your organization and visualize those on a heat map. On the x-axis I have the volume of records captured in each risk category, and on the y-axis I have the overall weighted risk score. To take one example: for slip-and-trip risk, you will see that the record count is on the higher end; there are almost 75 records tagged to this category.

But its weighted risk score is comparatively low compared to some of the other categories here, such as fall from height, because the score also takes into account the inherent risk of that particular activity. So this is how it can paint a very powerful picture of the areas you need to focus on from a prevention standpoint, and also provide a bit of predictive intelligence about where you should focus next, both across the different parts of your workplace and across the different kinds of activities captured in the system. With that, I will now invite Naveen back on stage. Naveen, anything you want to add from a closing standpoint?
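The talk does not spell out the scoring model behind the heat map. A minimal sketch, assuming the weighted risk score simply blends record volume with an inherent-severity rating per category (the blend weights, the 1-5 severity scale, and the sample counts below are all illustrative assumptions, not the vendor's actual model), might look like this:

```python
# Illustrative "weighted risk score" per category, as described in the demo:
# x-axis = record volume, y-axis = weighted risk score.
# Weights and severity ratings here are assumptions for the sketch.

RECORDS = {
    # category: (record_count, inherent_severity on an assumed 1-5 scale)
    "slip and trip":    (75, 2),   # many records, lower inherent severity
    "fall from height": (12, 5),   # fewer records, higher inherent severity
}

def weighted_risk_score(count: int, severity: int, max_count: int) -> float:
    """Blend normalized record volume with inherent severity."""
    volume_term = count / max_count            # 0..1 share of the busiest category
    return round(0.3 * (volume_term * 5) + 0.7 * severity, 2)

max_count = max(c for c, _ in RECORDS.values())
scores = {
    cat: weighted_risk_score(c, s, max_count)
    for cat, (c, s) in RECORDS.items()
}
```

With these assumed weights, the high-volume slip-and-trip category still scores lower than fall-from-height, reproducing the point made on stage: record volume alone does not determine where prevention effort should go.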

Naveen GV

Thank you, Sundar. That was great. A few friends came up to comment on the demo, which I think they could relate to a lot more from an industry standpoint. But overall, as Benchmark Gensuite, our journey this year is going to focus on some of what Sundar showed: autonomous agents doing a lot of the heavy lifting previously required of individuals who had to engage with the platform and type in all of the required details. So with that, I hope we have done some justice to helping you understand how AI can transform a function like safety. Look out for us over the next year or so as we make it a completely agentic platform.

So with that, ladies and gentlemen, thanks a lot for your time. We do have some time for questions if you have any; otherwise, we have a booth back in the room where we can have more personalized conversations if you are specifically interested. Thank you. Any questions, please let us know. All right, thank you again. Bye.

Speaker 2

A bumblebee cannot fly, but it still does. The thing is, when this statement was made in the 1930s, we understood very little about aerodynamic design. By the 1980s and 1990s, more research had emerged, and we realized that a bumblebee can truly fly because its wingspan and body weight support a viable flying mechanism. That is what design does. And AI only understands what we know of design today, not what we can create with it tomorrow. Creativity is today's human advantage. AI can generate; AI cannot originate lived experiences. Context, culture, emotion, meaning and human intuition matter more than ever. Good design is not about a good drawing. Good design is about good solutioning.

Good design is not about beauty; good design is about good solutions. And good solutioning needs a good understanding of design. Hence I make a very provocative, bold statement today: resumes are going to die by 2030. Because the skills you may have learned, that I may have learned so far, may become irrelevant. AI will probably be able to do everything faster, better and at much lower cost than us. So then what is the one skill that remains extremely important? Design and creativity. The workforce we need five years from now, and I am not even making the bolder statement of ten or twenty years from now, is one that can think across disciplines, collaborate with machines rather than compete with them, communicate visually, learn continuously, adapt to contexts and cultures, build and shift fast, and adapt without fear.

And hence, being human becomes your advantage, even in the age of AI. And the way you become more human in your solutions is through the essence of imagination: creativity and design. Good afternoon, I welcome you all to this session hosted by CODE. I now welcome my colleague Garima, who will invite all the panelists, and we will continue the discussion. Thank you so much. My name is Umang. Thank you.

Speaker 3

And we are proud to have Dr. Shweta Chaudhary, founder and director of CodeEDU and host of this session, a leader working at the intersection of creativity, learning design and future-ready education ecosystems. Thank you all for being here, and I would like to thank Dr. Shweta Chaudhary for her time.

Shweta Chaudhary

Hello friends, and thank you for being here with CODE, the Centre for Originality, Design and Expression. Why do we need it? I am thankful to Umang for setting the stage: yes, the bumblebee can still fly. So what is it that, in the age of AI, will keep us the way we are, human? I am privileged to have with me an august panel whose members have been through various walks of life: as students of esteemed institutions; as workers, colleagues and administrators in institutions of high repute and in public administration; and as founders who have struggled, evolved and built systems that handle talent at a larger scale. So let us hear from them what it means to them, and why and how human intelligence will stay, should stay, and has to stay in the age of artificial intelligence.

I would prefer to begin with sir from Sunstone. Puneet here? Piyush here, sorry. Piyush sir, what do the words creativity, cognition and culture mean to you as a person? That's the best way to introduce ourselves; the rest, ChatGPT can tell about us. But this, we asked him today, because it can't tell.

Piyush Nangru

I think these are all pillars that define any human being. Whenever we talk about Viksit Bharat, we might have things like GDP coming to mind, but at the core of it is human capital. So whether it is us personally, or us as a nation, creativity, cognition and culture will always be the key pillars. As the founder of a tech school, I can tell you that today coding is no longer a skill; it is table stakes. It is how you apply it, how you build solutions with it, that is where creativity works. You can ask the system anything: write me an essay, create this for me, give me these points.

But if you are not prompting the system again, you are not challenging your cognition; you are not challenging your thinking process. And thirdly, culture. We have a 5,000-year-old heritage to live with, more than 22 languages and innumerable dialects. Being able to take that along, to understand those nuances, is also very important and not to be ignored in this AI-led world. So no matter where this goes, and it is quite evident that AI is shaking up a lot of things and will shake up more, the power of creativity, cognition and culture is here to distinguish what is human-led from what is machine-led.

Shweta Chaudhary

Perfect. So it's not about the countries or continents fighting over who owns it; it's about the human beings of those countries and continents who will own it. So let's keep ourselves intact and keep our humanness. Thank you, sir, for that take. Next, let's hear from sir, who comes with a background in public administration. Sir, how do you think creativity, cognition and culture keep you intact in this setup, and how do they add value in your ecosystem?

Speaker 4

Thank you very much. First of all, I thank CODE for organizing this beautiful session; the passion they displayed in insisting that I come here is laudable, and that's why I am here. I would also like to say that I liked what Umang said at the beginning very much; he presented it beautifully and briefly, and I told him so personally. Now, let me tell you why creativity, cognition and culture matter. I was going around the stalls — I also came yesterday for something else. This floor has a lot of government ministries, so I was trying to find an officer from any ministry; I didn't find one except at the Ministry of Skill Development stall, but I met a lot of youngsters, consultants and other people. I asked them: are you worried about AI, or are you happy about AI? More often than not, they said, "We are worried, sir." Across the spectrum, I try to meet a lot of people, talk to them and engage them; it's great fun and great learning, actually. And I found that there is a lot of fear about AI, and that people are actually not very clear about what kind of changes AI will bring. In this kind of fear and anxiety, we should not forget that our originality, our USPs as human beings, will matter much more in the world of AI than they do today.

Because AI is only what the data tells the tool or the bot to give us. If the quality of the data is not right, if the data is not reliable, the results will not be reliable. One of my friends said: sir, AI is not like asking a vendor for an IT solution, where he delivers the software, installs the computer, and leaves. It is a continuous engagement to improve the results, because the AI bot keeps improving its own results across different habits, skin colors and languages. I think that will matter much more in the future. Thank you.

Shweta Chaudhary

A very beautiful take from sir. The most resilient thing, among all the crowds and all the stages that are set up, is us; what differentiates us is our originality, and that is to be kept intact. Coming now to a very beautiful solution, or I would say innovation, in education, from a university that is itself formed on a very innovative format of education. I would request sir to give his definition of creativity and cognition.

Ashish Gupta

Yeah, thank you for this opportunity. As an educator, we have jumped into this new term called the orange economy. We have seen oranges, but we had not seen an orange economy; this new terminology defines how creativity, cognition and culture merge together to define the human being. I represent South Asian University, the first university in the world set up by the SAARC nations, where students come from all eight countries. People come to my university from Nepal, from Afghanistan, from different destinations; we represent Asia. So within Asia, are we the same in cognitive thinking? Within Asia, are we the same, sharing the same culture?

Culture to what extent? The same culture, or are we different in culture? When we say creative, are Indians more creative than the neighbourhood, or is the neighbourhood more creative than Indians? When I look at this international perspective in my institution, I engage with more students and critically evaluate what each does better. As an educator, when I say we are in the age of AI — we are already immersed in it. People have already started using AI; students are already using AI for different tasks. So when a student comes to me and says, sir, I have done this work, the first thing I ask is: show me the prompt. Because if the assignment or the report was done by GPT, it is not your skill. Your skill is how you define and redefine, how you build the ability to understand the code; coding is not a new thing now, and a lot of training already happens. But what remains is us as creative human beings: how you apply your creative brain. You can't beat technology, but in my personal opinion, technology always assists you; it brings efficiency, it supports you, but the decision-making always lies in the human brain. Are we using the technology responsibly? Are we using it ethically? There is so much concern; as an educator, I have to train my new human resources. And I believe this new orange economy will, in time to come, give a lot of opportunity. If you feel your job is displaced, I am sure some new job will come; you have to learn, you have to survive, and you have to adapt to the change in the time of AI.

So that’s my perspective as an educator.

Shweta Chaudhary

Yes, sir, a beautiful perspective, in which he says that culture, cognition and creativity bind us as Asia. That's a very beautiful definition; it separates us from the Western world and tells us that this is what keeps us together, and that what keeps us going will be cognition. Thank you, sir. Coming to Satya sir: an IIT student, then a technical field of learning and engineering, and then administration. All these numbers and domains, these backgrounds, can sound very routine, even boring. So how do creativity, cognition and culture still keep you going?

Speaker 4

Thank you so much, Shweta, and thank you, panelists; I am glad this question is being asked. Let me describe the way we are sitting here in this hall. After every hour, this hall changes: the audience, the speakers, everyone. Yet this same standard setup is provided for all types of stakeholders, global or otherwise. I would like to answer on all three counts through it. The creativity the hosts put into it means that, every hour, what happens here changes; the cognitive inputs and discussions are different each time, raising the level of the discussion and its reception by all the stakeholders; and in the same manner, the culture on both sides, speaker and audience and host, is literally different every time. So, to answer the question: it is very subjective. And with respect to AI: artificial intelligence will always be artificial, and humans, with their inputs of creativity, cognition and this culture, will always surpass it. I agree with Professor Ashish that the right prompt and expertise in that particular field will always prevail. That is why, as I was discussing with sir, everyone is scared, children are scared; I say there is no need to be scared of these things. In fact, with their inputs, their use of artificial intelligence will be very good. So there is nothing to worry about.

Shweta Chaudhary

So nothing to worry about, friends. The walls are going to remain the same; we are the emotions within those walls, and that is what will make the difference. So let's keep the emotions and the humanness intact. I am happy to say that all my panelists here strongly believe human intelligence will supersede artificial intelligence. Let's turn to the audience: does anyone feel that artificial intelligence is going to top us, and that humans will stay behind? Or are we all on the same page? Let's start with you, audience — any take on this? Human intelligence or artificial: which comes first? Do all agree with the panel? Yes sir, please.

Audience

Do you mean to say there will be a timeline after which human intelligence will cease to supersede? Is it a time-bound position, basically: as AI improves, is there a timeline, after a certain number of years, when AI will be better than human intelligence?

Shweta Chaudhary

Please tell us your good name. So Saurav asks whether there is a timeline to it.

Piyush Nangru

I think there is certainly merit to the line of argument you are making. As AI becomes more and more intelligent, our education system is continuously under stress; it is being tested, a stress test happening all the time. What is happening is that the shelf life of hard skills is really diminishing. Earlier, I could hold on to a skill and take my whole career through with it, maybe 20 years; now it's a matter of a couple of years, three years, four years. So the shelf life of hard skills is really shrinking. But where we need to focus is not only on making. Earlier we used to say: don't just learn, make things.

Now it is not only about making things; you have to understand the meaning of it, you have to apply it. You could say that not just artificial intelligence but applied intelligence is where humans are really going to be. I can tell my students to code; they can make a chatbot. But can that chatbot tell, let's say, a farmer in MP whether he will be able to sell at a good price or not? The application of it, the solutioning of it, will matter. As for the timeline, there is no easy answer right now, but that is at least a direction we can all take, knowing we can expand it further.

Shweta Chaudhary

Okay, thank you. Maybe that's one of the reasons we all talk about fear. Is there a timeline, or will resilience carry us forward? Will humans become smarter, or will artificial intelligence? It is a question this generation is going to see and live through. So let's keep our fingers crossed that we will remain the smarter ones, as the panel says — though many of the young faces sitting on that side don't even want to answer, because they are waiting to tell us that every next generation is the smarter one.

Speaker 4

Yes. So actually, as I was mentioning, I was interacting with a lot of youngsters, and the overwhelming feeling I got from all of them, apart from fear, is that everybody is very unclear and unsure about how AI will shape the world in the future. He asked about a timeline, which could be 10 years, 20 years; I don't know how many years — even I don't know. None of us is sure how things will actually unfold as AI systems become smarter and data becomes stronger. So there is a lingering fear in everybody about what the impact will be as it unfolds, because we actually don't know.

And there are no mathematical models that can predict how things will unfold. But as an administrator, as a public policy person: if somebody asks me today a simple question — what should be the purpose of having AI — I would say this. When I go to a village, I find a lot of people sitting without any work; they don't have money in their pockets, they all look for some kind of employment, and they are not well educated — they were very poorly educated in the village school. A lot of absenteeism happens in government schools in rural areas; people don't go, as all of you know. So for me, if I talk about AI in education, I should be able to use an AI bot or AI tool to examine a person's background very quickly and find out what skill I can best give him, so that he or she can fend for himself or herself and have a decent job. And I don't think of anything bigger than this, because there is an army upon army of people who have no job.

And that scares me more than AI, because in the future, if so many people in such a big country have no work, it may lead to social imbalances and problems. So for me, AI should be able to — and AI actually does this — in a class of 50 or 100 students, with the help of AI, we can find out each and every boy's and girl's learning abilities, how quickly they can learn, and then design programs for them, which a single human teacher cannot do. And anything we do with our hands — plumbing, repairing a vehicle, any hardware kind of work — AI cannot do.

It has to be done by hand. Robots may do it one day, maybe, but when will that time come? Again, I don't know. So these are my takes as far as our country, or South Asia, is concerned. Today we have neighbours — Nepal, Bangladesh, Sri Lanka, Pakistan — and we are sitting in this neighbourhood; as we have seen in the past, if any one of the neighbours is disturbed, the country gets disturbed. So in our own interest, and with so many people all around South Asia, if people can get some kind of job suited to their abilities with the help of AI, that would be the best application.

Shweta Chaudhary

So friends, yes, that's an important take: understanding where human capital will go, what will be called human capital, and how you keep it sustained. Yes, ma'am?

Audience

So I want to know: in a developed India, for us youngsters, 18 to 25, it is very easy to search for things on YouTube or ChatGPT. But what about our parents? What about our kids? What about people who are two or three years old right now, learning and depending on ChatGPT? My parents are afraid of ChatGPT — what can happen with AI, how can AI be used for fraud? When are we going to teach them how to use it? There are 140 crore people; when will all of them be trained to use AI?

Shweta Chaudhary

I would just ask: how did you teach them to use Instagram? They haven't fully learned yet; people are still afraid. The easier it gets, the easier adoption becomes. It's all about intent; education requires intent, for sure. If you have the intent, then first your mother is your teacher, and then the technology trains you to get there. We will take this question up again with our panelists, and also discuss human capital per se: what is the human capital that is scaring us, when will we be able to train it, and how will we build this human capital going forward? So sir, as a professor, how do you look at professors becoming better with AI?

Or what is your take on the human capital of educators?

Ashish Gupta

Yeah, so this is an educator's dilemma: to what extent do we support the use of AI, and, more importantly, its ethical and responsible use? The question was, when do we have to teach people to use AI? As educators: do we start from school, from college, or in higher education? The most important foundation comes from school. If you look at kids now, they are gadget-friendly: they have tablets at home, IPTV at home, mobiles at home. The kid has WhatsApp too, operates WhatsApp on his own, and has his own school group.

Teachers post topics in it. The kid asks Meta AI: I want an article on this topic, of 100 words; Meta writes it and gives it to him, and he copies it into his notebook and hands it in at school. This is not actually the challenge: people will learn — by default, by training, and through pressure, in high-performance work environments where you will have to learn that technology. The problem in school is how much of our ability we actually use. That is cognitive thinking. Has the student used as much of his brain as the teacher intended when the assignment was set?

He just copies it into Meta, or into GPT. He does not make the effort we used to make in our time, when we searched by ourselves, opened the book ourselves, made the notes ourselves; he does all the work in GPT. So the question becomes: to what extent does cognitive skill remain strong in the market? Learning is not the challenge. The Government of India is also taking a lot of initiative through digital skilling: several portals have been created where any citizen of India can go and register. In fact, just yesterday I registered myself for AI readiness.

How can I become AI-ready? I know something of AI, but I may not be perfect at using it; that is my ability to learn fast. The government has launched such programs through the digital skilling portal, where it wants to give its citizens that training for free. A second perspective is the New Education Policy: the government is constantly trying to re-look at and rework the policy. But again, the challenge is: are our schools ready for AI training? AI needs infrastructure; we need AI labs. So people are willing to learn — there is no resistance to learning — the question is how to support it. And more importantly, I always emphasize the ethical and responsible use of AI.

One more example. A few days ago, people started creating Ghibli-style images of themselves. Who taught them? Millions of images were created by families, housewives, homemakers — cooking, restaurants; everyone put the image as their DP on WhatsApp. At that time, we didn't think about privacy — where will that photo go? We didn't think. Instead, we thought about the creative images being created. Now people think: let GPT create a caricature of me; ChatGPT will create whatever you ask. Who taught us? We learned from one another: a friend told us, YouTube told us, ChatGPT itself taught us how to create a caricature. So indeed, the human capability to adapt is an important aspect.

Audience

Hi, thank you so much for these valuable inputs. The government is there, universities are there. My question is: how are we thinking about rural areas? There are already many universities and colleges, but even after 12th, first-year, second-year and final-year candidates are not getting real knowledge of what is going on in AI. People don't even know what is to be studied in AI; it is not clear yet. How does AI work? How do AI engineering, feature engineering, data science and data analytics actually work? Urban areas are fine — but how will this reach rural areas?

I belong to a city, Murtujapur, whose name is barely known. There are children there who want to learn but don't have money. So what is the government planning for them? The universities are fine, everything is best — but how can they shape these underprivileged candidates from the economically backward class?

Shweta Chaudhary

So, yes, please. That's a very beautiful thought, and in a country like ours, where diversity is huge, we need to understand that not everyone has the same access. We would also like to hear from the panelists sitting on both sides of me: one from the government, and one who has developed education over a decade and seen India grow with it. So sir, what is the systematic approach, and what is the process, to make all of us understand that this is new but will not stay new? When the efforts continue, we reach somewhere. So sir, take a portal like GeM, the Government e-Marketplace, where you have been part of the system from day one of its inception until today. How should people trust that a government initiative will reach everyone and become part of everyone's kitty?

Speaker 4

Any person who has a GST number can onboard and register on this platform and participate in the government's procurement of various products and services. And to validate and recognize them, as madam asked: all vendors with services and products can be onboarded as reliable sources; we properly do vendor verification, and after that we onboard their catalogue. If the government is to purchase those products or services, the tender process can be done there, or they can be purchased directly through the marketplace. So, if I am taking your question correctly: whatever services and products are built, including those of AI entrepreneurs, the government can procure them through the right channel. And yes, there is a system that has to start somewhere. When GeM began in 2016, we would not have thought we would reach this level, where the smallest manufacturer or vendor — from rural areas, urban areas, every part of the country — can become part of a buy-and-sell platform of that scale.

Shweta Chaudhary

So it takes time, for sure: technology integration and adoption, and making it part of the mainstream, is a process and a journey. Let's hear it from Piyush. Piyush, what do you say to the question, given such a diversified scale?

Piyush Nangru

So I think AI is in itself a very big democratising tool for learning. What do we need in order to learn? Self-motivation, and some medium — someone or something that can teach us and explain things to us. Now that part of the problem is solved by AI. If you have the motivation — the assumption being that, as we all know, the internet is everywhere now, specifically in tier-3 towns and rural areas — then anything and everything can be learned, and anything and everything can be built. You will see, as a trend, a lot of solopreneurs, single-person small setups, because you can build the website yourself, create the creatives yourself, write the marketing content yourself. It is more empowering now. So I think AI is only more democratising and more empowering for rural India: you can build things of your own, and you can have aspirations that earlier needed a lot more resources, which are now possible.

Speaker 4

I would like to give your answer as a parent. I have two daughters. My elder daughter is at NID Ahmedabad — and in creativity, NID Ahmedabad is quite good — and my younger daughter is in 11th class; she is a sports player. I asked my elder daughter how she is going to deal with AI. She said, I don't have to "deal" with it. I want to tell you the answer she gave me, to show how the children are thinking. I asked: won't your curiosity be reduced? Won't the professor be finished too? She said, I don't need to be afraid. I am studying; I will learn the subject first. Whatever tool is made, it will be there for all of us.

"I have to understand how to use that tool, and for that, unless I study the subject well, unless I understand the subject well, there is no benefit. So I don't have to fear what is happening." And my younger daughter, because in her school, as you said, sir, there is attention to the responsible and ethical use of AI. Once I asked her, how do you make your notes these days? She said, in our school a lot of the children use ChatGPT. I said, you don't use it? She said, "If I use it, I won't be able to use my own mind." That was her answer. So, in addition to what the professor has mentioned, we have to educate them in the ethical and responsible use of such tools, not limited to AI.

Ashish Gupta

Perfect, sir. I would like to add one answer to his question. The rural-urban divide has always been a challenge in India, right? I remember when the internet launched around 2000, the perception was: how far will the internet go? First to the cities, the tier-1 metros. Now you see the penetration of the internet: today the internet has reached the villages. That infrastructure was created; the government scaled it up and made the telecom companies strong enough that they could reach that market and offer data at an affordable rate. Today our rural population plays a significant role in terms of the economy, in terms of business, including small business. And I agree that education is a fundamental right for students who live in the far-flung areas of India. AI also needs that infrastructure; AI also needs that much time to scale up. The way the internet penetrated slowly, AI too will assimilate slowly into school studies, into courses and curricula, into labs. Things are moving. We should always look at where technology comes from, where technology is made, and how technology diffuses. Technology sometimes does not go immediately to every place; sometimes there is a systematic movement of technology from one place to another. One more thing: should we fear?

It is better to learn; we should not be afraid. Whatever challenges us, we should adapt to it. The better version is this: whatever challenges you, it is better to look at it, adapt to it and find new solutions, for how we can bypass it, how we can surpass it, how we can compete head-to-head with the technology. Maybe there was a question by Saurabh: that with time technology will become so strong that it will surprise human beings. Maybe it could. But it is you, as a human being, who feeds the LLM. When you feed the LLM the possible situations of a cricket shot, then ChatGPT will tell you which shot to play, which is unique, because you have to feed the LLM first. How else would the LLM know? The large language model has to understand what the human thinks first; then the LLM works in the background. For the answer to be correct, the question has to be correct, and the question has to come from the audience, from the people. How do we take it forward?

Audience

Yes, sir. Namaste. Thank you for a wonderful session; I joined in late, as it was a little far from the other one. For India to become the India of the world: we are talking of a 4 trillion economy, 7.3 trillion in 2030. It could easily be 10 trillion if we start trading our people, and we are not trading our people. So, if any of you are connected with policymakers: for Indians who travel outside India, suppose I travel to the UK, work in the UK and pay 40% tax in the UK, then 33% of that 40% should come back to India if I want to maintain my citizenship of India. We will become a 10 trillion economy just by this. If Donald Trump can play everything, we can also play. So that's question number one, if anyone has any comment on that. Question number two: someone mentioned motivation for education and the medium for it. Motivation comes and goes. I would like to hear from all of you on the power of confidence: does the stereotyped education format that we have today enhance confidence, or does it kill confidence?

Because I meet so many students across the country. Last week I was in Hubli, I was in Belgaon; I travel to the depths of the country. And you will meet students who are fantabulous. People say we have Gen Z right now; I think we have Gen X, Gen Y and Gen Z among 25-year-olds, depending upon the geography they are in. So, fabulous students, but low on confidence, because they are not from the city. So what does the education system do to multiply and enhance this confidence? If confidence increases, everything else follows.

Piyush Nangru

So I'll take the second question. I think as an education system we have to move towards a more inclusive one: from learning, to creating, to applying. And when we create, when we make things, there is a different kind of dopamine release, and it also gives you confidence: I have built a working prototype. Right now the education system, by and large, is not really supportive of creating things; it is about learning things. And today's discussion, the part you joined late, was that even this is not sufficient now. We not only need to create, we also need to apply. Is it useful? Okay, this thing exists and it works.

But is it useful for someone? That is the next level, because with AI coming in, this is where we need to be. By and large, to answer your question, I think confidence really comes by creating, not just by learning; confidence is the key. So if we have more and more creating opportunities, building opportunities, for our students across the board in education, across programs, right from K-12, and I am not talking only of higher education, then more and more creating is what we really need to instill.

Shweta Chaudhary

Thank you, Piyush. We would say that, yes, there was a time when knowledge was the source of confidence: I know it, that's why I am confident. But today it is: because I can build it, I have the confidence. So, friends, we are moving from the age of knowledge to the age of cognition, from the age of knowing something to the age of creating something. That is what we are here to discuss: it is not just artificial intelligence that is going to take us forward, but our collective cognitive ability that is going to carry Viksit Bharat, and all of us, forward. Let us keep that layer intact: the context, the culture, the creativity that is ours. With that, friends, I thank my panelists for being here with us. The floor will remain open for all of us to discuss; we also have a team here for a product demo, and a few of our friends joining hands in taking creativity and cognition forward in education. So I thank all of you for being such great listeners, and please stay with us for the next part of this session, the product demo. I thank my panelists and request a group picture with all of us. Can we have a group picture? So, friends, this is the inaugural unveiling of one of our products. May I request the team to come forward: Ajay Rivalia sir, Viplav sir, Nandaji, Garima ma'am, Mansi, and Vijay sir, may we please have you here. Yes, we have discussed creativity and cognition; this is going to be the tomorrow, and this is something that is going to keep all of us intact. We are the torch-bearers of it, and we present to you a product which is going to make it better and educate all of us for 
creativity and cognition. May I request our mentor Unkar sir to please join us. Thank you, friends.

And now put your hands together for a product demo, which the AI-led education platform brings to all of us.

Speaker 5

…becomes unique, adaptive and future-ready. Powered by machine learning, LLMs and agentic AI, the platform intelligently maps growth, interests and creative potential. The platform fosters mentorship, discovery and meaningful skill development. From recommended courses to resource hubs and spotlight mentors: ENCODE, the creative learning network. Create, connect, collaborate. Shaping personalized journeys for the creators of tomorrow. In a rapidly evolving world shaped by AI, creativity, cognition and collaboration are the new foundations of learning, where the creative learning network meets the future of intelligent education. This is not static learning; it is dynamic, responsive and continuously evolving with the learner. At the AI Impact Summit Bharat 2026, learners, educators and innovators engage with ENCODE's live ecosystem. They can explore domains, interact with creative pathways and experience how technology and creativity converge. So: design the world you want to grow in.

A philosophy that places creativity, exploration and individuality at the core of education. With an intuitive interface and curated experiences, ENCODE enables learners to discover, engage and progress at their own pace.

Shweta Chaudhary

Thank you. Thank you. May I please request a statement on how you will take this forward alongside the work you are currently doing: how will you add it as a creativity layer to your systems?

Piyush Nangru

No, I think this is what will separate a graduate from a real-world professional, because we really need this layer. Beyond it, everything rote, everything monotonous, is going to be taken up. You know, one gentleman asked about the timeline; that question is pretty real, and I think partnerships like these will really help us future-proof our students. So, really looking forward to it.

Shweta Chaudhary

Thank you, sir. We also have our team from MEC Connect with us, joining hands from across the borders: The Next Gen Academy. Thank you.

Speaker 6

So, we are proud to contribute to this academia-industry partnership by bringing design-oriented courses to our coding students. Our focus is that our students should not only learn coding; they should also have an understanding of design thinking and digital thinking, and apply all of this in product development. We want to make them entrepreneurs. So this product is definitely going to help us a lot. Thank you.

Shweta Chaudhary

Thank you, sir. We have a strong, strong education partner with us called Nimbus. Yes, learning is already there with the academic institution. I found this entire presentation really great. One of the things I saw here is that, along with learning, accessibility should be there, and we have solved the problem of accessibility. With CodeEDU, I believe, collaboration, providing next-gen courses, and exploring, connecting and creating a network with developers will definitely help the students become industry-ready and really do wonders in this area.

Speaker 1

Thank you, sir. MEC Connect: I am happy to see the product; to be frank, we are very excited as well, and, as she said, we will be taking it abroad. We have a platform of students, so we will definitely be taking it forward and joining hands with them on this. Thank you. So, thanking our partners, may I request our partners Ajay Rivalia sir and Viplav sir to please come forward to mark this milestone. We have good education partners with us who plan to take us forward, not just across the country but across continents, and to make our intent stronger with an MoU: that together we stand to make education more meaningful for the Viksit Bharat to come. May we have a picture to document this? All of us are overwhelmed to stand on a stage which the government has provided us, so we want to be a part of this milestone.

Thank you. Thank you. Piyush sir, Gyan Prakash sir, please come on stage. Thank you. With this I will conclude the session. I hope everyone enjoyed this insightful and wonderful session, and everyone agreed with this: AI may automate ecosystems and systems, but creativity determines direction. Thank you so much, everyone.

Related Resources: Knowledge base sources related to the discussion topics (32)
Factual Notes: Claims verified against the Diplo knowledge base (5)
Confirmed (high)

“Benchmark Gensuite has a three‑decade history of digitising environment, health, safety and sustainability for roughly 450 global subscribers and eight million daily users.”

The knowledge base states that Benchmark Gensuite has been digitizing environment health and safety for 30 years, working with 450 global subscribers and 8 million users, confirming the report’s figures.

Confirmed (high)

“The company has identified about 75 distinct AI use cases and is re‑architecting its SaaS platform into an “AI‑first” solution.”

Both S1 and S2 note that Benchmark Gensuite has developed 75 AI use cases and is transforming its SaaS system to be AI‑first, corroborating the claim.

Additional Context (medium)

“The first AI agent demonstrated is an observation‑reporting tool (referred to as “Jenny AI”/“Genie AI”) that lets workers scan a QR code or upload a photo to auto‑populate an observation form.”

The knowledge base mentions an “observation reporting” program used for engaging people in reporting (see S32), confirming the existence of such a tool, though it does not reference the specific names Jenny AI or Genie AI.

Additional Context (medium)

“AI agents act as digital co‑workers, accelerating routine data capture but still requiring human validation, especially when broader context is missing.”

S46 and S25 discuss the “context gap” and the need for human oversight when AI agents lack sufficient information, providing additional nuance to the report’s statement about human validation.

Additional Context (low)

“Benchmark Gensuite is moving toward “agentify‑ing” many of its AI use cases so autonomous agents can perform heavy‑lifting previously done by humans.”

S112 describes autonomous AI agents as the next phase of enterprise automation, supporting the notion that Benchmark Gensuite is adopting agentic automation for complex tasks.

External Sources (125)
S1
AI for Safer Workplaces & Smarter Industries Transforming Risk into Real-Time Intelligence — Naveen GV: out a long, lengthy form of information for that to be processed much later by another human…
S2
AI for Safer Workplaces & Smarter Industries_ Transforming Risk into Real-Time Intelligence — Speakers:Ashish Gupta, Piyush Nangru Speakers:Audience (Saurav), Piyush Nangru, Speaker 4 Speakers:Naveen GV, Piyush N…
S3
Keynote-Martin Schroeter — -Speaker 1: Role/Title: Not specified, Area of expertise: Not specified (appears to be an event moderator or host introd…
S4
Responsible AI for Children Safe Playful and Empowering Learning — -Speaker 1: Role/title not specified – appears to be a student or child participant in educational videos/demonstrations…
S5
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Vijay Shekar Sharma Paytm — -Speaker 1: Role/Title: Not mentioned, Area of expertise: Not mentioned (appears to be an event host or moderator introd…
S6
Knowledge Café: Youth building the digital future – WSIS+20 Review and Beyond 2025 — – **Speaker 5** – Role/expertise not specified Speaker 5: Sure. So what we talked about as a group is we discussed this…
S9
Building the Workforce_ AI for Viksit Bharat 2047 — -Speaker 1- Role/Title: Not specified, Area of expertise: Not specified -Speaker 3- Role/Title: Not specified, Area of …
S10
S12
Keynote_ 2030 – The Rise of an AI Storytelling Civilization _ India AI Impact Summit — -Speaker 2: Role appears to be event moderator or host. Area of expertise and specific title not mentioned.
S13
AI Impact Summit 2026: Global Ministerial Discussions on Inclusive AI Development — -Speaker 1- Role/title not specified (appears to be a moderator/participant) -Speaker 2- Role/title not specified (appe…
S14
Policy Network on Artificial Intelligence | IGF 2023 — Moderator 2, Affiliation 2 Speaker 1, Affiliation 1 Speaker 2, Affiliation 2
S15
Responsible AI for Children Safe Playful and Empowering Learning — -Speaker 1: Role/title not specified – appears to be a student or child participant in educational videos/demonstrations…
S16
S17
Press Briefing by HMIT Ashwani Vaishnav on AI Impact Summit 2026 l Day 5 — -Speaker 4: Role/title not mentioned (made a brief interjection during the session)
S18
Educating for Viksit Bharat_ Why Creativity Cognition & Culture Matter — Professor Ashish Gupta from South Asian University, established by SAARC nations, brought an international perspective f…
S20
AI for Safer Workplaces & Smarter Industries_ Transforming Risk into Real-Time Intelligence — Speakers:Ashish Gupta, Piyush Nangru Speakers:Audience member, Ashish Gupta Speakers:Naveen GV, Piyush Nangru, Speaker…
S21
AI for Safer Workplaces & Smarter Industries Transforming Risk into Real-Time Intelligence — -Chandan: Colleague of Naveen GV who was mentioned to take over the presentation but appears to be the same person refer…
S22
AI for Safer Workplaces & Smarter Industries_ Transforming Risk into Real-Time Intelligence — Speakers:Naveen GV, Piyush Nangru, Speaker 4, Ashish Gupta, Speaker 2, Shweta Chaudhary Speakers:Naveen GV, Speaker 1 …
S23
Keynote-Martin Schroeter — -Speaker 1: Role/Title: Not specified, Area of expertise: Not specified (appears to be an event moderator or host introd…
S24
Agenda item 6 — – Providing ongoing training for CERT team members, keeping them informed of new threats and defensive tactics. – Streng…
S26
WS #280 the DNS Trust Horizon Safeguarding Digital Identity — – **Audience** – Individual from Senegal named Yuv (role/title not specified)
S27
Building the Workforce_ AI for Viksit Bharat 2047 — -Audience- Role/Title: Professor Charu from Indian Institute of Public Administration (one identified audience member), …
S28
Nri Collaborative Session Navigating Global Cyber Threats Via Local Practices — – **Audience** – Dr. Nazar (specific role/title not clearly mentioned)
S29
AI for Safer Workplaces & Smarter Industries Transforming Risk into Real-Time Intelligence — -Shweta Chaudhary: Dr. Shweta Chaudhary, founder and director of CodeEDU, host of the session, leader working at interse…
S30
AI for Safer Workplaces & Smarter Industries_ Transforming Risk into Real-Time Intelligence — Evidence:Conclusion drawn from the entire panel discussion and the launch of ENCODE platform, which focuses on creativit…
S31
Keynote by Naveen Tewari Founder & CEO, inMobi India AI Impact Summit — This appears to be a keynote presentation rather than an interactive discussion, with Naveen Tewari as the sole substant…
S32
https://app.faicon.ai/ai-impact-summit-2026/ai-for-safer-workplaces-smarter-industries_-transforming-risk-into-real-time-intelligence — But now let’s look at what the transformative way of looking into these hazards and risks. So I will go online. What you…
S33
Ethics in the Age of AI — The need to preserve traditional forms of interaction and learning is also brought up. The analysis suggests that apps a…
S34
Challenging the status quo of AI security — Babak Hodjat: Thank you very much, Sounil. Yeah, we came out here for two reasons, as cognizant, one, to get people invo…
S35
WS #110 AI Innovation Responsible Development Ethical Imperatives — Daisy Selematsela: Thank you. I just want to highlight on issues faced by academic libraries when we look at the integra…
S36
[Parliamentary Session 3] Researching at the frontier: Insights from the private sector in developing large-scale AI systems — Basma Ammari: I mean, this was every time there’s a tech revolution, historically, we do see, you know, loss of jobs, …
S37
NRIs MAIN SESSION: DATA GOVERNANCE — Additionally, they advocate for public forums to provide opportunities for users to give feedback, thus enhancing data q…
S38
AI: The Great Equaliser? — Transparency and quality of information are essential
S39
Open Forum #8 AFRICAN UNION OPEN FORUM 2024 — Speaker 4: that. Yes. So I would like to start by thanking director UNU Macau. Definitely at the African Union we valu…
S40
Bridging the AI innovation gap — LJ Rich: to invite our opening keynote. It’s a pleasure to invite to the stage the director of the Telecommunications St…
S41
Comprehensive Discussion Report: The Future of Artificial General Intelligence — The session examined critical questions surrounding the timeline for achieving Artificial General Intelligence (AGI) and…
S42
“Re” Generative AI: Using Artificial and Human Intelligence in tandem for innovation — Audience:Atomic bombs. Yeah. Well, that one they asked pretty early. Yeah. What I’m saying is, I think that AI is like a…
S43
Transforming Rural Governance Through AI: India’s Journey Towards Inclusive Digital Democracy — Absolutely. And if AI tools like Praman and Sabha Sar and, you know, Pancham can help that strengthen, what best, you kn…
S44
Science AI & Innovation_ India–Japan Collaboration Showcase — Yeah, I think I think sort of agree to what everybody has talked about. I think with AI and the smartphone and we are on…
S45
Ethical principles for the use of AI in cybersecurity | IGF 2023 WS #33 — Anastasiya Kozakova:Thank you very much. It’s a pleasure to be here. I represent the civil society organization. I work …
S46
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Jeetu Patel President and Chief Product Officer Cisco Inc — The Context GapThe second constraint centres on the context gap, which Patel illustrated through a compelling medical an…
S47
WS #288 An AI Policy Research Roadmap for Evidence-Based AI Policy — This observation added practical complexity to the discussion and demonstrated how theoretical policy frameworks can hav…
S48
AI and human creativity: Who should hold the brush? — Economic structures that value human creativity:If AI can flood the market with ‘good enough’ content at minimal cost, w…
S49
AI Transformation in Practice_ Insights from India’s Consulting Leaders — Shetty made a philosophical point about AI’s limitations, noting that AI is based on past inferences: “AI couldn’t have …
S50
Invest India Fireside Chat — Discussion point:Education and Future Learning Models
S51
Welfare for All Ensuring Equitable AI in the Worlds Democracies — Amanda describes Microsoft’s ambitious scaling of their skills development program in India, doubling their original com…
S52
Tailored AI agents improve work output—at a social cost — AI agents cansignificantly improve workplace productivitywhen tailored to individual personality types, according to new…
S53
Agentic AI in Focus Opportunities Risks and Governance — “We want standards.”[2]. “So we’re talking about standards.”[4]. “We’re talking about technical benchmarks.”[31]. “Don’t…
S54
Press Conference: Closing the AI Access Gap — Moreover, the speakers argue that AI can drive productivity, creativity, and overall economic growth. It has the capacit…
S55
Enhancing rather than replacing humanity with AI — People’s judgment remains crucial, particularly for decisions that involve values, context, or individual circumstances.
S56
Educating for Viksit Bharat_ Why Creativity Cognition & Culture Matter — Human intelligence will remain superior to artificial intelligence because creativity, cognition, and culture are unique…
S57
AI for Safer Workplaces & Smarter Industries Transforming Risk into Real-Time Intelligence — Creativity, cognition, and culture are key pillars that define human beings and will remain crucial differentiators
S58
AI and human creativity: Who should hold the brush? — For many established artists, AI has also become a collaborator rather than a threat. It can generate early concepts to …
S59
AI and the moral compass: What we can do vs what we should do — If technology can perform both creative and physical labour, what remains distinctly human is not the task itself, but t…
S60
Open Forum: A Primer on AI — One significant argument put forward is that AI lacks true imaginative capabilities. While AI is a great mimic, it is no…
S61
Inclusive AI For A Better World, Through Cross-Cultural And Multi-Generational Dialogue — Factors such as restricted access to computing resources and data further impede policy efficacy. Nevertheless, the cont…
S62
WS #288 An AI Policy Research Roadmap for Evidence-Based AI Policy — Anne Flanagan: Hello, apologies that I’m not there in person today. I’m in transit at the moment, hence my picture on yo…
S63
Building the AI-Ready Future From Infrastructure to Skills — Thomas describes a governance model for AI systems where autonomous AI agents can operate at machine speed but require h…
S64
Deepfakes and the AI scam wave eroding trust — Calls for regulation are understandable, but policy has inherent limitations in this space. Deepfakes evolve faster than…
S65
Open Forum #58 Collaborating for Trustworthy AI an Oecd Toolkit and Spotlight on AI in Government — Marlon Avalos: So, please. Thank you, Ida-san. This is an immersive experience. I just lost my connection, and this is a…
S66
WS #110 AI Innovation Responsible Development Ethical Imperatives — Ricardo Israel Robles Pelayo: Thank you very much. Good afternoon, everyone. It is an honor to be here and share a refle…
S67
Day 0 Event #173 Building Ethical AI: Policy Tool for Human Centric and Responsible AI Governance — Chris Martin: Thanks, Ahmed. Well, everyone, I’ll walk through I think a little bit of this presentation here on what…
S68
AI Impact Summit 2026: Global Ministerial Discussions on Inclusive AI Development — Ante este panorama, los países del sur global debemos priorizar estrategias y normativas para un uso ético y responsable…
S69
AI for Social Empowerment_ Driving Change and Inclusion — This discussion focused on the impact of artificial intelligence on labor markets and employment, featuring perspectives…
S70
The Impact of Digitalisation and AI on Employment Quality – Challenges and Opportunities — Mr. Sher Verick:Great. Well, thank you very much. It’s a real pleasure to be with you here today. I think Janine updated…
S71
How AI Is Transforming Indias Workforce for Global Competitivene — Impact:This grounded the discussion in practical reality, shifting focus from theoretical AI capabilities to actual ente…
S72
AI-Driven Enforcement_ Better Governance through Effective Compliance & Services — Explanation:It was unexpected to see both regulatory leaders emphasizing that AI development should not be confined to I…
S73
Day 0 Event #251 Large Models and Small Player Leveraging AI in Small States and Startups — Karianne Tung: Good afternoon, everyone. It is a pleasure being here and to start this very interesting discussion on le…
S74
Driving Indias AI Future Growth Innovation and Impact — Explanation:The strong consensus between industry and government on prioritizing mass accessibility over premium service…
S75
The Role of Government and Innovators in Citizen-Centric AI — Lucilla emphasizes that having the technical components (models, computing capacity, datasets) is not sufficient – there…
S76
Diplomatic policy analysis — Overreliance on technology:While machine learning and analytics are powerful tools, they are not infallible. Overdepende…
S77
WS #283 AI Agents: Ensuring Responsible Deployment — Carter emphasizes that safeguarding agentic AI requires putting users in control through granular preferences about data…
S78
The Agent Universe From Automation to Autonomy — Summary:The main areas of disagreement center around workforce development approaches (formal training vs. self-directed…
S79
AI for Safer Workplaces & Smarter Industries_ Transforming Risk into Real-Time Intelligence — A bumblebee cannot fly but it still does. The thing is that when this statement was made in 1930, we understood very lit…
S80
AI for Safer Workplaces &amp; Smarter Industries Transforming Risk into Real-Time Intelligence — Artificial intelligence | Social and economic development Benchmark Gen Street has been digitizing environment health a…
S81
AI for Safer Workplaces & Smarter Industries_ Transforming Risk into Real-Time Intelligence — out a long, lengthy form of information for that to be processed much later by another human in the loop, per se, to rea…
S82
Ensuring Safe AI_ Monitoring Agents to Bridge the Global Assurance Gap — Thank you very much, Rebecca, and also very much appreciate Partnership on AI for the invitation. When this series of su…
S83
Ethical principles for the use of AI in cybersecurity | IGF 2023 WS #33 — Anastasiya Kozakova:Thank you very much. It’s a pleasure to be here. I represent the civil society organization. I work …
S84
Research shows AI complements, not replaces, human work — AI headlines often flip between hype and fear, but the truth is more nuanced. Much research is misrepresented, with task…
S85
As AI agents proliferate, human purpose is being reconsidered — As AI agentsrapidly evolvefrom tools to autonomous actors, experts are raising existential questions about human value a…
S86
WS #283 AI Agents: Ensuring Responsible Deployment — Wingfield challenged Carter’s framing of tasks like financial management as routine, arguing that “things like financial…
S87
Educating for Viksit Bharat_ Why Creativity Cognition & Culture Matter — Human intelligence will remain superior to artificial intelligence because creativity, cognition, and culture are unique…
S88
AI Transformation in Practice_ Insights from India’s Consulting Leaders — Shetty made a philosophical point about AI’s limitations, noting that AI is based on past inferences: “AI couldn’t have …
S89
AI and human creativity: Who should hold the brush? — This simple statement, which circulated widely on social media recently, captures a profound anxiety rippling through th…
S90
AI-Powered Chips and Skills Shaping Indias Next-Gen Workforce — Discussion point:Ecosystem-wide skill requirements Discussion point:Educational program expansion
S91
Comprehensive Report: China’s AI Plus Economy Initiative – A Strategic Discussion on Artificial Intelligence Development and Implementation — This insight recognizes that AI education is happening organically through accessible tools rather than just formal educ…
S92
Invest India Fireside Chat — Discussion point:Education and Future Learning Models
S93
Tailored AI agents improve work output—at a social cost — AI agents cansignificantly improve workplace productivitywhen tailored to individual personality types, according to new…
S94
Agentic AI in Focus Opportunities Risks and Governance — Absolutely. Thank you, Jason. Thank you to ITI, and thank you all for coming today. As Jason said, my name is Austin May…
S95
Opening keynote — Doreen Bogdan-Martin:Good morning, and welcome to the AI for Good Global Summit. Let me start by thanking our more than …
S96
AI Technology-a source of empowerment in consumer protection | IGF 2023 Open Forum #82 — Piotr Adamczewski:Thank you Martina, I totally agree that we have to discuss the problem of using AI, I have to also adm…
S97
Engineering Accountable AI Agents in a Global Arms Race: A Panel Discussion Report — The discussion maintained a thoughtful but somewhat cautious tone throughout, with speakers acknowledging both opportuni…
S98
GermanAsian AI Partnerships Driving Talent Innovation the Future — Dr. Kofler acknowledges that people have legitimate fears about AI displacing jobs and emphasizes the importance of addr…
S99
AI and Digital Developments Forecast for 2026 — The tone begins as analytical and educational but becomes increasingly cautionary and urgent throughout the conversation…
S100
Thinking through Augmentation — However, there is also discussion surrounding the risks and concerns associated with AI. Some believe that it could lead…
S101
Upholding Human Rights in the Digital Age: Fostering a Multistakeholder Approach for Safeguarding Human Dignity and Freedom for All — With the advent of artificial intelligence, jobs are changing, and there are concerns that labour protections are being …
S102
Elevating AI skills for all — The tone is consistently optimistic, enthusiastic, and collaborative throughout. The speaker maintains an upbeat, missio…
S103
Upskilling for the AI era: Education’s next revolution — The tone is consistently optimistic, motivational, and action-oriented throughout. The speaker maintains an enthusiastic…
S104
“Re” Generative AI: Using Artificial and Human Intelligence in tandem for innovation — Furthermore, this approach echoes the ethos of SDG 17, Partnerships for the Goals, recognising that multifaceted collabo…
S105
Networking Session #60 Risk & impact assessment of AI on human rights & democracy — The tone was largely collaborative and optimistic, with speakers building on each other’s points and emphasizing the imp…
S106
AI (and) education: Convergences between Chinese and European pedagogical practices — The discussion maintained a collaborative and optimistic tone throughout, characterized by intellectual curiosity and co…
S107
Partnering on American AI Exports Powering the Future India AI Impact Summit 2026 — The tone is consistently optimistic, collaborative, and forward-looking throughout the discussion. Speakers emphasize “l…
S108
AI for food systems — The tone throughout the discussion was consistently formal, optimistic, and collaborative. It maintained a ceremonial qu…
S109
AI Innovation in India — The tone was consistently celebratory, inspirational, and optimistic throughout the discussion. Speakers expressed pride…
S110
Closing remarks — The tone is consistently celebratory, optimistic, and forward-looking throughout the discussion. It maintains an enthusi…
S111
Closing Ceremony — The discussion maintains a consistently positive and collaborative tone throughout, characterized by gratitude, celebrat…
S112
Autonomous AI agents are the next phase of enterprise automation — Organisations across sectors are turning to agentic automation—an emerging class of AI systems designed to think, plan, a…
S113
U.S. AI Standards Shaping the Future of Trustworthy Artificial Intelligence — Sihao Huang: of these agents work with each other smoothly. And protocols are so important because that…
S114
SAP unveils new models and tools shaping enterprise AI — The German multinational software company, SAP, used its TechEd event in Berlin to reveal a significant expansion of its Bus…
S115
Agentic Intelligence set to automate complex tasks with human oversight — Thomson Reuters has unveiled a new AI platform, Agentic Intelligence, designed to automate complex workflows for professi…
S116
Living with the genie: Responsible use of genAI in content creation — Halima Ismail: Can I? Yeah. So we can solve this by the input. It’s based on the input. For example, if we are detecting …
S117
Protecting vulnerable groups online from harmful content – new (technical) approaches — The speaker, evidently in a coordinating role, commenced with vital updates for the attendees, underlining their intenti…
S118
Harnessing Collective AI for India’s Social and Economic Development — So, yeah, I think we’ve heard, I think, a little bit from some AI leaders about a next wave of AI that will be agentic, …
S119
Policymaker’s Guide to International AI Safety Coordination — In terms of what is the key to success, what is the most important lesson on looking back on what we need, trust is buil…
S120
AI redefines how cybersecurity teams detect and respond — AI, especially generative models, has become a staple in cybersecurity operations, extending its role from traditional ma…
S121
Delegated decisions, amplified risks: Charting a secure future for agentic AI — Meredith Whittaker: Yeah. Well, I think governments and private citizens should be asking these questions. Do not feel l…
S122
Annex 5 — corrective and preventive action (CAPA, also sometimes called corrective action/preventive action) refers to the …
S123
ECOWAS Regional Critical Infrastructure Protection Policy — proposes a list of preventive, reactive and proactive measures that can be implemented;
S124
Table of Contents — Part III makes recommendations to maximize the use of broadband to address national priorities. This includes reforming …
S125
Annex to the Government’s Proposal — – defining and planning the goals (according to their orientation, scope and time span); – supporting, forecasting and m…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Naveen GV
1 argument · 144 words per minute · 564 words · 234 seconds
Argument 1
AI‑first SaaS transformation for EHS, with 75 use cases and autonomous agents (Naveen GV)
EXPLANATION
Naveen describes how Benchmark Gen Street is converting its long‑standing EHS SaaS platform into an AI‑first solution. Over the past three years the company has built around 75 AI use cases and is now focusing on ‘agentifying’ these capabilities to automate safety workflows.
EVIDENCE
He explains that the challenge over the last three years has been to transform a SaaS-based system into an AI-first product, noting the existence of about 75 different AI use cases and the move towards autonomous agents that deliver value for engagement [5-9].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The session overview describes Benchmark GenStreet’s shift to an AI-first platform serving 450 global subscribers and highlights the development of dozens of AI use cases, confirming the transformation and scale mentioned [S1].
MAJOR DISCUSSION POINT
AI‑first SaaS transformation
AGREED WITH
Speaker 1, Speaker 4
DISAGREED WITH
Speaker 1, Speaker 4
Speaker 1
2 arguments · 146 words per minute · 3270 words · 1341 seconds
Argument 1
AI observation reporting that auto‑fills hazard forms from photos or voice input (Speaker 1)
EXPLANATION
Speaker 1 showcases the Observation Reporting program where workers can capture a photo or speak a description of a hazard, and an AI agent called Jenny AI analyses the input and automatically completes the safety form. This reduces manual data entry and speeds up reporting.
EVIDENCE
He demonstrates that a worker can scan a QR code or upload a photo, which is sent to the Jenny AI agent that analyses the image, identifies hazards such as missing fall-protection equipment, and fills the entire form on the user’s behalf; the same workflow is shown for Hindi voice input, where the AI transcribes and structures the report [22-26][30-36][46-58].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The demo of the Observation Reporting program shows workers uploading photos or speaking Hindi descriptions, with the Jenny AI agent analysing the input and completing the safety form automatically [S2][S32].
MAJOR DISCUSSION POINT
AI observation reporting
AGREED WITH
Speaker 4, Naveen GV
DISAGREED WITH
Naveen GV, Speaker 4
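The auto-fill flow described above can be sketched as a small mapping step. This is a minimal illustration, assuming a hypothetical form schema and field names; it is not Benchmark's actual API:

```python
from dataclasses import dataclass, field

# Hypothetical observation form; field names are illustrative,
# not Benchmark's actual schema.
@dataclass
class ObservationForm:
    hazard_type: str = ""
    location: str = ""
    description: str = ""
    missing_fields: list = field(default_factory=list)

def autofill_from_analysis(analysis: dict) -> ObservationForm:
    """Map a vision model's output onto the form, recording any required
    fields the AI could not infer from the photo alone."""
    form = ObservationForm(
        hazard_type=analysis.get("hazard_type", ""),
        location=analysis.get("location", ""),
        description=analysis.get("description", ""),
    )
    for required in ("hazard_type", "location", "description"):
        if not getattr(form, required):
            form.missing_fields.append(required)
    return form

# Simulated analysis of a photo showing missing fall protection
analysis = {"hazard_type": "fall hazard",
            "description": "Worker at height without fall-protection equipment"}
form = autofill_from_analysis(analysis)
print(form.missing_fields)  # the location was not visible in the photo
```

Tracking which required fields the model could not fill is what lets the agent hand the user a short gap list instead of a blank form.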
Argument 2
AI agents accelerate reporting but lack broader context, requiring human follow‑up questions (Speaker 1)
EXPLANATION
While the AI can auto‑populate most of the safety form, it cannot infer details not present in the image, such as the exact working height, and therefore asks the user follow‑up questions to obtain missing information.
EVIDENCE
He notes that the AI only sees the photo and therefore cannot determine specifics like the height at which workers are operating, prompting it to ask follow-up questions before finalising the report [39-43].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Discussion notes explain that agents without full context may make incorrect guesses and therefore ask follow-up questions to obtain missing details such as working height [S25][S33].
MAJOR DISCUSSION POINT
Limitations of AI context
AGREED WITH
Speaker 4, Naveen GV
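That follow-up behaviour can be sketched as a lookup from missing fields to targeted questions. Both the field names and the question wording below are hypothetical:

```python
# Question templates for details a photo cannot convey; the field names
# and wording are illustrative assumptions.
FOLLOW_UPS = {
    "working_height_m": "At what height (in metres) were the workers operating?",
    "location": "Where on site was this observed?",
}

def follow_up_questions(report: dict, required: list) -> list:
    """Return one question per required field the AI could not fill,
    instead of letting the agent guess."""
    return [FOLLOW_UPS[f] for f in required if f not in report and f in FOLLOW_UPS]

report = {"hazard_type": "fall hazard"}  # vision output; height and location unknown
questions = follow_up_questions(report, ["working_height_m", "location"])
for q in questions:
    print(q)
```

Asking rather than guessing keeps the human in the loop exactly where the model's context runs out.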
Speaker 3
1 argument · 145 words per minute · 649 words · 267 seconds
Argument 1
Uncertainty about when AI will surpass human intelligence; fear of job loss (Speaker 3)
EXPLANATION
Speaker 3 points out that many students and participants are unsure about the timeline for AI overtaking human capabilities, emphasizing that no predictive models exist and that the future impact remains unknown, which fuels anxiety about job displacement.
EVIDENCE
He observes that youngsters are unclear about when AI might overtake human intelligence, stating that there are no mathematical models to predict the timeline and that the future impact is uncertain [285-290].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Participants expressed anxiety about AI’s future impact and job displacement, noting the lack of predictive models for AI supremacy [S36][S2].
MAJOR DISCUSSION POINT
Timeline uncertainty for AI supremacy
DISAGREED WITH
Audience, Piyush Nangru, Shweta Chaudhary
Speaker 4
2 arguments · 152 words per minute · 1257 words · 494 seconds
Argument 1
Data quality and continuous human‑AI engagement are essential for reliable outcomes (Speaker 4)
EXPLANATION
Speaker 4 stresses that AI outputs are only as good as the data fed into them, warning that poor data leads to unreliable results and highlighting the need for ongoing human‑AI interaction to improve performance.
EVIDENCE
He explains that AI depends on the quality of data, noting that unreliable data produces unreliable results, and stresses the necessity of continuous engagement between humans and AI to enhance outcomes [209-216].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Data-governance discussions underline that AI outputs depend on data quality and require ongoing human-AI interaction to improve reliability [S37][S38].
MAJOR DISCUSSION POINT
Importance of data quality
AGREED WITH
Ashish Gupta, Shweta Chaudhary
DISAGREED WITH
Naveen GV, Speaker 1
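Speaker 4's point that unreliable data produces unreliable results is often operationalised as a quality gate with a human-review fallback. A minimal sketch, with illustrative field names and a threshold that is an assumption, not a value from the session:

```python
# Flag records that fail basic completeness checks, or whose AI output
# confidence is low, for human review. Field names and the 0.8 threshold
# are illustrative assumptions.
def needs_human_review(record: dict, confidence: float,
                       required=("hazard_type", "description"),
                       threshold=0.8) -> bool:
    incomplete = any(not record.get(f) for f in required)
    return incomplete or confidence < threshold

print(needs_human_review({"hazard_type": "chemical spill",
                          "description": "Leaking drum"}, 0.92))  # False
print(needs_human_review({"hazard_type": "chemical spill"}, 0.95))  # True: no description
```

Gating on both data completeness and model confidence is one simple way to keep the continuous human-AI engagement the speaker calls for.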
Argument 2
Government digital‑skilling initiatives and marketplaces aim to bring AI tools to all citizens, bridging urban‑rural gaps (Speaker 4)
EXPLANATION
Speaker 4 describes a government‑run marketplace where any GST‑registered vendor can register, have their products verified, and participate in procurement, and also mentions digital‑skilling portals that provide free AI‑readiness training, illustrating efforts to extend AI access nationwide.
EVIDENCE
He outlines a platform where GST-registered vendors can onboard, undergo verification, and be part of a procurement marketplace, and cites government digital-skilling portals offering free AI readiness training to citizens, demonstrating a strategy to reach both urban and rural users [379-383][332-337].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Government-run marketplaces for GST-registered vendors and free AI-readiness training portals are cited as efforts to extend AI access nationwide, especially to rural areas [S43][S44].
MAJOR DISCUSSION POINT
Digital‑skilling and AI marketplace
AGREED WITH
Ashish Gupta, Piyush Nangru, Shweta Chaudhary, Audience
Audience
1 argument · 148 words per minute · 658 words · 265 seconds
Argument 1
Audience query on timeline for AI overtaking human intelligence (Audience)
EXPLANATION
An audience member asks whether there is a specific timeline after which AI will surpass human intelligence, seeking a bound on when AI might become superior.
EVIDENCE
The audience asks, “you mean to say there will be a timeline where this human intelligence will cease to supersede… as AI improves is there a timeline…?” [261].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The comprehensive discussion on AGI explicitly addresses questions about when AI might surpass human intelligence, providing context for the audience’s timeline query [S41][S2].
MAJOR DISCUSSION POINT
Audience question on AI timeline
AGREED WITH
Ashish Gupta, Piyush Nangru, Speaker 4, Shweta Chaudhary
DISAGREED WITH
Speaker 3, Piyush Nangru, Shweta Chaudhary
Speaker 2
1 argument · 111 words per minute · 357 words · 191 seconds
Argument 1
Creativity is the one skill AI cannot originate; it will remain the decisive human advantage (Speaker 2)
EXPLANATION
Speaker 2 argues that while AI can generate content, it cannot originate lived experiences or true creativity, positioning creativity as the enduring human strength that AI cannot replace.
EVIDENCE
He states that AI can generate but cannot originate lived experiences, emphasizing that creativity is the decisive human advantage and that good design is about solutions rather than mere drawings [149-155].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Commentary on creativity emphasizes that AI can generate but not originate lived experiences, positioning creativity as a uniquely human strength [S18].
MAJOR DISCUSSION POINT
Creativity as uniquely human
AGREED WITH
Naveen GV, Shweta Chaudhary, Piyush Nangru, Ashish Gupta, Speaker 4
DISAGREED WITH
Piyush Nangru, Ashish Gupta
Piyush Nangru
2 arguments · 150 words per minute · 995 words · 396 seconds
Argument 1
Creativity, cognition, and culture are the three pillars that define human capital and future progress (Piyush Nangru)
EXPLANATION
Piyush identifies creativity, cognition and culture as the three fundamental pillars of human capital, explaining that while coding is now a baseline skill, true value lies in applying creativity, and that cultural diversity enriches cognition.
EVIDENCE
He says these three pillars define any human being, notes that coding is now table-stakes and that the application of creativity matters, and highlights the importance of cultural heritage and multilingualism in shaping cognition [185-190][191-199].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The same source outlines creativity, cognition and culture as the three fundamental pillars of human capital and future development [S18].
MAJOR DISCUSSION POINT
Pillars of human capital
AGREED WITH
Naveen GV, Speaker 2, Shweta Chaudhary, Ashish Gupta, Speaker 4
DISAGREED WITH
Speaker 2, Ashish Gupta
Argument 2
AI acts as a democratizing tool, enabling self‑learning and solopreneurship for rural and economically‑backward communities (Piyush Nangru)
EXPLANATION
Piyush claims that AI serves as a powerful democratizing force, allowing individuals in tier‑3 towns and rural areas to self‑motivate, learn, and launch solopreneur ventures such as building websites, creating content, and marketing independently.
EVIDENCE
He describes AI as a democratizing tool that enables self-motivation and solopreneurship in rural and economically-backward communities, allowing people to create websites, generate creative content, and market themselves without extensive resources [384-388].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Examples of AI-driven platforms that support rural entrepreneurship, self-learning and content creation illustrate AI’s democratizing role [S43][S44].
MAJOR DISCUSSION POINT
AI democratization for underserved regions
AGREED WITH
Ashish Gupta, Speaker 4, Shweta Chaudhary, Audience
DISAGREED WITH
Speaker 3, Audience, Shweta Chaudhary
Shweta Chaudhary
1 argument · 156 words per minute · 1664 words · 638 seconds
Argument 1
Human intelligence will continue to supersede AI; preserving humanness is vital (Shweta Chaudhary)
EXPLANATION
Shweta emphasizes that human intelligence will remain superior to AI and calls for the preservation of originality, creativity, and cultural identity as essential human traits in the AI era.
EVIDENCE
She thanks Umang for setting the stage, asks why human intelligence will endure in the age of AI, and stresses that originality and humanness must be kept intact, asserting that human intelligence will continue to supersede AI [173-176][201-204].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Session remarks stress that human intelligence will remain superior to AI and call for preserving originality and cultural identity [S1][S33].
MAJOR DISCUSSION POINT
Human intelligence vs AI
AGREED WITH
Ashish Gupta, Speaker 4
DISAGREED WITH
Speaker 2, Ashish Gupta
Ashish Gupta
1 argument · 153 words per minute · 1522 words · 595 seconds
Argument 1
Shift from knowledge to applied intelligence; need for ethical, responsible AI use in learning (Ashish Gupta)
EXPLANATION
Ashish outlines a transition from pure knowledge acquisition to applied intelligence, urging that AI be used ethically and responsibly in education, and highlighting the role of large language models in supporting learning while maintaining ethical standards.
EVIDENCE
He discusses the new ‘orange economy’, the shift from knowledge to applied intelligence, and stresses the importance of ethical and responsible AI use in learning, providing examples of AI-assisted education and the need for responsible deployment [224-230][301-339].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Issues raised by academic libraries highlight the need for responsible and ethical integration of AI in education and learning environments [S35].
MAJOR DISCUSSION POINT
Applied intelligence and AI ethics in education
AGREED WITH
Speaker 4, Shweta Chaudhary
DISAGREED WITH
Speaker 2, Piyush Nangru
Speaker 5
1 argument · 69 words per minute · 176 words · 152 seconds
Argument 1
ENCODE platform provides personalized, AI‑driven creative learning pathways and mentorship (Speaker 5)
EXPLANATION
Speaker 5 introduces ENCODE, a platform powered by machine learning, large language models and agentic AI that maps learners’ growth, interests and creative potential, offering mentorship, curated resources and personalized learning journeys.
EVIDENCE
He describes ENCODE as powered by ML, LLMs and agentic AI, mapping growth and creative potential, fostering mentorship, discovery and skill development through personalized courses and resource hubs [446-454].
MAJOR DISCUSSION POINT
AI‑driven creative learning platform
Speaker 6
1 argument · 162 words per minute · 71 words · 26 seconds
Argument 1
Design‑oriented courses integrate AI with entrepreneurship training to produce industry‑ready graduates (Speaker 6)
EXPLANATION
Speaker 6 explains that their institution combines design thinking with coding education, teaching students design and digital thinking so they can apply these skills in product development and become entrepreneurs ready for industry.
EVIDENCE
He states that the focus is on teaching students not only coding but also design thinking and digital thinking, enabling them to apply these skills in product development and entrepreneurship, thereby making them industry-ready [459-465].
MAJOR DISCUSSION POINT
Design‑oriented AI education
Agreements
Agreement Points
AI is a powerful enabler but human creativity, cognition and culture remain essential and must guide AI outcomes
Speakers: Naveen GV, Speaker 2, Shweta Chaudhary, Piyush Nangru, Ashish Gupta, Speaker 4
AI‑first SaaS transformation for EHS, with 75 use cases and autonomous agents (Naveen GV)
Creativity is the one skill AI cannot originate; it will remain the decisive human advantage (Speaker 2)
Human intelligence will continue to supersede AI; preserving humanness is vital (Shweta Chaudhary)
Creativity, cognition, and culture are the three pillars that define human capital and future progress (Piyush Nangru)
Shift from knowledge to applied intelligence; need for ethical, responsible AI use in learning (Ashish Gupta)
Data quality and continuous human‑AI engagement are essential for reliable outcomes (Speaker 4)
All these speakers agree that AI should be viewed as a tool that amplifies safety, education and business processes, but the ultimate direction, quality and ethical use depend on uniquely human traits such as creativity, cognition, culture and continuous human oversight [5-9][149-155][173-176][201-204][185-190][191-199][301-307][209-216].
POLICY CONTEXT (KNOWLEDGE BASE)
This view aligns with human-centric AI principles that stress creativity, cognition and culture as uniquely human pillars that must steer AI development, as highlighted in expert commentaries on the need for human judgment and the limits of machine imagination [S55][S56][S57][S60].
AI can democratize access to services and bridge urban‑rural digital divides
Speakers: Speaker 4, Piyush Nangru, Speaker 1
Government digital‑skilling initiatives and marketplaces aim to bring AI tools to all citizens, bridging urban‑rural gaps (Speaker 4)
AI acts as a democratizing tool, enabling self‑learning and solopreneurship for rural and economically‑backward communities (Piyush Nangru)
AI observation reporting that auto‑fills hazard forms from photos or voice input (Speaker 1)
The government representative highlights nationwide AI-readiness portals and vendor marketplaces, the private sector speaker stresses AI’s role in empowering rural entrepreneurs, and the product demo shows multilingual, QR-code based reporting that works for non-English speakers, together signalling a shared belief that AI can be made widely accessible [379-383][332-337][384-388][46-58].
Building digital skills and capacity is essential for effective AI adoption
Speakers: Ashish Gupta, Piyush Nangru, Speaker 4, Shweta Chaudhary, Audience
Shift from knowledge to applied intelligence; need for ethical, responsible AI use in learning (Ashish Gupta)
AI acts as a democratizing tool, enabling self‑learning and solopreneurship for rural and economically‑backward communities (Piyush Nangru)
Government digital‑skilling initiatives and marketplaces aim to bring AI tools to all citizens, bridging urban‑rural gaps (Speaker 4)
Human intelligence will continue to supersede AI; preserving humanness is vital (Shweta Chaudhary)
Audience query on timeline for AI overtaking human intelligence (Audience)
Multiple participants stress that continuous education, ethical training and government-backed skilling programmes are required so that workers, students and citizens can harness AI responsibly and remain competitive [301-307][338-339][332-337][173-176][261].
POLICY CONTEXT (KNOWLEDGE BASE)
Skill-building is emphasized in workforce transformation research and policy recommendations that call for AI-ready societies, underscoring that human capabilities remain critical for successful AI uptake [S71][S75][S78].
AI systems have inherent limitations and require human validation and quality data
Speakers: Speaker 1, Speaker 4, Naveen GV
AI observation reporting that auto‑fills hazard forms from photos or voice input (Speaker 1)
AI agents accelerate reporting but lack broader context, requiring human follow‑up questions (Speaker 1)
Data quality and continuous human‑AI engagement are essential for reliable outcomes (Speaker 4)
AI‑first SaaS transformation for EHS, with 75 use cases and autonomous agents (Naveen GV)
The demo acknowledges that AI can auto-populate forms but cannot infer missing details such as exact working height, prompting follow-up queries; this mirrors the broader point that AI outputs depend on data quality and must be overseen by humans [39-43][209-216][5-9].
POLICY CONTEXT (KNOWLEDGE BASE)
Governance models that mandate human validation of AI outputs and high-quality data are advocated in responsible AI frameworks, highlighting the need for oversight to mitigate AI’s intrinsic constraints [S55][S63][S76][S77].
Ethical and responsible use of AI is a shared priority
Speakers: Ashish Gupta, Speaker 4, Shweta Chaudhary
Shift from knowledge to applied intelligence; need for ethical, responsible AI use in learning (Ashish Gupta)
Data quality and continuous human‑AI engagement are essential for reliable outcomes (Speaker 4)
Human intelligence will continue to supersede AI; preserving humanness is vital (Shweta Chaudhary)
All three stress that AI must be deployed with ethical safeguards, high-quality data and a focus on preserving uniquely human values, underscoring a common normative stance [301-307][338-339][209-216][173-176].
POLICY CONTEXT (KNOWLEDGE BASE)
This priority mirrors international ethical AI commitments and policy toolkits that stress responsible development, transparency and accountability as core principles [S55][S66][S67][S69].
Similar Viewpoints
Both argue that creativity (and the broader trio of cognition and culture) is the core human strength that AI cannot replace, positioning it as the decisive competitive advantage [149-155][185-190][191-199].
Speakers: Speaker 2, Piyush Nangru
Creativity is the one skill AI cannot originate; it will remain the decisive human advantage (Speaker 2)
Creativity, cognition, and culture are the three pillars that define human capital and future progress (Piyush Nangru)
Both see AI as a democratizing force that can close the urban‑rural divide and empower underserved populations through skill‑building and entrepreneurship [379-383][332-337][384-388].
Speakers: Speaker 4, Piyush Nangru
Government digital‑skilling initiatives and marketplaces aim to bring AI tools to all citizens, bridging urban‑rural gaps (Speaker 4)
AI acts as a democratizing tool, enabling self‑learning and solopreneurship for rural and economically‑backward communities (Piyush Nangru)
Both stress that AI deployment must be coupled with high‑quality data, ethical guidelines and continuous human involvement to ensure trustworthy outcomes [301-307][338-339][209-216].
Speakers: Ashish Gupta, Speaker 4
Shift from knowledge to applied intelligence; need for ethical, responsible AI use in learning (Ashish Gupta)
Data quality and continuous human‑AI engagement are essential for reliable outcomes (Speaker 4)
Unexpected Consensus
Business leader and public‑sector representative both prioritize AI‑driven democratization of services
Speakers: Naveen GV, Speaker 4
AI‑first SaaS transformation for EHS, with 75 use cases and autonomous agents (Naveen GV) Government digital‑skilling initiatives and marketplaces aim to bring AI tools to all citizens, bridging urban‑rural gaps (Speaker 4)
While Naveen speaks from a private-sector, profit-driven safety platform perspective, and Speaker 4 from a government policy angle, both converge on the belief that AI should be scaled to reach all users, including remote and underserved groups – a convergence not explicitly anticipated given their differing organisational motives [5-9][379-383][332-337].
Overall Assessment

The discussion shows a strong, cross‑sectoral consensus that AI is a transformative enabler but must be paired with human creativity, high‑quality data, ethical safeguards and widespread digital skills. Participants uniformly endorse capacity‑building, inclusive access and continuous human oversight as prerequisites for responsible AI deployment.

High consensus – the shared viewpoints cut across business, government, academia and civil society, indicating that future policies should focus on education, data governance, ethical frameworks and inclusive infrastructure to realise AI’s benefits while preserving human agency.

Differences
Different Viewpoints
Timeline and eventual supremacy of AI over human intelligence
Speakers: Speaker 3, Audience, Piyush Nangru, Shweta Chaudhary
Uncertainty about when AI will surpass human intelligence; fear of job loss (Speaker 3)
Audience query on timeline for AI overtaking human intelligence (Audience)
AI acts as a democratizing tool, enabling self‑learning and solopreneurship for rural and economically‑backward communities (Piyush Nangru)
Human intelligence will continue to supersede AI; preserving humanness is vital (Shweta Chaudhary)
Speaker 3 says there is no predictive model for when AI will outstrip humans, expressing uncertainty [285-290]; the audience explicitly asks whether such a timeline exists [261]; Piyush acknowledges that the timeline may already have arrived but says the question is not easy to answer [278-279]; Shweta counters that human intelligence will remain superior, implying AI will not overtake [173-176][201-204].
Extent of AI autonomy versus need for human oversight and data quality
Speakers: Naveen GV, Speaker 1, Speaker 4
AI‑first SaaS transformation for EHS, with 75 use cases and autonomous agents (Naveen GV)
AI observation reporting that auto‑fills hazard forms from photos or voice input (Speaker 1)
Data quality and continuous human‑AI engagement are essential for reliable outcomes (Speaker 4)
Naveen pushes for a platform-wide AI-first transformation with autonomous agents handling safety workflows [5-9]; Speaker 1 demonstrates an AI observation-reporting tool that can auto-populate forms but admits it lacks broader context and must ask follow-up questions for missing details [39-43]; Speaker 4 warns that AI outputs depend on the quality of data and require ongoing human-AI interaction to stay reliable [209-216].
POLICY CONTEXT (KNOWLEDGE BASE)
The tension between autonomous AI agents and mandatory human oversight is a recurring theme in AI governance roadmaps and safety recommendations, which call for human validation before AI-driven changes are enacted [S63][S77][S76][S55].
Impact of AI on employment and the relevance of human skills
Speakers: Speaker 2, Shweta Chaudhary, Ashish Gupta
Creativity is the one skill AI cannot originate; it will remain the decisive human advantage (Speaker 2)
Human intelligence will continue to supersede AI; preserving humanness is vital (Shweta Chaudhary)
Shift from knowledge to applied intelligence; need for ethical, responsible AI use in learning (Ashish Gupta)
Speaker 2 predicts that resumes will become obsolete by 2030 because AI will do everything faster, better and cheaper [158-163]; Shweta argues that human intelligence will stay superior and originality must be kept intact, suggesting AI will not replace humans [173-176][201-204]; Ashish stresses that AI should be used ethically as an assistive tool while human decision-making remains central [301-339].
POLICY CONTEXT (KNOWLEDGE BASE)
Research from the ILO and other labor studies highlights AI’s mixed effects on jobs and stresses that human skills remain essential, providing a historical backdrop to current debates on workforce relevance [S69][S70][S71][S78].
Whether AI can generate or support creativity versus it being uniquely human
Speakers: Speaker 2, Piyush Nangru, Ashish Gupta
Creativity is the one skill AI cannot originate; it will remain the decisive human advantage (Speaker 2)
Creativity, cognition, and culture are the three pillars that define human capital and future progress (Piyush Nangru)
Shift from knowledge to applied intelligence; need for ethical, responsible AI use in learning (Ashish Gupta)
Speaker 2 claims AI can generate but cannot originate lived experiences, positioning creativity as uniquely human [149-155]; Piyush highlights creativity, cognition and culture as essential pillars while also stating that AI democratizes learning and can help people create, implying AI can support creative processes [185-190][191-199]; Ashish describes a shift to applied intelligence where AI assists but human creativity still drives solutions [224-230][301-339].
POLICY CONTEXT (KNOWLEDGE BASE)
Scholarly discussions differentiate AI-assisted creativity from true human imagination, noting AI’s role as a collaborator but not a substitute for uniquely human creative insight [S56][S57][S58][S60].
Unexpected Differences
Resumes will die vs human intelligence remains superior
Speakers: Speaker 2, Shweta Chaudhary
Creativity is the one skill AI cannot originate; it will remain the decisive human advantage (Speaker 2)
Human intelligence will continue to supersede AI; preserving humanness is vital (Shweta Chaudhary)
Speaker 2 makes a bold prediction that resumes will become obsolete by 2030 because AI will replace most skills [158-163], whereas Shweta asserts that human intelligence will continue to outrank AI and that originality must be kept intact, implying that such a collapse of human-based resumes is unlikely [173-176][201-204].
Full AI autonomy vs need for continuous human data oversight
Speakers: Naveen GV, Speaker 4
AI-first SaaS transformation for EHS, with 75 use cases and autonomous agents (Naveen GV)
Data quality and continuous human-AI engagement are essential for reliable outcomes (Speaker 4)
Naveen promotes a vision of a completely autonomous, AI-first safety platform [5-9], while Speaker 4 cautions that AI results are only as good as the data fed into them and that ongoing human-AI interaction is required to maintain reliability [209-216]. The contrast between a fully autonomous system and a data-quality-driven, human-in-the-loop approach was not anticipated given their shared focus on safety AI.
POLICY CONTEXT (KNOWLEDGE BASE)
Responsible AI deployment guidelines repeatedly call for continuous human oversight of data and model behavior, even in highly autonomous systems, to ensure accountability and prevent unintended outcomes [S63][S77][S55].
Uncertainty about AI timeline vs claim of immediate relevance
Speakers: Speaker 3, Piyush Nangru
Uncertainty about when AI will surpass human intelligence; fear of job loss (Speaker 3)
AI acts as a democratizing tool, enabling self-learning and solopreneurship for rural and economically backward communities (Piyush Nangru)
Speaker 3 stresses that there is no model to predict when AI will overtake humans, highlighting uncertainty [285-290]; Piyush, however, suggests that AI is already a democratizing force with immediate impact, saying that although the question is not easy to answer, the relevant timeline is “now” [278-279]. The clash between a stance of uncertainty and a claim of present-day relevance was unexpected.
Overall Assessment

The discussion revealed several substantive disagreements: (1) the timing and possibility of AI surpassing human intelligence, (2) the degree of autonomy appropriate for AI systems versus the necessity of human oversight and data quality, (3) the magnitude of AI’s impact on employment and whether human skills will become obsolete, and (4) whether AI can ever generate genuine creativity. While participants shared a common optimism about AI’s potential, they diverged sharply on its future trajectory and the safeguards required.

The level of disagreement is moderate to high. The disagreements span technical implementation (autonomy vs data quality), socio-economic forecasts (job displacement, resume relevance), and philosophical views on creativity. These divergences suggest that consensus on policy, governance, and investment priorities will require careful negotiation, especially in areas of AI governance, capacity building, and safeguarding human rights.

Partial Agreements
Both aim to improve safety reporting through AI; Naveen focuses on a platform‑wide AI‑first transformation with many use cases [5-9], while Speaker 1 demonstrates a specific observation‑reporting tool that auto‑fills forms from photos or voice [22-26][30-36][46-58].
Speakers: Naveen GV, Speaker 1
AI-first SaaS transformation for EHS, with 75 use cases and autonomous agents (Naveen GV)
AI observation reporting that auto-fills hazard forms from photos or voice input (Speaker 1)
Both seek universal AI access; Speaker 4 describes a GeM-based government marketplace and free AI-readiness training portals to reach all citizens [379-383][332-337], whereas Piyush emphasizes AI as a tool that enables individuals in tier-3 towns to learn and launch solopreneur ventures [384-388].
Speakers: Speaker 4, Piyush Nangru
Government digital-skilling initiatives and marketplaces aim to bring AI tools to all citizens, bridging urban-rural gaps (Speaker 4)
AI acts as a democratizing tool, enabling self-learning and solopreneurship for rural and economically backward communities (Piyush Nangru)
Both stress safeguarding human values in AI adoption; Ashish focuses on ethical, responsible AI use and a shift to applied intelligence in education [224-230][301-339], while Shweta emphasizes preserving originality, creativity and cultural identity as AI becomes pervasive [173-176][201-204].
Speakers: Ashish Gupta, Shweta Chaudhary
Shift from knowledge to applied intelligence; need for ethical, responsible AI use in learning (Ashish Gupta)
Human intelligence will continue to supersede AI; preserving humanness is vital (Shweta Chaudhary)
Takeaways
Key takeaways
Benchmark Gensuite is transitioning its 30-year EHS SaaS platform to an AI-first solution, with ~75 use cases and autonomous agents that can auto-fill hazard reports from photos or voice.
AI agents act as digital co-workers: they accelerate data capture and analysis but still require human validation and contextual follow-up.
Human strengths (creativity, cognition, and culture) are viewed as the enduring advantage over AI, especially for problem-solving and design.
Education must shift from pure knowledge acquisition to applied intelligence, ethical AI use, and creativity-driven learning; platforms like ENCODE aim to deliver personalized, AI-guided learning pathways.
AI is positioned as a democratizing tool that can empower rural, economically backward, and under-skilled populations through self-learning, solopreneurship, and government digital-skilling initiatives.
There is widespread uncertainty and fear about AI surpassing human intelligence and its impact on jobs, prompting calls for continuous human-AI collaboration and responsible governance.
Resolutions and action items
Benchmark Gensuite will prioritize development of autonomous AI agents to make the platform fully agentic within the next year.
The presenters invited attendees to visit their booth for personalized discussions on AI implementation.
Partnerships were announced between the AI safety platform, educational entities (e.g., ENCODE, Nimbus, MEC Connect) and government initiatives to integrate AI-driven learning and skilling programs.
Commitment to continue ethical, responsible AI training for educators and students, leveraging existing government digital-skilling portals.
Plans to conduct further product demos and formalize MOUs with academic and industry partners.
Unresolved issues
Exact timeline when AI might surpass human intelligence and the implications for employment.
How to scale AI literacy and training to the entire Indian population (≈140 crore people), especially those without internet access or formal education.
Mechanisms to ensure data quality and continuous human-AI feedback loops for reliable safety predictions.
Specific curriculum changes needed in schools and universities to embed AI, creativity, and ethics effectively.
Details on how tax remittance for Indian expatriates should be structured to support the national economy (audience query).
Methods to systematically build confidence in students from under-privileged backgrounds.
Suggested compromises
Position AI as an augmenting tool rather than a replacement, maintaining human oversight for context and ethical decisions.
Combine AI-driven automation with human-led validation (e.g., follow-up questions in hazard reporting, 5-Why analysis).
Promote a balanced narrative that acknowledges AI’s efficiency while emphasizing the irreplaceable value of human creativity, cognition, and cultural insight.
Encourage collaborative learning environments where AI provides personalized guidance, but educators focus on fostering creation and application skills.
Adopt a phased rollout of AI education, starting with foundational exposure in schools, followed by deeper integration in higher education and vocational training.
Thought Provoking Comments
“Resumes will die by 2030. The only skill that will remain extremely important is design and creativity. The workforce of the future must be able to collaborate with machines, not compete with them, and continuously adapt without fear.”
This bold prediction directly challenges the conventional belief that existing professional credentials will stay relevant, shifting the focus from technical skills to uniquely human creative abilities.
It pivoted the conversation from a product‑centric demo to a broader societal debate about the future of work. Subsequent speakers (e.g., Piyush Nangru, Ashish Gupta, and the audience) expanded on the idea, discussing skill shelf‑life, the need for continuous learning, and the role of creativity as a differentiator.
Speaker: Speaker 2 (the bumblebee metaphor speaker)
“We can scan a QR code or upload a photo, and the AI agent (Jenny AI) automatically fills the safety observation form, even asking follow‑up questions when context is missing.”
Demonstrates a concrete, low‑friction workflow that removes manual form‑filling, especially for non‑technical or non‑English‑speaking workers, illustrating AI’s potential for inclusive safety reporting.
Set the technical foundation for the rest of the discussion, leading participants to explore multilingual voice input, ergonomic risk detection, and autonomous compliance – each building on this initial use‑case.
Speaker: Speaker 1 (demonstrator of the safety platform)
“The AI can listen to a worker’s description in Hindi, transcribe it, and populate the structured safety form without the worker needing to know the corporate terminology.”
Highlights AI’s ability to bridge language and literacy gaps, expanding accessibility beyond the demo’s visual input scenario.
Prompted the audience to consider broader inclusion challenges and inspired later remarks about AI democratizing education and training for rural or under‑served populations.
Speaker: Speaker 1
“RISC‑AI processes every record across programs to surface patterns, precursors and heat‑maps, enabling predictive insight into emerging risks.”
Introduces a macro‑level analytical layer that moves from isolated incident reporting to enterprise‑wide risk intelligence, adding strategic depth to the conversation.
Shifted the dialogue from operational automation to strategic foresight, leading participants to discuss how AI can inform preventive actions and policy decisions.
Speaker: Naveen GV
“Creativity, cognition and culture are the three pillars that define human capital; coding is now table‑stakes, what matters is how we apply it.”
Frames the debate in terms of enduring human attributes rather than specific technologies, reinforcing the earlier bumblebee claim while adding cultural nuance.
Reinforced the panel’s consensus that AI will not replace humans but will amplify these three pillars, prompting further discussion on multilingual contexts and regional diversity.
Speaker: Piyush Nangru
“AI is only as good as the data fed into it; poor data yields unreliable results. Continuous engagement is required, unlike a one‑off software install.”
Challenges the simplistic view of AI as a plug‑and‑play solution, emphasizing data quality, governance, and ongoing human oversight.
Tempered the earlier enthusiasm, leading to a more balanced view that highlighted the need for ethical frameworks and human‑in‑the‑loop governance.
Speaker: Speaker 4 (public administration perspective)
“The shelf‑life of hard skills is shrinking from decades to a few years; we must move from ‘learning’ to ‘making’ and applying knowledge.”
Provides a concrete metric that underscores the urgency of re‑thinking education and workforce development in the AI era.
Steered the conversation toward actionable educational reforms, prompting Ashish Gupta and others to discuss project‑based learning, AI‑enabled personalized assessment, and the need for rapid up‑skilling.
Speaker: Piyush Nangru (response to audience timeline question)
“AI can analyze a video of a manual material handling task and automatically flag ergonomic risks that only a certified ergonomist could detect.”
Extends the AI use‑case from safety compliance to health ergonomics, showing cross‑domain applicability and the potential to replace scarce specialist expertise.
Opened a new thread about AI augmenting specialist roles, leading to discussion on democratizing expert knowledge in remote or underserved sites.
Speaker: Speaker 1
“Education must shift from knowledge acquisition to cognition – the ability to create, apply, and solve problems – and AI should be the tool that enables this shift.”
Synthesizes the multiple strands of the discussion into a clear pedagogical vision, linking AI, creativity, and the future of learning.
Served as a concluding turning point that unified the technical demos, philosophical debates, and policy concerns into a single actionable narrative for the audience.
Speaker: Shweta Chaudhary (closing remarks)
“The government’s digital‑skilling portals and AI readiness programs are essential, but schools still lack the infrastructure (labs, AI curriculum) to make AI education effective.”
Brings a policy‑level perspective, identifying systemic gaps that could hinder the optimistic scenarios presented earlier.
Prompted a realistic discussion about implementation challenges, leading to suggestions about public‑private partnerships and the need for AI labs in schools.
Speaker: Ashish Gupta
Overall Assessment

The discussion began with a concrete product demonstration that showcased AI-driven safety reporting. Early technical insights (photo/voice input, multilingual support) established a foundation for broader speculation.

A pivotal moment arrived when Speaker 2 declared that resumes would become obsolete and that creativity would be the sole enduring skill, which reframed the dialogue from operational efficiency to existential questions about work, education, and human identity. Subsequent comments from Piyush, Ashish, and the public-administration voice deepened this shift, introducing cultural, ethical, and policy dimensions. The introduction of RISC-AI and the ergonomic video analysis expanded the scope from individual incidents to enterprise-wide risk intelligence, while the audience’s timeline question forced the panel to confront the rapid erosion of hard-skill relevance.

Throughout, each thought-provoking remark either opened a new thematic avenue (e.g., data quality, democratization of expertise, education reform) or reinforced the emerging consensus that AI will augment, not replace, human creativity, cognition, and culture. The final synthesis by Shweta Chaudhary tied these threads together, steering the conversation toward actionable educational and policy strategies. In sum, the identified comments acted as catalysts that repeatedly redirected the conversation, deepened its analytical layers, and ultimately shaped a narrative that balances AI’s transformative potential with the irreplaceable value of human ingenuity.

Follow-up Questions
When will AI surpass human intelligence? Is there a timeline for AI becoming better than humans?
Understanding the timeline helps stakeholders plan for workforce transitions, policy making, and educational curriculum adjustments.
Speaker: Audience (unidentified participant)
How can we effectively train the entire Indian population (approximately 140 crore people), including parents, young children, and non‑tech‑savvy individuals, to use AI responsibly?
Massive AI literacy is essential to avoid digital divide, ensure equitable access, and prevent misuse or mistrust of AI technologies.
Speaker: Audience (unidentified participant)
What systematic approach is needed to ensure government AI initiatives (e.g., the GeM marketplace) reach, are trusted by, and benefit under-served and rural communities?
Effective rollout and trust-building are critical for inclusive adoption of AI services across diverse socio‑economic groups.
Speaker: Audience (unidentified participant)
How should AI education be integrated into school curricula, including the required infrastructure (AI labs) and teacher training?
Early AI education builds foundational skills, prepares future talent, and ensures responsible use from a young age.
Speaker: Ashish Gupta
What frameworks and guidelines are needed to ensure ethical and responsible use of AI, especially concerning privacy and generated content?
Ethical safeguards protect individuals’ rights, maintain public trust, and comply with emerging regulations.
Speaker: Ashish Gupta
How can we address widespread fear and anxiety about AI while preserving human originality and unique qualities?
Mitigating fear is necessary for smoother adoption and for leveraging human creativity as a competitive advantage.
Speaker: Speaker 4 (public administration representative)
In what ways can AI be used to quickly assess individual skill gaps and match unemployed or under‑employed populations with appropriate jobs?
Targeted AI‑driven skill mapping can reduce unemployment, improve social equity, and support economic growth.
Speaker: Speaker 3 (public policy perspective)
How does the current education system impact confidence levels of students from non‑urban or under‑privileged backgrounds, and how can AI‑enabled learning improve it?
Confidence influences learning outcomes; understanding AI’s role can help design interventions that boost self‑efficacy.
Speaker: Audience (unidentified participant)
How can the accuracy and contextual awareness of AI agents like ‘Jenny AI’ be improved when analyzing images that lack full situational information?
Better context handling reduces false positives/negatives, increasing trust and effectiveness of AI‑assisted safety reporting.
Speaker: Speaker 1 (demo presenter)
What are the best practices for implementing robust multilingual support (e.g., Hindi) in AI‑driven observation reporting tools?
Multilingual capability expands accessibility for diverse workforces, ensuring inclusive safety reporting.
Speaker: Speaker 1
How can AI‑generated corrective and preventive actions be aligned with the established hierarchy of controls and regulatory compliance frameworks?
Alignment ensures that AI recommendations are legally sound, practically feasible, and prioritize safety effectively.
Speaker: Speaker 1
What strategies are needed to scale ergonomics analysis (Ergo AI) across varied industrial settings and different types of manual tasks?
Scalable ergonomics AI can reduce musculoskeletal injuries industry‑wide, improving worker health and productivity.
Speaker: Speaker 1
How can risk‑trend visualization and predictive modeling (RISC‑AI) be validated and refined to provide reliable early‑warning signals?
Validated predictive intelligence enables proactive risk mitigation, potentially averting large‑scale incidents.
Speaker: Speaker 1
In what ways can AI act as a democratizing tool to empower rural entrepreneurs and solopreneurs in tier‑3 towns and villages?
Empowering rural innovators can bridge economic gaps, foster local entrepreneurship, and stimulate inclusive growth.
Speaker: Piyush Nangru
What models of partnership between academia, industry, and government can sustainably advance AI‑enabled education and skill development?
Collaborative ecosystems ensure resources, expertise, and policy align to deliver scalable, future‑ready education.
Speaker: Multiple panelists (e.g., Piyush Nangru, Ashish Gupta, Shweta Chaudhary)

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Building the AI-Ready Future: From Infrastructure to Skills

Session at a glance: Summary, keypoints, and speakers overview

Summary

The panel opened with Thomas Zacharia framing the session as a discussion on “building AI readiness from compute to capability,” stressing that AI extends far beyond GPUs and includes the full stack from PCs to edge devices [6-9][11-12]. He introduced the U.S. Department of Energy’s Genesis Mission, noting that the United States spends roughly a trillion dollars annually on R&D, of which 20-30 % is government-funded, and that the program aims to use AI to accelerate scientific discovery, energy, and national security research [18-22][23-30]. Zacharia explained that the initiative seeks to federate compute and data across national labs, cloud-enabled labs, and public-private partnerships while embedding security, governance, and composable standards into the infrastructure [34-38][46-48].


He highlighted AMD’s contribution through the American Science Cloud, which will run on an MI355 cluster and a Helios rack delivering 2.9 exaflops of AI compute at 220 kW, illustrating the company’s push for high-performance, energy-efficient hardware [48-54][92-94]. Zacharia also stressed the importance of open ecosystems, open-source software, and open standards to enable startups and innovators to build on AMD hardware without vendor lock-in [70-73][76-81].


Paneerselvam M of the MeitY Startup Hub described India’s sovereign AI strategy as a layered effort that requires clear intent, curiosity, and implementation, and he positioned startups as “AI natives” that can improve the nation’s AI readiness quotient for SMEs and larger enterprises [106-108][110-112][113-114]. He reiterated the need for a human-in-the-loop approach while acknowledging that the balance may evolve as agentic AI matures [106][108].


Timothy Robson shifted to the software perspective, noting that AMD’s early supercomputing work, such as the Finnish LUMI system with 12 000 GPUs, enabled multilingual LLM training before ChatGPT’s release [135-144][138-143]. He argued that open-source frameworks like PyTorch and the emerging Triton compiler allow developers to run models on any hardware, supporting AMD’s “day-zero” support for new models such as Qwen3 Coder and DeepSeek without additional integration effort [156-162][204-211]. Robson also promoted the AMD Developer Cloud, which offers free compute hours, pre-built Docker containers, and accelerator-cloud programs to help startups move from proof-of-concept to production while keeping total cost of ownership low [187-196][200-203].


Gilles Garcia added that AI is moving to the edge and “physical AI” for robotics, autonomous networks, and industrial applications, requiring specialized accelerators that are smaller and more power-efficient than traditional GPUs [230-238][240-244]. He cited the Gene01 humanoid built on AMD technology as an example of how compact accelerators can enable real-time perception and actuation without cloud dependence [239-241].


In his closing remarks, Zacharia urged participants to stay curious, to explore both high-performance and low-power AI solutions, and to collaborate across academia, startups, and industry to drive societal change [247-250].


Keypoints


Major discussion points


AMD’s holistic “compute-to-capability” roadmap for sovereign AI – Thomas Zacharia outlines a vision that goes beyond GPUs to a full stack of AI hardware, software and cloud services, emphasizing public-private partnerships such as the U.S. Department of Energy’s Genesis Mission and the American Science Cloud, and showcasing AMD’s exascale achievements (e.g., the MI355-based cluster and the 2.9 exaflop Helios rack) [6-12][15-21][34-38][46-48][53-56][61-66][70-73][84-94][95-99].


Government-driven AI research and national security priorities – The talk stresses that AI acceleration is a strategic national function, linking DOE’s three pillars (discovery science, energy, national security) with the need to federate compute and data across labs, academia and industry, and to embed security-by-design in public-private collaborations [16-23][27-33][40-48][49-52].


Start-ups as the engine of AI readiness and implementation in India – Paneerselvam M highlights the critical role of AI-native start-ups in translating sovereign AI strategies into tangible value for SMEs, stressing curiosity, clear intent, and the need to broaden AI benefits beyond large corporates [106-112][113-114].


Open-source software ecosystem and “day-zero” support as the enabler of AI adoption – Timothy Robson stresses that success now hinges on open, vendor-agnostic software stacks (PyTorch, JAX, Triton) and AMD’s developer resources (Developer Cloud, Docker containers, day-zero model support) that let users run new models on AMD hardware without lock-in [124-131][152-159][206-218][221-229].


Physical AI and edge computing for industry and robotics – Gilles Garcia points to the shift of AI workloads from data-center GPUs to low-power, edge-optimized accelerators for robotics, autonomous vehicles, and industrial systems, underscoring AMD’s “AI anywhere” strategy and the need for dedicated, reliable hardware at the far edge [230-239][242-245].


Overall purpose / goal of the discussion


The session aimed to present and promote a comprehensive, government-backed AI ecosystem, spanning sovereign research, high-performance compute, open-source software, and start-up innovation, to accelerate scientific discovery, national security, and economic growth, while forging a collaborative partnership between AMD, the Indian government (MeitY), and the broader AI community.


Overall tone


The conversation is consistently upbeat, forward-looking, and collaborative. It begins with a formal congratulatory opening, moves into an enthusiastic technical showcase of AMD’s capabilities, shifts to a policy-focused discussion of sovereign AI, then adopts a pragmatic, supportive tone toward start-ups and ecosystem builders, and concludes with a motivational call to stay curious and explore the emerging opportunities. The tone remains optimistic throughout, with occasional shifts from high-level strategic framing to detailed, hands-on encouragement for developers and entrepreneurs.


Speakers

Thomas Zacharia


– Area of expertise: AI strategy, high-performance computing, supercomputing deployment, national AI readiness


– Role/Title: AMD senior executive (speaker)


Timothy Robson


– Area of expertise: Hardware engineering, software development for AI infrastructure, vendor-agnostic AI frameworks


– Role/Title: Hardware engineer turned software specialist at AMD [S1]


Gilles Garcia


– Area of expertise: Physical AI, edge AI for communications, robotics, and industrial applications


– Role/Title: AMD speaker on physical AI


Paneerselvam M


– Area of expertise: Startup ecosystem, innovation management, government-industry partnerships in India


– Role/Title: CEO, MeitY Startup Hub, Ministry of Electronics and Information Technology, Government of India [S4][S5]


Moderator


– Area of expertise:


– Role/Title: Conference moderator


Additional speakers:


(none identified beyond the listed speakers)


Full session report: Comprehensive analysis and detailed insights

The session opened with Thomas Zacharia congratulating the audience on behalf of AMD’s 30 000 employees worldwide, including the 10 000 based in India, and outlining the panel’s purpose – “building AI readiness from compute to capability” [1-6]. He warned against equating AI solely with GPUs, emphasizing that AI spans a full stack from AI-enabled PCs through core data-centre infrastructure to edge deployments [7-10]. Zacharia announced that he would address the sovereign side of AI while his colleague Timothy Robson would cover the enterprise perspective [12-14].


Zacharia introduced the U.S. Department of Energy’s Genesis Mission, a public-private programme launched under the Trump administration to accelerate scientific discovery, energy research and national security through AI [15-23]. He noted that the United States spends roughly a trillion dollars a year on R&D, 20-30 % of which is government-funded, and that the return on investment is diminishing unless AI bridges the gap between hypothesis and outcome [18-22]. The DOE’s three pillars (discovery science, energy, and national security) are supported by a network of 17 national labs, such as Oak Ridge, whose historic role in the Manhattan Project underscores the agency’s dual focus on energy and broader scientific outcomes [23-30][31-33]. Zacharia argued that modern research must shift from hypothesis-driven experiments to rapid AI-augmented analysis, thereby reducing cost and time while enhancing global collaboration [34-40].


He also stressed that any federated compute-and-data platform must be secure by design, incorporating confidential-computing capabilities to protect sensitive research and national-security workloads [251], and he called for composable standards alongside that security-by-design approach to enable trustworthy public-private partnerships [252].


In line with this vision, AMD is contributing the American Science Cloud, a cloud-enabled research platform built on an AMD MI355 GPU accelerator cluster that will host the first exascale-class AI workload: a Helios rack delivering 2.9 exaflops of FP4 AI compute at 220 kW [48-49][92-94]. Zacharia recalled his three-decade career in supercomputing, which included deploying systems with 30 000 NVIDIA GPUs when CUDA was still a novelty, and pointed to AMD’s record of delivering world-fastest systems [50-56]. He stressed that ambitious, energy-efficient projects are possible because governments fund risky, large-scale hardware development [53-56][86-88].


Beyond hardware, Zacharia underscored the necessity of an open ecosystem and robust governance. He advocated for open-source standards and composable infrastructure that prevent vendor lock-in, noting AMD’s commitment to open hardware and software standards that enable innovators to build on any part of the stack [70-73][76-81]. Governance, he clarified, does not equate to regulation; it requires a human-in-the-loop to validate autonomous AI outputs before they are acted upon, safeguarding scientific integrity and national-security concerns [62-68]. He described an autonomous loop comprising roughly 100 000 GPUs powering 100 000 agents, illustrating the scale of the envisioned compute fabric [253].


Paneerselvam M, CEO of the MeitY Startup Hub, presented India’s sovereign AI strategy as a layered, five-tier architecture driven by clear intent, curiosity and concrete implementation [106-108]. He highlighted the summit’s massive response, 267 000 registrations in five days, as evidence of nationwide curiosity and the desire to embed AI across health, education, skilling and other government functions [110-112]. Paneerselvam positioned start-ups as “AI natives” that can raise the AI-readiness quotient of SMEs, arguing that the government’s role is to provide an enabling environment so AI benefits are not confined to large corporates [113-114].


Timothy Robson shifted focus to software, observing that the launch of ChatGPT on 30 November 2022 dramatically accelerated AI adoption and made an open ecosystem indispensable [115-119][123-124]. He recounted AMD’s early involvement in the Finnish LUMI supercomputer, which used 12 000 GPUs to train multilingual large-language models before ChatGPT existed, demonstrating that large-scale, multilingual AI can be built with public-sector foresight [135-144][138-143]. Robson stressed that open-source frameworks such as PyTorch, JAX and the Triton compiler allow developers to run models on any hardware, and that AMD’s day-zero support guarantees code runs on AMD out of the box, optimized and validated [156-159][220-221][254]. Examples of day-zero support include Qwen3 Coder, DeepSeek, and Baidu’s Paddle models [255][256]. This support is delivered through the SGLang runtime, which provides full compatibility with models such as DeepSeek [257]. To lower barriers for start-ups, Robson promoted the AMD Developer Cloud, offering 50-100 free GPU hours, pre-built Docker containers and a seamless path from proof-of-concept to production while maintaining a low total cost of ownership [187-196][197-203]. He distinguished “neoclouds”, smaller, more nimble providers, from hyperscalers, noting their relevance for Indian start-ups seeking flexible, low-cost compute [258].
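The vendor-neutrality Robson describes can be seen in ordinary framework code. As a minimal illustration (not taken from the session), the sketch below relies on the fact that PyTorch’s ROCm build exposes AMD GPUs through the same `torch.cuda` interface used for NVIDIA GPUs, so a device-agnostic script runs unchanged on either vendor’s hardware, or on CPU:

```python
import torch

def pick_device() -> torch.device:
    # PyTorch's ROCm build surfaces AMD GPUs through the same torch.cuda
    # API used for NVIDIA, so this availability check is vendor-agnostic.
    return torch.device("cuda" if torch.cuda.is_available() else "cpu")

device = pick_device()

# A tiny model and input, placed on whatever accelerator (or CPU) is present.
model = torch.nn.Linear(4, 2).to(device)
x = torch.randn(1, 4, device=device)

with torch.no_grad():
    y = model(x)

print(tuple(y.shape))  # (1, 2) regardless of the underlying vendor
```

This same property underpins the day-zero claims: a model expressed in framework code like this, or in Triton kernels (which compile to both NVIDIA and AMD backends), needs no per-vendor port.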


Gilles Garcia broadened the discussion to “physical AI” at the far edge, arguing that AI workloads are moving from data-centre GPUs to specialised, low-power accelerators embedded in robots, autonomous vehicles and industrial plants [230-238]. He cited the Gene01 humanoid, the first robot built on AMD technology, as proof that edge AI can deliver real-time perception, touch and actuation without reliance on cloud connectivity [239-241]. Garcia suggested that India’s burgeoning start-up ecosystem is well placed to adopt these edge solutions, leveraging AMD’s portfolio of compact accelerators that combine high performance with minimal power consumption [242-245].


Across the panel, several points of agreement emerged. All speakers championed an open ecosystem and open standards as essential to avoid vendor lock-in and to foster rapid innovation [70-73][124-128][156-158][235-236][239-241]. They also concurred that start-ups, being AI-native, are pivotal for scaling AI adoption and improving the AI-readiness quotient of SMEs [69-71][106-108][178-186][187-196]. Finally, both Zacharia and Paneerselvam affirmed the need for sovereign, public-private AI infrastructures that integrate government, academia and industry to serve national priorities [16-48][106-113].


Nevertheless, nuanced disagreements surfaced. Zacharia’s vision of a centrally federated national AI cloud (the American Science Cloud) contrasts with Robson’s promotion of lightweight, developer-focused cloud resources for start-ups, and with Garcia’s emphasis on decentralised edge accelerators [16-48][187-196][230-235]. A second tension arose between Zacharia’s insistence on human-in-the-loop governance for autonomous agents [62-65] and Robson’s focus on rapid, open-source deployment that does not foreground mandatory oversight [210-218]. Finally, while Zacharia warned against over-indexing AI on GPUs [7-10], Garcia advocated for specialised, non-GPU edge accelerators, highlighting differing hardware investment priorities [7-10][230-235].


Key take-aways: (i) sovereign AI requires public-private partnerships such as the DOE Genesis Initiative and India’s five-layer model to federate compute, data and secure cloud-enabled labs; (ii) start-ups are essential “AI-natives” that can raise the AI-readiness quotient of SMEs; (iii) AMD is delivering low-cost, ready-to-use compute (Helios rack, Developer Cloud, free GPU hours) and day-zero model support to accelerate adoption; (iv) an open, vendor-agnostic software stack (PyTorch, JAX, Triton, Primus) is critical to avoid lock-in [92-94][262]; (v) governance must retain a human-in-the-loop to ensure safe, responsible AI; and (vi) physical AI at the edge demands specialised, low-power accelerators such as those showcased in the Gene01 humanoid.


Action items: AMD will continue to supply high-performance and edge-optimised hardware, maintain open-source toolchains (including the Primus ecosystem) and day-zero support for emerging Indian-language models, and expand the Developer Cloud for start-ups; the MeitY Startup Hub will deepen its partnership with AMD to accelerate AI uptake among Indian SMEs; both parties will advocate for policies that blend large-scale national compute investments with inclusive, low-cost resources for innovators. Unresolved issues remain around concrete mechanisms for federating compute across labs, detailed governance frameworks for autonomous agents, timelines for India’s sovereign AI architecture, and funding models for large-scale public-private initiatives.


In closing, the speakers collectively emphasized that building a balanced AI ecosystem, spanning compute, software and edge and underpinned by open standards, security-by-design, and inclusive access, is essential to realise the transformative potential of AI for society and industry [247-250].


Session transcript: Complete transcript of the session
Thomas Zacharia

So congratulations to all of you. You should be proud. And I just want to say that on behalf of the 30,000 AMDers worldwide, and particularly 10,000 in India, I just want to congratulate you and thank you for this opportunity to have this discussion. Since we are a small group, I think we’ll keep it informal. And I want to make sure that somebody please keep track of time so that I do justice to my colleagues here on the dais. The topic that I’ve been asked to talk about is sort of building AI readiness from compute to capability. In the field of AI these days, there seems to be an over-indexing of AI and GPUs, when in reality, AI is much broader.

GPU is obviously a significant part. It’s a part of the core infrastructure. But what we do at AMD is to really provide a full suite of AI capability from AI on AI PCs to core infrastructure to all the way out to the edge. And I have my colleague Tim from AMD, so we decided that we’re going to tag team. And so I’m going to focus perhaps a little bit on the sovereign side, and then Tim can focus on the enterprise side. That’s okay with you. So let’s just talk about sovereign AI in practice and exploring the motivators. So this particular slide is something that is created by the Department of Energy in the United States as part of a new initiative that was kicked off by the Trump administration called the Genesis Initiative.

and I had a role to play in terms of trying to support and address in crafting this initiative, and the framing is very simple. If you look at the top line, I don’t know whether this has a pointer, it’s okay. Okay, so the top line, the white line, is funding in the United States for R&D. Today, the United States spends about a trillion dollars a year in R&D. That’s just my involvement. Not all of that is government spending. It’s roughly about, say, 20 to 30% U.S. government and the rest industry. The bottom line is what we consider research output efficiency. So the problems are getting harder, it is getting more challenging, and even though we are trying to tackle really important problems, the sense is that throwing money is not having the same rate of return. This slide basically asks the question: how do we reduce the gap? At least the thesis for the Genesis mission is that you can use AI as a way to accelerate scientific discovery. The Genesis mission has three areas of importance. For people who don’t know about the US Department of Energy: the US Department of Energy is the nation’s largest physical science agency. It operates through 17 national labs, and some of the earliest ones, like Oak Ridge National Laboratory, which I used to lead before joining AMD, came into being during the Manhattan Project.

And Manhattan Project, about 65% of the entire funding of Manhattan Project was at Oak Ridge National Laboratory. And in addition, you know, in fact, I think the Prime Minister mentioned that, about nuclear energy, both the destructive aspect as well as the significant outcomes that came out of that, from nuclear medicine to nuclear navy to nuclear energy. These all came, you can trace back, to Manhattan Project. So the U.S. Department of Energy is not only responsible for energy, but it’s really a science organization. It’s got three priorities. One is just discovery science. The second is energy. And the third is national security. America has a really interesting way of keeping the nuclear arsenal away from the military, in the sense that it is the U.S.

Department of Energy and not the U.S. Department of Defense or Department of War that is responsible for the nuclear arsenal. And the three lab directors, Los Alamos, Livermore, and Sandia, have to certify each year that the arsenal is ready for the President of the United States. So this is a piece of the hypothesis. If you think about research, you can look at the left side. It starts with hypothesis, then you conduct experiments, get the data. And today, you take the data, use AI, machine learning, et cetera, and you get analysis. What you’re trying to do is to make this much faster so that you can have science outcomes coming out, do it at a reduced cost because you cannot throw more and more money at this problem, and enhance global collaboration.

I think there is a genuine interest on the part of the U.S. that this whole premise is not just a U.S. issue. And so I think there will likely be announcements that suggest that countries like Japan and Europe and the UK and others may be part of this overall approach to drive sovereign AI, for those aspects of AI deployment and scaling that are uniquely a government or state function. So, as I mentioned, broadly: scientific discovery, energy and national security. But if you take scientific discovery further to the next step, then you will see healthcare, education, skilling, all these things. Fundamentally, a government function. And this is not an easy task, because if you think about how research is done at these institutions, I mentioned a large fraction of it is in the private sector, a lot of it is done in academia funded by government, and then of course in national labs in the United States; India has its own set of national labs, academia, etc.

So what you need to do is take a look to see: how do you integrate all this data? At least the U.S. Department of Energy operates these large, multi-billion-dollar light sources, neutron sources, specialized scientific experiments. You need to be able to incorporate all these things, so you have to federate the compute and data, you have to have cloud-enabled lab operations, which is not how things are done today. Security and governance by design, especially when you’re thinking about public-private partnerships; even at the enterprise commercial scale, you want to make sure that you have secure computing, you have confidential computing, you can maintain integrity. But also, if you think about national security, you have an additional layer. And then you want composable standards versus infrastructure.

So this particular program was kicked off by Secretary Wright, well, by the President of the United States and then Secretary Wright, in the fourth quarter of last year, and the first announcement was done with Lisa Su, our CEO, because one of the things that they wanted to do was a unique public-private partnership. And so the core infrastructure, which is currently called the American Science Cloud, this program is just being stood up, is going to be run on an MI355 cluster, which is what this entire program that is aimed at driving innovation is going to be run on. And so we are really excited to be a part of this, initially a US and soon an international effort to drive innovation in those areas that are uniquely a government function.

I’ve had a ringside seat in computing for the last 30 years and been responsible for a lot of supercomputing deployments, a dozen or so. The last four or five of them were number one systems in the Top500, each a first of a kind. This is another important thing. Innovation, if you think about AI, AI didn’t happen magically with NVIDIA or AMD. It happened because the US government took the risk to invest in first-of-a-kind systems. So we were the first to deploy 30,000 NVIDIA GPUs when people thought that CUDA was a four-letter word. Now everybody thinks that this is this amazing software, but change comes hard to people. And so I just want… I want you to know that…

particularly all of you who are youngsters, things are going to evolve. If you think about AI, just like the Prime Minister said, it’s just the early stages. So you have to be open and you have to be part of this drive for effective, scalable and impactful AI. Then deep learning came, with this mixed-precision computation, then generative AI, and last year was really agentic AI, and some of us think that this year we’re going to focus increasingly on governance. Governance does not mean regulation; there is a role for regulation by governments, but governance is that, if you want to have agentic systems driving and accelerating innovation, you want to make sure that the output has a person in the loop.

The one way to simply think about it is that if you are researchers here: if you have a professor who’s got a dozen students who are doing research, you don’t let the students just go publish things. There is the professor’s responsibility, there is the peer review committee, etc. So you want that human in the loop before you can update and let this thing drive innovation, while it also allows it to do things that AI does best. So this is how we think about compute to capability, a model of national AI readiness. We want it to rest on talent, talent and the readiness of talent, giving people access to compute and models. Research enablement is key because you want people to operate AI in an environment where you’re questioning things and innovating all the time, as opposed to assuming that what we in the industry are providing you is the only solution.

So I think… If you look at countries that are leading in AI, there is a very strong R&D and innovation foundation that is allowing them to lead, because there are people who are questioning, every time somebody says something, to make sure that it is validated and it’s continuing to innovate. Startup and innovation labs, because you want to take these ideas and start new companies, because many of these new innovations and new technologies may be led by people with new ideas and opportunities. And of course, ultimately, enterprise and public sector adoption. We strongly believe, and again I heard Prime Minister Modi say it, in an open ecosystem and open source, open platforms. If you think about iOS and Android, I find India has a lot of penetration of Android systems, because inherently open systems allow you to innovate without getting locked into vendors.

And so we at AMD have a commitment to make both our hardware infrastructure and our software infrastructure based on open standards so that you can innovate around any part of this infrastructure, can be part of a new startup or new company adding to that. That is also an important way for India to become part of the supply chain and the semiconductor ecosystem, because you don’t have to start with an attempt to go in for two or three nanometers. You can actually do amazing work and be part of leading-edge technology at different form factors. So I mentioned a little bit about how we think about agentic flows and how AI can work. This is simply the way you think about it.

The inner loop is an autonomous loop where AI and agentic AI does what it can do fast; it can operate. If you have 100,000 GPUs, you have 100,000 agents tackling this problem, and it can actually go through the hypothesis-driven experiments and systems. So you can do simulation, campaign-scale coordination, machine-speed execution, etc. But we do not allow it to update the outcome until a human in the loop has had the opportunity to validate, to make sure that we don’t have unintended consequences. Now, how do you build this thing? So this is, if you haven’t gone to the AMD booth, I would encourage you to do so. This is my only plug in this presentation. We spent a ton of money to bring this Helios rack here just so that you can have a sense of, not what this particular rack can do, but a glimpse of what is possible the next year and the year after.
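The autonomous-inner-loop, human-outer-loop pattern described above can be sketched in a few lines of plain Python. Everything here (the function names, the toy scoring rule, the approval threshold) is illustrative and not any AMD API; it only shows the control flow: agents propose and score at machine speed, but nothing is committed without a human decision.

```python
# Illustrative sketch of the agentic flow described above: agents run an
# autonomous inner loop (propose -> simulate -> score), but nothing is
# committed to the record until a human reviewer approves it in the outer loop.

def inner_loop(hypotheses, simulate):
    """Autonomously evaluate every hypothesis at machine speed."""
    results = []
    for h in hypotheses:
        score = simulate(h)  # stand-in for a simulation campaign
        results.append((h, score))
    # Agents may rank and filter, but they never publish results themselves.
    return sorted(results, key=lambda r: r[1], reverse=True)

def outer_loop(candidates, human_approves):
    """Only human-validated findings are committed (the 'professor' step)."""
    committed = []
    for hypothesis, score in candidates:
        if human_approves(hypothesis, score):
            committed.append(hypothesis)
    return committed

# Toy run: the "simulation" just measures hypothesis length, and the
# reviewer only accepts scores above a threshold.
candidates = inner_loop(["h1", "h2-long", "h3"], simulate=len)
published = outer_loop(candidates, human_approves=lambda h, s: s > 2)
```

The design point is simply that the commit step lives outside the agents' loop, mirroring the professor-and-peer-review analogy in the talk.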

So we, in 2007, myself and two of my colleagues, started what is called the Exascale program. And the challenge was to deliver an exascale system for under 20 megawatts. Because if you had just scaled the capability in 2007, it would have taken three to four gigawatts. And we knew that the government was not going to sign off on $4 billion for power, just electricity alone, to run the computer. So we were motivated to drive that. And we delivered that first exascale system, Frontier at Oak Ridge, for less than 20 megawatts. Everybody thought it was crazy, it cannot be done, but there are some things, when you put audacious goals, people rally around and then deliver. This particular rack, in one rack, there are 72 GPUs that will deliver 2.9 exaflops of AI compute, which is FP4, not FP64, just to be very clear.

But for AI capability, you get 2.9 exaflops of compute capability for 220 kilowatts. Right? That, even for somebody who’s been in this field for a long time, is just mind-blowing. This is where we are headed. AI is the fastest adoption of any technology that humanity has introduced; we’ve gone from 1 million active users to 1 billion in a matter of just a couple of years, and we are headed to 5 billion users. So there is a lot of opportunity to innovate in this field, and all of us are going to continue to create these opportunities. As Lisa said, we are entering the Yara scale, so already people are thinking about the next 1000. So let me just say: you can get to zetta scale by just taking 300 of those racks and putting them together, and then it’s another 3x. So I would say in the next 10 years maybe we would be at this 10,000 factor. So the kind of problems that you are thinking about should not be constrained by what you can do today; by the time you figure out the solution for an important problem, the compute will be there.
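The rack-to-zettascale arithmetic in this passage is easy to check. The figures below simply reuse the numbers quoted in the talk (2.9 exaflops of FP4 AI compute and 220 kilowatts per 72-GPU rack, and "300 of those racks"); nothing here is an official AMD specification.

```python
# Back-of-the-envelope check of the numbers quoted in the talk.
RACK_EXAFLOPS = 2.9    # FP4 AI compute per Helios rack, as quoted
RACK_KILOWATTS = 220   # power per rack, as quoted
RACKS = 300            # "taking 300 of those racks"

total_exaflops = RACK_EXAFLOPS * RACKS           # 870 EF, i.e. ~0.87 zettaflops
total_megawatts = RACK_KILOWATTS * RACKS / 1000  # 66 MW

# For scale: the first exascale system (~1 EF at FP64) was delivered for
# under 20 MW, so 300 racks at 66 MW approaching zettascale-class AI
# compute illustrates the efficiency jump being described.
```

So 300 racks land at roughly 870 exaflops of FP4 compute for about 66 megawatts, which is consistent with the speaker's claim that zetta scale is within reach of a few hundred racks.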

That is what we in the industry like to promise you. And I think, advancing national economies, these are the kinds of things where you would be forgiven if you wondered: does AMD do these things, and how prevalent are our compute capabilities? I think Tim is going to tell you that our GPUs and our systems are in every hyperscaler globally, and when it comes to HPC and national priority missions, AMD is the leader. If you listen to President Macron, he referenced Alice Recoque, which is the first AI factory that the French government announced, the CEA announced, which is based on the AMD MI430X, a variant of the MI450 on the right that you see outside.

I will close by saying that a shared path forward is really what we are looking for. I know India is in the early stages and we are really delighted to actually have this conversation. Thank you very much.

Moderator

I’d like to invite our next speaker, Paneerselvam M, CEO of the MeitY Startup Hub at the Ministry of Electronics and IT, Government of India. Dr. Paneerselvam M is a distinguished leader with over two decades of expertise in innovation, management, strategic growth and market development. He’s been instrumental in advancing India’s startup ecosystem and fostering impactful partnerships between the government, industry and entrepreneurs. In his

Paneerselvam M

drawing insights out of this data, and then comes the interface layer, where most of it is going to be really driven by agents, by agentic AI, and of course, as Thomas mentioned, there is always going to be a human-in-the-loop perspective, but as we progress this is going to change as well. So, you know, the two fundamental things that I want to share: one is that the entire transformation in the readiness space for AI is an opportunity for certain change, and the intent needs to be very, very clear, and then comes the curiosity to learn about this a little bit more for each business owner, and then comes the implementation part of it. And start-ups have a very, very critical role to facilitate this, because you are coming in almost as AI natives, working with an understanding of this, and you can really go out and demonstrate value

and help implement the entire readiness, improve the readiness quotient for small and medium enterprises, and ensure that, you know, this is a broad-based growth opportunity for businesses across the country and not limited to just a few of them, right? So there is huge potential, and I think enough has been spoken. The summit itself is a proof of the kind of curiosity. We have had 267,000, you know, registrations, people who have registered in the last five days. An unexpected, overwhelming response, to some extent that we couldn’t really handle it, right? At the same time, it gives us immense pride and excitement for the amount of curiosity among the youngsters in India, across India.

They have travelled here from the length and breadth of the country to understand what AI is going to be, how this is going to impact them and what the opportunities are, and that itself is a fantastic starting point. And as I said, you know, a lot is happening: Indian sovereign models are coming, the tech, the five layers, the infrastructure, the design, you know, all the layers are being worked upon in the Indian context, and we are ready as a nation, we are ready as a government, to facilitate this really disruptive, transformative technology. But the truth is, you know, it’s also important that it populates into all the layers of the society, not just limiting itself to large corporates but also reaching small and medium enterprises. And of course it has already populated well into D2C, to the individual users, and it’s much, much beyond the ChatGPTs of the world.

So with that, I think I once again take the opportunity to thank the entire team from AMD. We have had some interesting conversations, and I look forward to the continued partnership between AMD and the MeitY Startup Hub, because in our perspective, corporates have a huge role to play in the success of startups. Thank you.

Timothy Robson

Thank you. There’s a couple of things that I want you guys to think about as I go through my talk. 30th of November, 2022. The world changed. ChatGPT was launched. And I’m willing to bet that everyone in this room, myself included, what we thought we knew about AI changed. Two years ago, what we thought even after ChatGPT changed. A year ago, what we thought changed. And what I’m hoping is as you leave here, other things would have changed in the last 45 minutes of listening to these talks. Okay, so I’m going to skip through the reason why we need to go through and need compute. But I think one thing that is very, very, very important is things are moving so fast.

And things are moving in a way that we cannot predict, such that the only way that anybody is going to be successful is an open ecosystem. And, you know, both of the gentlemen before me have alluded to this as well, and I’m going to take you through it specifically around software. I mean, everything to do with AI really, I’m a hardware guy, I used to design chips, but everything today is software, right? And I was talking to one of my colleagues and I said, okay, so I’m going to India, I’m going to do all this, we’re going to go through. And I said, is it really going to be, you know, are they going to understand it?

And he said, Tim, India is software. This is what we do. He said, you’re going to be in front of the best people in the world that are going to understand what you want to talk about. So I’m really going to focus on the software side of that. And one of the things that I wanted to do, understanding that we had our esteemed colleague from MeitY here, is we do have lots and lots of experience in this space. And one of the things that I want to highlight is some work that we did with LUMI in Finland. Now, why is this important? So within Europe, almost all the languages are Indo-European, right? If you know a little bit of Greek, if you know a little bit of Latin, if you know a little bit of one of the languages; there are 27 countries in Europe,

so let’s call it 27 languages, and then you have Finland. Finnish is a Uralic language, nothing to do with any other language in Europe: absolutely different construct, different base, different absolutely everything. And so what we found working with the guys in Finland is they were coming to us because they put in this LUMI supercomputer, and they said, okay, so we are a small country in Europe, 5 million native speakers, and you have to take all of this work that’s been done, English, big codex, Spanish, big codex, Hindi, big codex, all of that, to do your training; suddenly you have a language of 5 million people. How do you get that language into your LLM model so that it becomes useful? Now, I’m probably going to get the pronunciation really, really wrong here, okay, but I did actually use ChatGPT to look at the 22 Indian languages, right? So if we look at Bodo, Konkani, Dogri, Sindhi, Nepali, there are less than 5 million people that speak those languages. So how do you get an Indian LLM that caters for everybody, as we’ve seen from Prime Minister Modi, AI for all, that kind of thing? And this is the kind of area where, with MeitY, this is where we would like to work with you guys and be able to bring some benefit of the work that we’ve been able to do. Now remember the first date, 30th of November 2022. This machine was inaugurated, so it was put together, all of the systems were put together, it was all brought up; the chips were made years before this machine was inaugurated, on my birthday, 13th of June 2022, six months before ChatGPT came out. So this machine, with 12,000 GPUs, with the foresight of the Finnish government, was using AMD technology to run AI before ChatGPT came out.

So a lot of people that think that a lot of the stuff from AI has come from a specific area. This again, think of our way of thinking. We were there and we have the ability. We actually did the Bloom 176 billion parameter model. It was an open model made for European languages. So again, we would love to bring this knowledge and use with the Indian ecosystem to make this successful for everybody. I’m not going to spend a lot of time on hyperscalers. They’re obviously an important part of the market. It’s where a lot of the capabilities go into. We’re there. We have tens of thousands of GPUs. We actually have, as Thomas mentioned, we have the Helios system coming here.

Please go and take a look at it. If you like hardware, it’s an interesting piece of kit. But really the idea here is, whether you’re in a hyperscaler or whether you’re in any other area, there is an ability to have a wider ecosystem. And again, inference: so for AMD specifically, it’s not really an AMD pitch, but there was an idea in the market that AMD was inference only. That dates from Q1 2024. That’s two years old. So again, we have to kind of change that thinking, right? That’s older thinking. We actually now, again, are completely open source. There’s a Primus ecosystem, or a Primus tool, for the open source, which enables you to do all of the training that you need to do for all of your Indic languages or for your use cases, which again is completely open.

Enterprise AI. This one I think is an interesting one. I know when I started going out to customers and going out to enterprise customers, the difference in customer knowledge on what AI was, was amazing. You go into one customer and they say, okay, so this is our use case and we’re seeing these kinds of sizes of matrices, so we’re doing these optimizations. And then you go into another customer and you say, what are you doing around AI? And the guy goes, oh yeah, we’re doing Gen AI. Okay, great, yeah, what are you doing with Gen AI? We’re using LLMs. Okay, great, so using LLMs, what do you think? LLMs. And they had no idea, right?

It’s just, we have to do something with AI. And that has changed over the last 18 months and chatbot was something that most people said, okay, that makes sense, I understand chatbot, we can fine tune the model, we can do an internal AI system within the company. And now we’re starting to see with the agentic workflows this entire plethora of different use cases coming through. And so how then do you take it from a research institute or people that actually get onto your accelerator, whether that’s a GPU or a TPU or an FPGA or whatever else? and get it to a stage where actually people within a corporation can use it. And so this is something that has been understood.

And again, no lock-in, open; everything here is something that can be used without having to tie you into one particular area. And actually, I’ll come on to it a little bit later as well. It’s also something that I’ve been very impressed with, the infrastructure that MeitY have put into place. In this case, with the public-private partnership, you have GPUs, you have TPUs, you have Inferentia, you have all of the different types of accelerators available to you within the Indian ecosystem that MeitY have made available to you. I’ll come on to that a little bit more later. But again, the idea here is that whatever the ecosystem is, or whatever the compute that you’re using, whether it’s in the cloud or whether it’s on-prem, you have an ability to give your employees within your enterprise the ability to use that AI assistant or tool.

Neo clouds: so these are the kind of what we call the smaller clouds. You know, they’re not the hyperscalers; they’re a little bit more nimble, they are a little bit more open to doing things a little bit differently. A lot of these guys are doing sort of bare metal and managed Kubernetes services, but it is coming to areas where they’re becoming like APIs, token factories. There’s an ability for these guys to provide you with compute quickly, easily and at reasonable pricing, to enable you in whatever it is you’re trying to do. We find these are the first movers in the market, and again, in the same way that we’re integrated and working with the hyperscalers, we have these relationships with the Neo clouds, and actually we’re working with quite a few of the guys here in India as well to make that available for you. So the whole idea again here is: there is that compute that’s available. Please go out and understand the benefits or the trade-offs between the different types of Kubernetes services that you have out there and get the right solution for you guys.

Now, I’m assuming that most people here are going to be startups. And again, startup is an interesting area, right? So you have a startup, you know what you want to do, you absolutely are laser focused on getting your MVP out there, getting in front of customers, how do you generate some value, how do you generate some revenue? Although that these days is less and less important, it seems, as people get funding even sometimes before a product. But one of the things that you guys have to be sure of is that the compute that you have and the capabilities that you have are capable for the products that you actually have to then go and put into position.

And so this is an area where we understand that proof of concept is very important. And again, I was chatting with the CEO of the MeitY Startup Hub here before; it’s something he was saying, you know, POC to PO. You have to be able to make sure that you understand the technology and how you can take that to market before you can actually go and invest. So we have a couple of different ways that we can help here within the ecosystem. You could actually go on there right now; there’s the AMD Developer Cloud. You can get, I think it’s 50 or 100 hours of free compute. You want to know how AMD works, you know. It’s always going to be dependent on the use case and what you’re trying to do.

But there is a huge TCO advantage, which of course is important for startups. Get onto the Dev Cloud, get it working. We actually provide Docker containers, so that’s everything put into a single Docker. So you can download a Docker and run it, so you don’t have to spend your time and your energy installing all of the software, putting everything together, get everything working. We’ve done all of that for you. Take the Docker down, get your model off of Hugging Face, get your weights off of Hugging Face. Use your own model and do something else. Whatever there is that’s in there, in the open source ecosystem is there and it’s going to work. Give it a go.

Give it a play. And then from that we can take you into our accelerator cloud, which is a little more hands-on: making sure we understand what you’re doing, helping, guiding and assisting you in moving forward. And then, of course, we have the relationships with the industry: try-and-buys, getting you access to the compute, getting you the right solution at the right kind of price. Something else I really want to highlight is day-zero support of models. We announced this: Qwen3 Coder came out last week, day-zero support on AMD. Baidu came out with one of their Paddle models this week, day-zero support on AMD.

What does day zero support mean? Well, it means it’s not the first time we’ve seen this code. It runs on AMD. It’s guaranteed. It’s optimized. A lot of people think that to run something in AI you need a specific GPU; the whole point of day-zero support is that this is absolutely false. Again, with LUMI, pre-ChatGPT in 2022, we were building LLMs for, effectively, Indic-type languages. So the ability is there: if there’s a new model coming out and you want to run it, test it, and see how it works for you, it’s there and it runs out of the box. And if we look at this line in the middle, PyTorch: if you look at the history of PyTorch, there were lots of signatories on PyTorch to make sure it was available for everybody, and AMD was one of them. It mainly comes out of Microsoft and Meta, who did not want to be closed in to a single supplier. So what you’re actually doing with PyTorch is writing Python code, not vendor-specific code. It’s an open ecosystem; that’s the whole point. You don’t want to be tied in: lock-in stifles innovation, openness increases it. So PyTorch came out, and that is the basis of 99% of all the customers I talk to, right?

They’re all writing Python on top of PyTorch. JAX is then coming forward. Triton is a Python-like language specific to GEMM optimization. If you’re getting to the point where you can see the GEMM sizes coming through from your operations and want to do GEMM-level tuning, Triton lets you do that at the compiler level, so you can be completely agnostic of the underlying hardware. The ecosystem and the underlying compute become abstracted away, because Triton lets you run on anyone’s hardware; it’s just a compiler back end for each new architecture. If we look at these models at the bottom here: Prime Minister Modi this week announced the first 12 Indian languages.
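The portability argument above can be seen in a few lines. Code written against the PyTorch API names no vendor, and a ROCm build of PyTorch exposes AMD GPUs through the same `torch.cuda` interface, so the sketch below runs unchanged on NVIDIA, AMD or CPU-only installs. This is a minimal illustration, not code from the talk.

```python
import torch

# Device-agnostic selection: on a ROCm build of PyTorch, AMD GPUs
# are reported through the same torch.cuda API, so this one line
# covers NVIDIA, AMD and CPU-only installs alike.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

x = torch.randn(4, 8, device=device)
w = torch.randn(8, 2, device=device)

# The matmul is dispatched to whichever back end this install was
# built for; the Python source never names a vendor.
y = x @ w
print(tuple(y.shape))  # (4, 2)
```

The same property is what Triton pushes one level lower: the kernel source stays portable Python-like code, and the compiler targets whatever architecture is underneath.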

I can’t wait to get you guys there: fully supported, day-zero. Just to give you an example: DeepSeek. When DeepSeek came out, they did some things that were a little bit special; multi-head latent attention was new. We had day-zero support with DeepSeek. Why? Because we’re one of the main contributors to SGLang. There’s no tie-in to an inference engine here; it’s an open ecosystem. So we were able to come out of the door with better TCO, better performance, better cost, and full support through SGLang on that DeepSeek model, which was the leader of its time, because of our complete commitment to the open ecosystem. Just to give you an idea: you’re walking out of here in 45 minutes with changed ideas; that’s what we’re going for. I did have two minutes, I now have five; I don’t know who bought me the extra time, but I owe you a beer. Okay, so that’s really the end of the pitch.

One thing I would say is we do have a booth here at 5.10. I’m sorry, I’m going to do a little bit of an AMD plug at the end here, but do come by and see us. We actually have some of the neoclouds there, some model creators, vendors and ecosystem partners. Come see, come change your mind, come see what’s available within the ecosystem and the compute that’s available for you. Okay, thank you.

Gilles Garcia

So first of all, I’m Gilles Garcia. I’m French, so we can talk about LLMs for the French language if you want. I’m based in France, but I cover worldwide, and I focus on physical AI for communications, robotics and industrial. We have been talking a lot about AI, and most people think AI means GPUs and big cloud. What we are seeing is a big shift, another piece of change management: AI is moving into the edge, and into the far edge, which is industrial, robotics, vehicles, as well as the networks. For that you need a different type of beast. GPUs are one aspect of it, but you need profoundly different technology, which AMD has as part of the broad portfolio we offer. These technologies need to be able to sense data and to act and react so quickly that there is no time to go back to the cloud.

And so these technologies, which of course will be doing inference, need to be able to take decisions and act safely and reliably without having to rely on the cloud. And so that’s a new change we’re seeing at AMD around physical AI, which will become very, very important for us: how do we take what we have learned in the cloud and make it available in physical AI? Software is a big thing. Full stack, meaning hardware and software together, able to deliver solutions to the customer, is what AMD is aiming for. And so our CEO, Lisa Su, was saying: it’s AI anywhere. And one size does not fit all.

Meaning that if you want to address a robot, you can put a GPU into it, but it will burn far too much power. You need a very dedicated accelerator with a full, open-source software stack that will let this robot perceive, visualize, act, touch, and respond according to its purpose. At CES in early January, Lisa Su brought on stage Gene01, the first humanoid built on AMD technology. That’s just impressive. Everything was done by a startup in Italy to make this humanoid able to sense, visualize, feel when somebody touches it and when it touches something, and act and react very rapidly without having to rely on a centralized source.

So I will not go on longer than that. Physical AI is, by the way, probably something India will have a lot of scope to act in, because GPUs are already there, whereas physical AI is something you will have to create: a lot of things related to medical, autonomous networks, autonomous cars, autonomous plants, industrial. And that’s where I think India will start, with all the startups and the capability to use accelerators that are much smaller than GPUs, all available today in the AMD portfolio. So I will stop here and encourage you to come to the AMD booth, and we can continue the discussion. Thank you.

Thomas Zacharia

Well, so we gave you a lot of information on AI, in four different accents; I think the French guy probably carries the day. But my one message is: stay curious. As all of us have said, things are going to change, and continue to change, at a rapid pace. People talk about so many thousands of GPUs, but that will not be the main thing. You will find there’s a whole lot of interest in providing even more powerful GPUs for the infrastructure, while at the same time providing very lightweight, low-power compute at the edge. So stay curious. From a startup point of view, from a research point of view, from an academic point of view, look for really interesting problems and challenges, and we will deliver the infrastructure that you need, because ultimately it’s these applications that are going to change society and life. That’s all, thank you very much.


Related Resources: Knowledge base sources related to the discussion topics (12)
Factual Notes: Claims verified against the Diplo knowledge base (3)
Additional Context (confidence: high)

“Zacharia introduced the U.S. Department of Energy’s Genesis Initiative, a public‑private programme launched under the Trump administration to accelerate scientific discovery, energy research and national security through AI.”

The knowledge base confirms the existence of a DOE‑led Genesis Mission that mobilises all 17 DOE national laboratories and partners such as Google DeepMind, but it does not specify that the programme was launched under the Trump administration; the launch timing is not detailed in the sources.

Confirmed (confidence: high)

“The DOE’s three pillars—discovery science, energy, and national security—are supported by a network of 17 national labs, such as Oak Ridge, whose historic role in the Manhattan Project underscores the agency’s dual focus on energy and broader scientific outcomes.”

The knowledge base notes that the Genesis Mission involves 17 DOE national laboratories [S11] and that Oak Ridge National Laboratory played a major role in the Manhattan Project, receiving about 65% of its funding [S5], confirming both the lab count and Oak Ridge’s historic significance.

Additional Context (confidence: medium)

“Any federated compute and data platform must be built security‑by‑design, incorporating confidential‑computing capabilities to protect sensitive research and national‑security workloads.”

Sources highlight the importance of secure-by-design ICT procurement and note existing gaps in security-standard implementation [S87], and they also reference confidential-computing features in new hardware offerings such as Fujitsu’s servers [S89], providing additional context for the security-by-design claim.

External Sources (90)
S1
Building the AI-Ready Future From Infrastructure to Skills — Timothy Robson, a hardware engineer who transitioned to software, reinforced the importance of vendor-agnostic developme…
S2
Building the AI-Ready Future From Infrastructure to Skills — – Thomas Zacharia- Gilles Garcia
S3
Advocacy to Action: Engaging Policymakers on Digital Rights | IGF 2023 — By engaging policymakers and parliamentarians, Garcia provides them with evidence of rights violations to support her ca…
S4
Building the AI-Ready Future From Infrastructure to Skills — -Paneerselvam M- CEO of the METI Startup Hub at Ministry of Electronics and IT, Government of India; distinguished leade…
S5
https://dig.watch/event/india-ai-impact-summit-2026/building-the-ai-ready-future-from-infrastructure-to-skills — I’d like to invite our next speaker, Paneerselvam M, CEO of the METI Startup Hub at Ministry of Electronics and IT, Gove…
S6
Keynote-Olivier Blum — -Moderator: Role/Title: Conference Moderator; Area of Expertise: Not mentioned -Mr. Schneider: Role/Title: Not mentione…
S7
Keynote-Vinod Khosla — -Moderator: Role/Title: Moderator of the event; Area of Expertise: Not mentioned -Mr. Jeet Adani: Role/Title: Not menti…
S8
Day 0 Event #250 Building Trust and Combatting Fraud in the Internet Ecosystem — – **Frode Sørensen** – Role/Title: Online moderator, colleague of Johannes Vallesverd, Area of Expertise: Online session…
S9
Building the AI-Ready Future From Infrastructure to Skills — – Timothy Robson- Thomas Zacharia
S10
The Global Power Shift India’s Rise in AI & Semiconductors — – Thomas Zacharia – Rahul Garg – Vivek Kumar Singh
S11
Google DeepMind partners with DOE for AI-driven science — Google DeepMind ispartnering with the US Department of Energy(DOE) to support the White House’s Genesis Mission, a natio…
S12
The fading of human agency in automated systems — In practice, however, being “in the loop” frequently means supervising outputs under conditions that make meaningful jud…
S13
Democratizing AI Building Trustworthy Systems for Everyone — “of course see there would be a number of challenges but i think as i mentioned that one doesn’t need to really control …
S14
Keeping AI in check — Societies should not be forgetful of the fact that technology is a product of the human mind and that the most intellige…
S15
AI-Driven Enforcement_ Better Governance through Effective Compliance &amp; Services — This comment is exceptionally thought-provoking because it addresses the critical tension between AI efficiency and publ…
S16
Driving Social Good with AI_ Evaluation and Open Source at Scale — Audience members repeatedly stress that humans are needed to evaluate prompts, identify system gaps, and craft test case…
S17
Open Forum #33 Building an International AI Cooperation Ecosystem — Participant: ≫ Distinguished guests, dear friends, it is a great honor to speak to you today on a topic that is reshapin…
S18
Leveraging the UN system to advance global AI Governance efforts — Daren Tang:Thank you, Reinhard, and thank you, Doreen, for leading us on this important conversation. Very happy to meet…
S19
U.S. AI Standards Shaping the Future of Trustworthy Artificial Intelligence — <strong>Sihao Huang:</strong> of these agents work with each other smoothly. And protocols are so important because that…
S20
Responsible AI for Shared Prosperity — “The research and development capability, which I was in the first instance, and that was an amazing initiative because …
S21
[Tentative Translation] — –  In order to promote the creation of needs-pull innovation by the government, the government will promote the new Jap…
S22
Contents — Entrepreneurs deliver fresh ideas and rethink commerce. The networking of their innovative skills with established compa…
S23
1 Introduction — Improving the functioning of national and regional innovation ecosystems is a prerequisite for increasing ex…
S24
From Human Potential to Global Impact_ Qualcomm’s AI for All Workshop — All right. How am I doing on time? Maybe I have 10 minutes. So let me talk a little bit on data center. I don’t see the …
S25
Panel Discussion Data Sovereignty India AI Impact Summit — By domestic, which is because in the age of AI, I strongly believe that the sovereign AI compute infrastructure has beco…
S26
Need and Impact of Full Stack Sovereign AI by CoRover BharatGPT — “What raw material is needed for AI?”[9]. “sovereign AI comes to India, we’ll have the control”[56]. “Indian government …
S27
Secure Finance Risk-Based AI Policy for the Banking Sector — Trust is built when systems are predictable, explainable, and accountable. Trust deepens when innovation aligns with pub…
S28
WS #294 AI Sandboxes Responsible Innovation in Developing Countries — Anastasiadou argued that sandboxes are “particularly beneficial for SMEs,” addressing a critical gap in the innovation e…
S29
Can National Security Keep Up with AI? / Davos 2025 — As the conversation concluded, it was clear that the intersection of AI and national security presents a complex landsca…
S30
Revisiting 10 AI and digital forecasts for 2025: Predictions and Reality — Digital networks and AI developments are critical assets for countries worldwide. Thus, they become central to national …
S31
Driving Indias AI Future Growth Innovation and Impact — I don’t think it’s necessarily about security. You know, it’s really about saying how many of those have real use cases?…
S32
Indias AI Leap Policy to Practice with AIP2 — “This is essentially what we provide for startups.”[16]. “And startups become a very, very important player in this game…
S33
Open Forum #70 the Future of DPI Unpacking the Open Source AI Model — Economic | Infrastructure | Development PayPal chose to use open source protocols because it attracts the best talent t…
S34
The Innovation Beneath AI: The US-India Partnership powering the AI Era — This historical analogy brilliantly challenges the current centralized AI paradigm by suggesting that today’s massive da…
S35
HETEROGENEOUS COMPUTE FOR DEMOCRATIZING ACCESS TO AI — Artificial intelligence | Information and communication technologies for development Arun advocates for moving inferenc…
S36
Building the AI-Ready Future From Infrastructure to Skills — The programme’s implementation through the American Science Cloud, powered by AMD’s MI355 cluster, demonstrates public-p…
S37
Sovereign AI for India – Building Indigenous Capabilities for National and Global Impact — The government’s response through the India AI Mission has established a shared compute framework providing access to 38…
S38
UK AI plan calls for AI sovereignty and bottom-up developments — The UK government has launched an ambitiousAI Opportunities Action Planto accelerate the adoption of AI to drive economi…
S39
AI Without the Cost Rethinking Intelligence for a Constrained World — Beyond 131,000 context window, CPU-based solutions with new algorithms can outperform GPU-based systems GPU-based infra…
S40
WS #208 Democratising Access to AI with Open Source LLMs — Developing countries face challenges in implementing open source AI due to limited infrastructure and technical expertis…
S41
Open Forum #82 Catalyzing Equitable AI Impact the Role of International Cooperation — Given the lack of GPUs and data centers in the Global South, new business models need to be developed that allow for sha…
S42
The fading of human agency in automated systems — Crucially, a human presence does not guarantee agency if the system is designed around compliance rather than contestati…
S43
Part 2.5: AI reinforcement learning vs human governance — In contrast, human governance involves learning through historical experience, cultural evolution, and institutional dev…
S44
Driving Social Good with AI_ Evaluation and Open Source at Scale — Moderate disagreement with significant implications. The disagreements reflect deeper tensions between technical efficie…
S45
Keynote-Julie Sweet — This distinction is philosophically profound and practically important. ‘Humans in the loop’ suggests a reactive, compli…
S46
Inclusive AI For A Better World, Through Cross-Cultural And Multi-Generational Dialogue — Diana Nyakundi:Yeah, thanks Fadi. So with regards to opportunities, there are a lot of AI pilot projects that are coming…
S47
AI Impact Summit 2026: Global Ministerial Discussions on Inclusive AI Development — Portugal offers something increasingly rare, agility with stability. A country large enough to scale, yet compact enough…
S48
Driving Indias AI Future Growth Innovation and Impact — So as we step back and look at what are the key elements of what a country and companies need to do, there really are th…
S49
AI That Empowers Safety Growth and Social Inclusion in Action — The discussion revealed tension between framework proliferation and the need for practical implementation guidance. Diff…
S50
Is the AI bubble about to burst? Five causes and five scenarios — Centralised, closed platforms vs. decentralised, open ecosystems. Historically,open systems often win in the long run– …
S51
New plan outlines how India will democratise AI infrastructure — Indiais moving to rebalance access to AI infrastructureas part of a new national push to close gaps in computing power a…
S52
Mind the AI Divide: Shaping a Global Perspective on the Future of Work — A limited number of countries are leading the way in developing compute capacity, while many others are beginning from a…
S53
HETEROGENEOUS COMPUTE FOR DEMOCRATIZING ACCESS TO AI — “as we go from one gig to nine to ten gig … land water and power …”[30]. “defining India’s access to compute, access…
S54
Catalyzing Cyber: Stimulating Cybersecurity Market through Ecosystem Development — However, there are concerns about standards becoming barriers for smaller businesses and new entrants in the digital mar…
S55
High Level Session 2: Digital Public Goods and Global Digital Cooperation — All speakers consistently emphasized that Digital Public Goods must be built on open source principles and collaborative…
S56
From summer disillusionment to autumn clarity: Ten lessons for AI — Evidence continues to mount that more computing power cannot overcome core LLM limits—fragility under adversarial prompt…
S57
Developing capacities for bottom-up AI in the Global South: What role for the international community? — **Amandeep Singh Gill**, UN Tech Envoy, provided the institutional perspective and outlined the Secretary-General’s upco…
S58
AI-Powered Chips and Skills Shaping Indias Next-Gen Workforce — The discussion reveals strong consensus on key strategic directions: comprehensive ecosystem development beyond chip man…
S59
Multi-stakeholder Discussion on issues about Generative AI — It is crucial for individuals to understand how to utilize AI and other technological advancements effectively and respo…
S60
AI: Lifting All Boats / DAVOS 2025 — Brad Smith: In my opinion, just my opinion, but first of all, that your book is great. And there’s another book that …
S61
AI/Gen AI for the Global Goals — Priscilla Boa-Gue argues for the creation of supportive policy environments to foster AI startups. This includes develop…
S62
Indias AI Leap Policy to Practice with AIP2 — Just prior to this I was having a conversation with a large corporate and how they can actually use startups as a cataly…
S63
Building the AI-Ready Future From Infrastructure to Skills — This discussion focused on building AI readiness and capabilities, featuring speakers from AMD and the Indian government…
S64
The Global Power Shift India’s Rise in AI & Semiconductors — -Thomas Zacharia (Dr. Thomas Zakaria): Senior Vice President for Strategic Technical Partnerships and Public Policy at AM…
S65
Can National Security Keep Up with AI? / Davos 2025 — As the conversation concluded, it was clear that the intersection of AI and national security presents a complex landsca…
S66
AI-driven Cyber Defense: Empowering Developing Nations | IGF 2023 — Public-private partnerships play a key role in these collaborations. Public-private partnerships were considered crucia…
S67
Driving Indias AI Future Growth Innovation and Impact — I don’t think it’s necessarily about security. You know, it’s really about saying how many of those have real use cases?…
S68
Indias AI Leap Policy to Practice with AIP2 — “This is essentially what we provide for startups.”[16]. “And startups become a very, very important player in this game…
S69
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Vivek Raghavan Sarvam AI — And I want this. The most important thing that I want people to understand is… just because, and I think that the, you…
S70
WS #208 Democratising Access to AI with Open Source LLMs — Daniele Turra: Sure, actually, I was about to introduce some of the points that might help in that sense in this foll…
S71
Open Forum #70 the Future of DPI Unpacking the Open Source AI Model — Economic | Infrastructure | Development PayPal chose to use open source protocols because it attracts the best talent t…
S72
The Innovation Beneath AI: The US-India Partnership powering the AI Era — This historical analogy brilliantly challenges the current centralized AI paradigm by suggesting that today’s massive da…
S73
CES 2026 shows AMD betting on on-device AI at scale — AMD used CES 2026 to positionAI as a default featureof personal and commercial computing. The company said AI is no long…
S74
Designing Indias Digital Future AI at the Core 6G at the Edge — The convergence of AI and 6G will create a distributed computing fabric that extends far beyond traditional network boun…
S75
Opening of the session — Technology transfer is essential for capacity building in developing countries. The delegation commenced by expressing …
S76
Skilling and Education in AI — A technology company representative highlighted the critical importance of building comprehensive AI infrastructure with…
S77
WS #145 Revitalizing Trust: Harnessing AI for Responsible Governance — Brandon Soloski: Okay, that’s interesting. I hear a little bit of a delay. Good idea. All right. Good afternoon, early…
S78
Building the Next Wave of AI_ Responsible Frameworks & Standards — The panel addressed the fundamental tension between AI’s probabilistic nature and enterprise requirements for determinis…
S79
White House launches Genesis Mission for AI-driven science — Washington prepares for a significant shift in research as the White Houselaunches the Genesis Mission, a national push …
S80
A Global AI in Financial Services Survey — Indeed, Figure 2.17 shows that there seems to be an almost constantly positive relationship between investing in AI and …
S81
The Government’s AI dilemma: how to maximize rewards while minimizing risks? — Emma Inamutila Theofelus:Thank you so much, Robert. And I’m very happy to be on this panel with Neeraj and Mercedes, esp…
S82
AI for equality: Bridging the innovation gap — This comment is strategically insightful because it reframes women’s inclusion from a moral imperative to a business opp…
S83
Government AI investment grows while public trust falters — Rising investment in AIis reshapingpublic services worldwide, yet citizen satisfaction remains uneven. Research across 1…
S84
https://dig.watch/event/india-ai-impact-summit-2026/keynote-jeet-adani — She rises to stabilize, she rises to anchor a world searching for balance and she rises to build systems that are inclus…
S85
Towards 2030 and Beyond: Accelerating the SDGs through Access to Evidence on What Works — The discussion highlighted the transformative potential of AI and other digital technologies in accelerating evidence sy…
S86
AI Algorithms and the Future of Global Diplomacy — This observation sparked a deeper conversation about technological sovereignty and geopolitical risks in AI adoption. It…
S87
Dynamic Coalition Collaborative Session — Wout de Natris highlighted a concerning gap between available security standards and their implementation, noting that m…
S88
Scaling AI for Billions_ Building Digital Public Infrastructure — “Because trust is starting to become measurable, right, through provenance, through authenticity, as well as verificatio…
S89
Keynote by Vivek Mahajan CTO Fujitsu India AI Impact Summit — In the computing domain, Mahajan detailed Fujitsu’s hardware roadmap, beginning with their 2-nanometer ARM-based servers…
S90
IGF 2017 – Best practice forum on cybersecurity — Mr Belisario Contreras, Cyber Security Program Manager at the OAS, commented that cybersecurity is part of the IGF agend…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Thomas Zacharia
8 arguments · 129 words per minute · 2769 words · 1283 seconds
Argument 1
Sovereign AI Infrastructure – Emphasizes the need for a government‑driven, public‑private partnership (U.S. DOE Genesis Initiative) that federates compute, data, and secure cloud‑enabled lab operations to accelerate scientific discovery, energy, and national security.
EXPLANATION
Thomas argues that AI readiness at the national level requires coordinated public‑private effort, integrating massive scientific infrastructure with secure, federated computing resources. This approach is positioned as essential to maintain the return on R&D investment and to drive breakthroughs in science, energy and security.
EVIDENCE
He describes the Genesis Initiative launched by the U.S. Department of Energy, noting its goal to use AI to accelerate scientific discovery and its structure as a public-private partnership that must federate compute, data and cloud-enabled lab operations, incorporate secure and confidential computing, and run on the American Science Cloud built on an MI355 cluster [16-48].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The Genesis Initiative and DOE partnership are described in S1 and S11, illustrating a public-private effort to federate compute, data and secure cloud for science, energy and security. [S1][S11]
MAJOR DISCUSSION POINT
National AI infrastructure through public‑private partnership
AGREED WITH
Paneerselvam M
Argument 2
Governance with Human‑in‑the‑Loop – Calls for AI governance that keeps a person in the loop for validation, ensuring safe, responsible deployment of autonomous AI systems.
EXPLANATION
Thomas stresses that AI systems should not operate autonomously without oversight; a human must validate outputs before they are acted upon. This safeguards against unintended consequences and maintains trust in AI‑driven outcomes.
EVIDENCE
He explains that governance means keeping a person in the loop, using the example of a professor supervising student research and peer-review before publication, to ensure safe and responsible innovation [62-65].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The necessity and challenges of human-in-the-loop oversight are examined in S12, and the need for human evaluation to ensure trustworthy AI is highlighted in S16. [S12][S16]
MAJOR DISCUSSION POINT
Human oversight in AI governance
Argument 3
Human‑in‑the‑Loop Validation – Highlights the necessity of keeping humans involved in the validation loop to prevent unintended consequences and to maintain trust in AI‑driven scientific and commercial outcomes.
EXPLANATION
Thomas reiterates that human validation is essential to avoid accidental harms and to preserve confidence in AI‑generated results. This principle applies across scientific research and enterprise deployments.
EVIDENCE
He repeats the need for a human in the loop, citing the professor-student oversight model as a concrete illustration of validation before AI outputs are released [62-65].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Human validation as a safeguard is discussed in S12, while S16 stresses human evaluation as essential for reliable AI systems. [S12][S16]
MAJOR DISCUSSION POINT
Ensuring AI outputs are human‑validated
Argument 4
AI is broader than GPUs – need a holistic AI ecosystem.
EXPLANATION
Thomas points out that the current discourse over‑emphasizes GPUs as the sole driver of AI, while AI encompasses many other components and layers. He argues that a broader view is required to build true AI readiness.
EVIDENCE
He observes an over-indexing of AI on GPUs, noting that AI is much broader and that GPUs are only a part of the core infrastructure, while AMD provides a full suite of AI capabilities from PCs to the edge. [7-10]
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Zacharia’s distinction between compute and capability and the broader AI stack is noted in S10, and S24 describes data-center AI capabilities that extend beyond GPUs. [S10][S24]
MAJOR DISCUSSION POINT
Broad AI ecosystem beyond GPUs
AGREED WITH
Gilles Garcia, Timothy Robson
Argument 5
Commitment to an open ecosystem and open standards to foster innovation.
EXPLANATION
Thomas stresses that AMD’s strategy is built on openness, both in hardware and software, to avoid vendor lock‑in and enable a vibrant ecosystem of developers and startups.
EVIDENCE
He states that AMD is committed to making both hardware and software infrastructure based on open standards, supporting open source and open platforms so innovators can build without being locked into a single vendor. [70-73]
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
AMD’s pledge to open standards is quoted in S1, and the importance of open, interoperable AI protocols is discussed in S19. [S1][S19]
MAJOR DISCUSSION POINT
Open ecosystem and standards
AGREED WITH
Timothy Robson, Gilles Garcia
Argument 6
Developing talent and research enablement as the foundation for AI readiness.
EXPLANATION
Thomas argues that national AI readiness rests on skilled talent and on providing researchers with environments where they can constantly question and innovate, rather than relying solely on industry solutions.
EVIDENCE
He says AI readiness rests on talent, giving people access to compute and models, and that research enablement is key for continuous questioning and innovation. [66-68]
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Building research capacity and talent as the first pillar of AI sovereignty is emphasized in S20, with S1 also linking skills development to infrastructure. [S20][S1]
MAJOR DISCUSSION POINT
Talent and research enablement
Argument 7
Supporting start‑up innovation labs to translate ideas into new companies.
EXPLANATION
Thomas highlights the importance of creating innovation labs where ideas can be nurtured into startups, which in turn drive enterprise and public‑sector adoption of AI technologies.
EVIDENCE
He recommends starting innovation labs so that new ideas can become companies, noting that many emerging technologies are led by startups and eventually adopted by both enterprise and the public sector. [69-71]
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The role of start-ups in innovation ecosystems and the need for supportive funding are described in S22. [S22]
MAJOR DISCUSSION POINT
Start‑up innovation labs
AGREED WITH
Paneerselvam M, Timothy Robson
Argument 8
Energy‑efficient exascale computing demonstrates sustainable high‑performance AI.
EXPLANATION
Thomas describes the 2007 Exascale program’s goal of delivering an exascale system under 20 MW, emphasizing that sustainable power consumption is essential for large‑scale AI infrastructure.
EVIDENCE
He explains that the Exascale program aimed to deliver a system under 20 MW (instead of the gigawatt levels that scaling would have required) and succeeded with a system using less than 20 MW, showing that audacious goals can be met sustainably. [86-88]
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The exascale program’s sub-20 MW power target and its relevance to sustainable AI compute are detailed in S1. [S1]
MAJOR DISCUSSION POINT
Energy‑efficient exascale systems
Paneerselvam M
4 arguments, 156 words per minute, 534 words, 205 seconds
Argument 1
India’s Sovereign AI Model – Highlights India’s five‑layer sovereign AI architecture and the government’s commitment to enable AI across all societal sectors, ensuring that the initiative is not limited to large corporates.
EXPLANATION
Paneerselvam outlines a comprehensive, five‑layer AI framework that the Indian government is building to serve every segment of society, from large enterprises to SMEs and individual users. He stresses that this model aims for inclusive, nation‑wide AI adoption.
EVIDENCE
He mentions that India is developing a five-layer sovereign AI architecture, that the government is ready to facilitate AI across all layers of society, and that the effort is not confined to large corporations but includes SMEs and individual users, citing the broad registration response to the summit and the ongoing work on all layers of the Indian context [106-113].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Discussion of India’s sovereign AI stack and national AI infrastructure appears in S25 and S26, outlining the five-layer model and its inclusive intent. [S25][S26]
MAJOR DISCUSSION POINT
Inclusive national AI architecture for India
AGREED WITH
Thomas Zacharia
Argument 2
Start‑ups as AI Natives – Argues that start‑ups, being native to AI, are crucial for improving the AI readiness quotient of SMEs and delivering broad‑based economic growth.
EXPLANATION
Paneerselvam claims that startups, having grown up with AI, can demonstrate value quickly and help raise the AI readiness of small and medium enterprises, thereby driving widespread economic benefits. Their role is positioned as essential for scaling AI across the country.
EVIDENCE
He states that startups have a very critical role to facilitate AI adoption, act as AI natives, demonstrate value, and improve the AI readiness quotient for small and medium enterprises, contributing to broad-based growth across the nation [106-108].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The critical role of start-ups for SME AI adoption is highlighted in S22, and S28 discusses AI sandboxes that specifically support SMEs and start-ups. [S22][S28]
MAJOR DISCUSSION POINT
Startups driving AI adoption for SMEs
AGREED WITH
Thomas Zacharia, Timothy Robson
Argument 3
Massive public interest, shown by 267,000 registrations, indicates strong demand for AI education and participation.
EXPLANATION
Paneerselvam points out the overwhelming response to the summit as evidence of widespread curiosity and eagerness among Indian citizens, especially youth, to engage with AI.
EVIDENCE
He reports that 267,000 people registered in the last five days, describing the response as unexpected, overwhelming, and a source of pride and excitement for youngsters across India. [110-112]
MAJOR DISCUSSION POINT
High public engagement
Argument 4
Strategic partnership with AMD and the MeitY Startup Hub will accelerate AI adoption across India.
EXPLANATION
Paneerselvam emphasizes the collaborative relationship with AMD as a key lever for delivering AI capabilities to startups and enterprises, positioning the partnership as central to the nation’s AI roadmap.
EVIDENCE
He thanks the AMD team, says he looks forward to continued partnership with AMD and the MeitY Startup Hub, and notes that corporates have a huge role to play in startup success. [113-114]
MAJOR DISCUSSION POINT
AMD‑METI partnership
Timothy Robson
6 arguments, 167 words per minute, 2753 words, 986 seconds
Argument 1
Compute Access for Start‑ups – Describes AMD’s Developer Cloud, free GPU hours, Docker containers, and “day‑zero” model support that give start‑ups low‑cost, ready‑to‑run compute resources to move from proof‑of‑concept to production.
EXPLANATION
Tim outlines practical resources AMD provides to startups, including a cloud platform with complimentary GPU time, pre‑packaged Docker images, and immediate support for new AI models. These services lower barriers and enable rapid progression from prototype to market.
EVIDENCE
He details the AMD Developer Cloud offering 50-100 free GPU hours, ready-to-use Docker containers that bundle all required software, and “day-zero” support for new models, allowing startups to test and run models out-of-the-box without extensive setup [187-196].
MAJOR DISCUSSION POINT
Low‑cost compute resources for startups
Argument 2
Open‑Source, Vendor‑Agnostic Tools – Stresses that AI success depends on an open ecosystem (PyTorch, JAX, Triton) and day‑zero support for new models, preventing lock‑in and enabling rapid innovation.
EXPLANATION
Tim argues that open, vendor‑agnostic software stacks are essential for AI development, allowing developers to write code once and run it on any hardware. Day‑zero support ensures new models work immediately on AMD platforms, fostering innovation without vendor lock‑in.
EVIDENCE
He highlights the use of open frameworks such as PyTorch, JAX and the Triton compiler, explaining that they let developers write Python code that runs on any hardware, and notes AMD’s contributions that enable day-zero support for emerging models, thereby avoiding lock-in [210-218].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The importance of open frameworks and standards to avoid vendor lock-in is discussed in S19, while S13 addresses democratizing AI through open resources. [S19][S13]
MAJOR DISCUSSION POINT
Open, vendor‑neutral AI software ecosystem
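The "write once, run on any hardware" claim behind open frameworks can be illustrated with a minimal PyTorch sketch (illustrative only, not from the session; PyTorch's ROCm build exposes AMD GPUs through the same "cuda" device string it uses for NVIDIA hardware):

```python
import torch

# Device-agnostic PyTorch: the same code path targets NVIDIA CUDA,
# AMD ROCm (exposed via the "cuda" device string), or falls back to CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

x = torch.randn(4, 4, device=device)
y = torch.nn.functional.relu(x @ x.T)  # runs on whichever backend was selected
print(y.shape, y.device.type)
```

Because the framework, not the application code, dispatches to the vendor backend, the same script moves between hardware vendors without modification, which is the lock-in avoidance the speakers describe.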
Argument 3
Multilingual LLM development using the Lumi supercomputer to serve low‑resource languages.
EXPLANATION
Tim highlights work with Finland’s Lumi supercomputer to adapt large language models for languages with few speakers, including many Indian languages, demonstrating how AI can be inclusive of linguistic diversity.
EVIDENCE
He explains that Finland’s Lumi supercomputer was used to create LLMs for Finnish (a Uralic language) and that similar methods can be applied to Indian languages with fewer than five million speakers, aiming to build an Indian LLM for all languages. [135-144]
MAJOR DISCUSSION POINT
Multilingual LLMs for low‑resource languages
Argument 4
Promotion of Neo clouds and alternative compute providers to offer flexible, cost‑effective AI services.
EXPLANATION
Tim describes Neo clouds as smaller, nimble providers that deliver bare‑metal or managed Kubernetes services, giving enterprises rapid and affordable access to compute beyond the hyperscalers.
EVIDENCE
He notes that Neo clouds are not hyperscalers but provide quick, affordable compute via APIs and token factories, often using bare-metal or managed Kubernetes, and that they are first movers in the market. [176-178]
MAJOR DISCUSSION POINT
Neo clouds for flexible compute
Argument 5
Emphasis on moving from proof‑of‑concept to production, highlighting a clear pathway for startups.
EXPLANATION
Tim stresses that startups need structured support to transition from prototype to marketable product, and that AMD can provide guidance, resources, and validation to ensure technology readiness before large investments.
EVIDENCE
He states that proof-of-concept to product (POC-to-PO) is essential, that startups must understand technology before investing, and that AMD offers hands-on assistance, accelerator cloud access, and industry relationships to facilitate this transition. [184-186]
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Support mechanisms that help start-ups transition from prototype to market are described in S22, which emphasizes entrepreneurship and scaling pathways. [S22]
MAJOR DISCUSSION POINT
POC‑to‑production pathway
Argument 6
Day‑zero support for emerging models ensures immediate compatibility and reduces vendor lock‑in.
EXPLANATION
Tim outlines AMD’s practice of providing out‑of‑the‑box support for newly released AI models, guaranteeing they run on AMD hardware without additional engineering, thereby lowering total cost of ownership and avoiding lock‑in.
EVIDENCE
He lists day-zero support for Qwen3 Coder, Baidu PaddlePaddle, and DeepSeek models, explaining that AMD’s contributions to frameworks like PyTorch enable new models to run immediately on AMD GPUs, offering better TCO and performance without vendor lock-in. [206-221]
MAJOR DISCUSSION POINT
Day‑zero model support
Gilles Garcia
4 arguments, 177 words per minute, 624 words, 211 seconds
Argument 1
Edge‑Centric Accelerators – Argues that AI is moving to the far edge (robots, vehicles, industrial plants) and requires low‑power, dedicated accelerators and a full hardware‑software stack, not just traditional GPUs.
EXPLANATION
Gilles points out that many AI workloads now need to run locally on devices with strict latency and power constraints, demanding specialized accelerators and integrated software. This shift calls for a different approach than data‑center GPU‑centric AI.
EVIDENCE
He states that AI is moving into the far edge (robots, vehicles, industrial plants) and that this requires low-power dedicated accelerators and a full hardware-software stack, rather than relying solely on traditional GPUs [230-233].
MAJOR DISCUSSION POINT
Need for specialized edge AI hardware
AGREED WITH
Thomas Zacharia, Timothy Robson
Argument 2
AMD’s Edge AI Portfolio – Showcases AMD‑based physical AI solutions such as the Gene01 humanoid, demonstrating that AI can run locally with high reliability and low latency.
EXPLANATION
Gilles cites the Gene01 humanoid, built on AMD technology, as evidence that AMD’s edge AI portfolio can deliver perception, visualization and actuation directly on the device without cloud dependence. This exemplifies AMD’s capability in physical AI.
EVIDENCE
He references the Gene01 humanoid, the first robot built on AMD technology showcased at CES, which can sense, visualize, touch and act rapidly without relying on centralized cloud resources [239-241].
MAJOR DISCUSSION POINT
AMD’s demonstrable edge AI solutions
Argument 3
Full‑stack hardware‑software integration is essential for edge AI, ensuring reliable, low‑latency operation without cloud dependence.
EXPLANATION
Gilles argues that moving AI to the far edge requires dedicated accelerators combined with a complete software stack, so devices can act instantly and securely without round‑trips to the cloud.
EVIDENCE
He notes that edge AI must operate with low power, high reliability, and without cloud reliance, requiring a full stack of hardware and software; AMD’s portfolio provides such integrated solutions. [231-235]
MAJOR DISCUSSION POINT
Full‑stack edge AI
Argument 4
AMD’s ‘AI anywhere’ philosophy and diverse product portfolio address varied use‑cases from robots to industrial plants.
EXPLANATION
Gilles highlights AMD’s strategy of offering different AI solutions for different contexts, emphasizing that a one‑size‑fits‑all approach does not work and that AMD’s portfolio can support everything from humanoid robots to industrial automation.
EVIDENCE
He cites Lisa Su’s statement that AI is ‘anywhere’, the principle that one size does not fit all, and the Gene01 humanoid built on AMD technology that can sense, visualize, and act locally without cloud dependence. [236-239]
MAJOR DISCUSSION POINT
AI anywhere across use‑cases
AGREED WITH
Thomas Zacharia, Timothy Robson
Agreements
Agreement Points
Open ecosystem and open standards are essential to foster innovation and avoid vendor lock‑in.
Speakers: Thomas Zacharia, Timothy Robson, Gilles Garcia
Commitment to an open ecosystem and open standards to foster innovation. Open‑Source, Vendor‑Agnostic Tools – Stresses that AI success depends on an open ecosystem (PyTorch, JAX, Triton) and day‑zero support for new models, preventing lock‑in. AMD’s ‘AI anywhere’ philosophy and diverse product portfolio address varied use‑cases from robots to industrial plants.
All three speakers stress that openness, both in hardware and software, enables broader participation and rapid innovation and prevents dependence on a single vendor [70-73][124-128][156-158][235-236][239-241].
POLICY CONTEXT (KNOWLEDGE BASE)
This view aligns with the Digital Public Goods agenda that stresses open-source, interoperable standards to prevent vendor lock-in [S55] and echoes analyses that open ecosystems outperform closed platforms over time [S50].
Start‑ups are critical AI natives that accelerate adoption and drive economic growth.
Speakers: Thomas Zacharia, Paneerselvam M, Timothy Robson
Supporting start‑up innovation labs to translate ideas into new companies. Start‑ups as AI Natives – Argues that start‑ups, being native to AI, are crucial for improving the AI readiness quotient of SMEs and delivering broad‑based economic growth. Compute Access for Start‑ups – Describes AMD’s Developer Cloud, free GPU hours, Docker containers, and “day‑zero” model support that give start‑ups low‑cost, ready‑to‑run compute resources.
Thomas highlights innovation labs, Paneerselvam emphasizes startups as AI natives for SME uplift, and Timothy details concrete low-cost compute resources for startups, all underscoring the pivotal role of startups in AI diffusion [69-71][106-108][178-186][187-196].
POLICY CONTEXT (KNOWLEDGE BASE)
Policy briefs highlight the role of startups in AI diffusion, recommending supportive measures such as financing and regulatory sandboxes [S61] and noting their importance in multi-stakeholder innovation ecosystems [S59].
National‑level sovereign AI infrastructure requires public‑private partnership and coordinated investment.
Speakers: Thomas Zacharia, Paneerselvam M
Sovereign AI Infrastructure – Emphasizes the need for a government‑driven, public‑private partnership (U.S. DOE Genesis Initiative) that federates compute, data, and secure cloud‑enabled lab operations to accelerate scientific discovery, energy, and national security. India’s Sovereign AI Model – Highlights India’s five‑layer sovereign AI architecture and the government’s commitment to enable AI across all societal sectors, ensuring that the initiative is not limited to large corporates.
Both speakers describe large-scale, government-led AI programmes that combine public and private resources to build a sovereign AI stack for scientific, energy and societal goals [16-48][106-113].
POLICY CONTEXT (KNOWLEDGE BASE)
Examples include the US American Science Cloud partnership with AMD [S36] and India’s AI Mission public-private compute framework [S37], reflecting a broader policy trend toward shared sovereign AI infrastructure.
AI readiness requires a broader hardware ecosystem beyond GPUs, including low‑power edge accelerators.
Speakers: Thomas Zacharia, Gilles Garcia, Timothy Robson
AI is broader than GPUs – need a holistic AI ecosystem. Edge‑Centric Accelerators – Argues that AI is moving to the far edge (robots, vehicles, industrial plants) and requires low‑power, dedicated accelerators and a full hardware‑software stack, not just traditional GPUs. Open‑Source, Vendor‑Agnostic Tools – Stresses that AI success depends on an open ecosystem and that GPUs are only one part of the solution.
All three note that focusing solely on GPUs is insufficient; a diverse set of accelerators, especially for edge workloads, is needed for future AI deployments [7-10][230-233][156-158].
POLICY CONTEXT (KNOWLEDGE BASE)
Analyses criticize GPU-centric approaches and propose heterogeneous compute, including CPU-based and low-power accelerators, to democratise AI access [S39][S53].
Similar Viewpoints
Both emphasize that openness in software and standards is essential for AI progress and to avoid vendor lock‑in [70-73][124-128][156-158].
Speakers: Thomas Zacharia, Timothy Robson
Commitment to an open ecosystem and open standards to foster innovation. Open‑Source, Vendor‑Agnostic Tools – Stresses that AI success depends on an open ecosystem (PyTorch, JAX, Triton) and day‑zero support for new models, preventing lock‑in.
Both advocate for a coordinated national AI strategy that blends public and private resources to build sovereign capabilities [16-48][106-113].
Speakers: Thomas Zacharia, Paneerselvam M
Sovereign AI Infrastructure – Emphasizes the need for a government‑driven, public‑private partnership. India’s Sovereign AI Model – Highlights India’s five‑layer sovereign AI architecture and inclusive government commitment.
Both argue that AI development must go beyond data‑center GPUs to include diverse, low‑power hardware for edge applications [7-10][230-233].
Speakers: Thomas Zacharia, Gilles Garcia
AI is broader than GPUs – need a holistic AI ecosystem. Edge‑Centric Accelerators – Argues that AI is moving to the far edge and requires dedicated low‑power accelerators.
Unexpected Consensus
Need for AI capabilities at the edge, from national data‑center initiatives to low‑power devices.
Speakers: Thomas Zacharia, Gilles Garcia
And I have my colleague Tim from AMD, so we decided that we’re going to tag team. … I’ll focus perhaps a little bit on the sovereign side… (implies broader scope). Edge‑Centric Accelerators – Argues that AI is moving to the far edge (robots, vehicles, industrial plants) and requires low‑power, dedicated accelerators.
Thomas, while primarily discussing national-scale compute, also mentions AMD’s full suite of AI capability from PCs to the edge, aligning with Gilles’s focus on edge-centric accelerators. This convergence of high-level policy and low-level hardware was not obvious from the outset [11][230-233].
POLICY CONTEXT (KNOWLEDGE BASE)
Policy discussions on heterogeneous compute emphasize edge deployments and energy-efficient hardware to extend AI services beyond data centers [S53][S39].
Overall Assessment

The speakers converge on four main themes: (1) an open, standards‑based ecosystem; (2) the pivotal role of startups as AI natives; (3) the necessity of sovereign, public‑private AI infrastructure; and (4) the requirement for a diversified hardware stack beyond GPUs, especially for edge deployments.

Consensus is high across technical, policy and economic dimensions, indicating a shared vision that AI readiness depends on openness, inclusive innovation ecosystems, coordinated national strategies, and hardware diversity. This broad alignment strengthens the case for collaborative initiatives that combine government policy, industry resources, and startup agility to accelerate AI adoption.

Differences
Different Viewpoints
Centralized national AI cloud vs decentralized low‑cost compute for startups
Speakers: Thomas Zacharia, Timothy Robson, Gilles Garcia
Sovereign AI Infrastructure – Emphasizes the need for a government-driven, public-private partnership (U.S. DOE Genesis Initiative) that federates compute, data, and secure cloud-enabled lab operations to accelerate scientific discovery, energy, and national security. [16-48] Compute Access for Start-ups – Describes AMD’s Developer Cloud, free GPU hours, Docker containers, and “day-zero” model support that give start-ups low-cost, ready-to-run compute resources to move from proof-of-concept to production. [187-196] Edge-Centric Accelerators – Argues that AI is moving to the far edge (robots, vehicles, industrial plants) and requires low-power, dedicated accelerators and a full hardware-software stack, not just traditional data-center GPUs. [230-235]
Thomas advocates a large, federally funded national AI cloud (American Science Cloud) built on an MI355 cluster to serve strategic scientific and security missions, while Tim promotes a lightweight, cloud-based developer platform offering free GPU hours for startups, and Gilles stresses the need for edge-focused, low-power accelerators rather than centralized data-center resources. The speakers therefore disagree on the optimal scale and deployment model for AI infrastructure. [16-48][187-196][230-235]
POLICY CONTEXT (KNOWLEDGE BASE)
Debates on centralised versus open, decentralized AI ecosystems are documented in analyses of AI platform models, highlighting the long-term advantage of open ecosystems for inclusivity [S50][S51].
Human‑in‑the‑loop governance vs rapid, open‑source deployment without explicit oversight
Speakers: Thomas Zacharia, Timothy Robson
Governance with Human-in-the-Loop – Calls for AI governance that keeps a person in the loop for validation, ensuring safe, responsible deployment of autonomous AI systems. [62-65] Open-Source, Vendor-Agnostic Tools – Stresses that AI success depends on an open ecosystem (PyTorch, JAX, Triton) and day-zero support for new models, preventing lock-in and enabling rapid innovation. [210-218]
Thomas insists that AI systems must always involve human validation before outcomes are acted upon, whereas Tim focuses on providing immediate, open-source toolchains and day-zero model support to accelerate deployment, without emphasizing a mandatory human-in-the-loop step. This reflects a tension between cautious governance and speed-driven openness. [62-65][210-218]
POLICY CONTEXT (KNOWLEDGE BASE)
Scholarly work contrasts formal human-in-the-loop oversight with deeper human agency, noting risks of compliance-only loops and advocating for more substantive governance [S42][S45][S44].
Emphasis on GPUs as core AI hardware vs broader AI ecosystem beyond GPUs
Speakers: Thomas Zacharia, Gilles Garcia
AI is broader than GPUs – Need a holistic AI ecosystem. [7-10] Edge-Centric Accelerators – Argues that AI is moving to the far edge and requires low-power dedicated accelerators, not just traditional GPUs. [230-235]
Thomas points out the over-indexing on GPUs and calls for a full AI stack, while Gilles highlights the need for specialized, non-GPU accelerators for edge applications, suggesting differing views on which hardware should be prioritized in AI strategy. [7-10][230-235]
POLICY CONTEXT (KNOWLEDGE BASE)
Critiques of GPU-centric hardware note supply constraints and propose CPU-centric or heterogeneous solutions as viable alternatives [S39][S40][S53].
Unexpected Differences
Scale of AI investment – massive national exascale projects vs inclusive, small‑scale SME focus
Speakers: Thomas Zacharia, Paneerselvam M
Energy-efficient exascale computing demonstrates sustainable high-performance AI. [86-88] Start-ups as AI Natives – Argues that start-ups, being native to AI, are crucial for improving the AI readiness quotient of SMEs and delivering broad-based economic growth. [106-108]
Thomas highlights ultra-large, energy-efficient exascale systems as the cornerstone of national AI readiness, whereas Paneerselvam stresses building AI capacity through SMEs and startups, suggesting a divergence between focusing on massive flagship projects and grassroots, inclusive development. This contrast was not anticipated given the shared sovereign AI narrative. [86-88][106-108]
POLICY CONTEXT (KNOWLEDGE BASE)
Policy literature contrasts large exascale national programs with calls for democratized, SME-friendly investment models to ensure broader participation [S50][S48][S57].
Overall Assessment

The discussion reveals several points of tension: the appropriate scale and deployment model for AI infrastructure (centralized national clouds vs decentralized startup‑focused compute and edge accelerators), the balance between strict human‑in‑the‑loop governance and rapid open‑source deployment, and differing emphases on hardware priorities (GPUs vs specialized edge accelerators). While participants converge on openness, the importance of startups, and the need for sovereign AI frameworks, they diverge on how best to achieve these goals.

The level of disagreement is moderate: differences are strategic rather than ideological, focusing on implementation pathways. They suggest that policy makers must reconcile large-scale national investments with mechanisms that empower startups and edge deployments, and must embed governance safeguards without stifling the speed of innovation.

Partial Agreements
All three speakers stress the importance of openness, whether in hardware standards, software frameworks, or edge solutions, to avoid vendor lock-in and to enable broad innovation across the AI stack. [70-73][210-218][239-241]
Speakers: Thomas Zacharia, Timothy Robson, Gilles Garcia
Commitment to an open ecosystem and open standards to foster innovation. [70-73] Open-Source, Vendor-Agnostic Tools – Stresses that AI success depends on an open ecosystem (PyTorch, JAX, Triton) and day-zero support for new models. [210-218] AMD’s Edge AI Portfolio – Showcases AMD-based physical AI solutions such as the Gene01 humanoid, demonstrating that AI can run locally with high reliability and low latency. [239-241]
All agree that startups play a pivotal role in scaling AI adoption and that providing them with accessible compute resources and innovation environments is essential. [69-71][106-108][187-196]
Speakers: Thomas Zacharia, Paneerselvam M, Timothy Robson
Supporting start-up innovation labs to translate ideas into new companies. [69-71] Start-ups as AI Natives – Argues that start-ups, being native to AI, are crucial for improving the AI readiness quotient of SMEs and delivering broad-based economic growth. [106-108] Compute Access for Start-ups – Describes AMD’s Developer Cloud, free GPU hours, Docker containers, and “day-zero” model support that give start-ups low-cost, ready-to-run compute resources. [187-196]
Both advocate for a sovereign, government‑led AI framework that integrates public and private resources to serve national priorities. [16-48][106-113]
Speakers: Thomas Zacharia, Paneerselvam M
Sovereign AI Infrastructure – Emphasizes the need for a government-driven, public-private partnership to build national AI capability. [16-48] India’s Sovereign AI Model – Highlights India’s five-layer sovereign AI architecture and the government’s commitment to enable AI across all societal sectors. [106-113]
Takeaways
Key takeaways
Sovereign AI requires government-driven public-private partnerships (e.g., the US DOE Genesis Initiative and the American Science Cloud) to federate compute, data, and secure cloud-enabled lab operations for scientific discovery, energy, and national security.
India is developing a five-layer sovereign AI architecture that aims to bring AI capabilities to all sectors, including SMEs, through coordinated government effort.
Start-ups, being AI-native, are critical for raising the AI readiness quotient of small and medium enterprises and for driving broad-based economic growth.
AMD is providing low-cost, ready-to-use compute resources (Helios rack, AMD Developer Cloud, free GPU hours, Docker containers) and "day-zero" model support to help start-ups move from proof-of-concept to production.
An open, vendor-agnostic software ecosystem (PyTorch, JAX, Triton, open-source tools) is essential to avoid lock-in and enable rapid innovation.
AI governance must retain a human-in-the-loop for validation to ensure safe, responsible deployment of autonomous AI systems.
Physical AI and edge computing are shifting AI workloads to the far edge (robots, vehicles, industrial plants), requiring low-power dedicated accelerators and a full hardware-software stack, exemplified by AMD's Gene01 humanoid and edge AI portfolio.
AMD's exascale achievements demonstrate that ambitious compute goals can be met with efficient power usage, paving the way for future scaling (e.g., zettascale).
Resolutions and action items
AMD will continue to supply compute infrastructure (Helios rack, Developer Cloud) and maintain open-source, day-zero support for emerging models, especially for Indian language models.
A partnership between AMD and the MeitY Startup Hub was reaffirmed to accelerate AI adoption among Indian start-ups and SMEs.
A commitment was made to build AI solutions on open standards and open-source software to enable ecosystem interoperability and avoid vendor lock-in.
AMD will showcase and make available its edge AI portfolio (e.g., Gene01, low-power accelerators) for developers targeting far-edge applications.
Unresolved issues
A concrete framework for federating compute, data, and secure cloud operations across national labs, academia, and industry remains undefined.
Specific processes and tooling for implementing human-in-the-loop governance at scale were not detailed.
An implementation roadmap for India's five-layer sovereign AI architecture, including timelines and responsible agencies, was not provided.
Funding mechanisms and cost-sharing models for large-scale public-private AI initiatives were not clarified.
How to effectively integrate diverse accelerators (GPUs, TPUs, Inferentia, etc.) within the Indian ecosystem was raised but not resolved.
Strategies to ensure widespread AI adoption by SMEs, beyond the availability of compute resources, were not fully addressed.
Suggested compromises
Adopt an open ecosystem approach that balances the need for security and governance with the desire to avoid vendor lock-in.
Combine high-performance exascale compute for research with low-power edge accelerators to meet both centralized and distributed AI workloads.
Leverage public funding for foundational infrastructure while encouraging private-sector innovation and start-up participation within a public-private partnership model.
Thought Provoking Comments
In AI, there seems to be an over‑indexing of AI and GPUs. When in reality, AI is much broader. GPU is obviously a significant part, but we provide a full suite of AI capability from AI PCs to core infrastructure to the edge.
Challenges the common narrative that AI equals GPUs, expanding the conversation to include software, data, edge devices, and end‑to‑end ecosystems.
Set the thematic foundation for the whole panel, prompting later speakers to discuss not just hardware but software stacks, open ecosystems, and edge deployments. It reframed the discussion from a hardware‑centric view to a holistic AI‑readiness perspective.
Speaker: Thomas Zacharia
The Genesis Initiative – using AI to accelerate scientific discovery, reduce R&D costs, and create a federated, secure, cloud‑enabled lab environment that spans national labs, academia, and industry.
Introduces a concrete government‑driven program that ties AI to national priorities (science, energy, security) and highlights the need for public‑private partnership, data federation, and security‑by‑design.
Created a turning point where the conversation moved from abstract AI readiness to concrete policy and infrastructure models. It prompted Paneerselvam and Timothy to reference sovereign initiatives and public‑private collaborations.
Speaker: Thomas Zacharia
Innovation in AI didn’t happen magically with NVIDIA or AMD. It happened because the US government took the risk to invest in first‑of‑a‑kind systems.
Places government investment at the heart of breakthrough AI hardware, countering the narrative that private sector alone drives progress.
Reinforced the earlier point about sovereign AI and gave credibility to the idea that nations must fund ambitious compute projects. It resonated with later remarks about national labs and the need for sustained R&D funding.
Speaker: Thomas Zacharia
Governance does not mean regulation. It’s about keeping a human in the loop so that AI agents can accelerate innovation while ensuring outcomes are validated.
Distinguishes between regulatory constraints and practical governance mechanisms, introducing the concept of “human‑in‑the‑loop” as a safeguard for autonomous AI systems.
Shifted the tone from purely technical capability to ethical and operational responsibility, prompting Timothy to discuss day‑zero support and open‑source tooling that enable transparent, auditable pipelines.
Speaker: Thomas Zacharia
Start‑ups have a very critical role to facilitate AI readiness because they are AI‑natives; they can improve the readiness quotient for SMEs and ensure the technology spreads beyond large corporates.
Highlights the ecosystem role of startups as catalysts for diffusion, adding a socio‑economic dimension to the technical discussion.
Opened a new thread about how government programs can leverage startups, leading Timothy to describe concrete support mechanisms (developer cloud, Docker containers, accelerator programs) for early‑stage companies.
Speaker: Paneerselvam M
ChatGPT’s launch on 30 Nov 2022 changed everything. Things are moving so fast that the only way to succeed is an open ecosystem.
Marks a clear turning point by pinpointing a recent event that accelerated AI adoption and underscores the urgency of openness for adaptability.
Prompted the panel to focus on software openness, interoperability, and community‑driven standards. It set up Timothy’s later discussion of day‑zero support and open‑source stacks.
Speaker: Timothy Robson
Day‑zero support means a model runs on AMD out of the box, with optimized performance—no lock‑in, just open‑source tools like Primus, PyTorch, Triton that abstract the hardware.
Introduces a tangible benefit for developers and startups, bridging the gap between hardware capability and immediate usability.
Provided a practical illustration of the open‑ecosystem promise, encouraging participants to consider AMD’s developer resources. It also reinforced the earlier governance point by showing transparent, reproducible pipelines.
Speaker: Timothy Robson
Physical AI is moving to the edge—robots, vehicles, industrial networks need dedicated low‑power accelerators that can act without round‑trips to the cloud.
Expands the conversation to edge AI, emphasizing latency, reliability, and power constraints, and introduces a new class of hardware beyond traditional GPUs.
Shifted the discussion from data‑center centric compute to distributed, real‑time AI, prompting Thomas’s closing remark about lightweight edge solutions and reinforcing the need for a diversified hardware portfolio.
Speaker: Gilles Garcia
Stay curious. The future won’t be just thousands of GPUs; it will be a mix of powerful data‑center GPUs and lightweight, low‑power edge accelerators.
Synthesizes the multiple strands of the conversation into a forward‑looking call to action, emphasizing continuous learning and balanced investment.
Served as a concluding turning point that tied together hardware, software, governance, and ecosystem themes, leaving the audience with a clear, motivating takeaway.
Speaker: Thomas Zacharia
Overall Assessment

The discussion was driven by a series of pivotal comments that repeatedly broadened the scope from a narrow GPU‑centric view to a holistic AI‑readiness ecosystem. Thomas Zacharia’s opening remarks and the Genesis Initiative framing anchored the conversation in national‑level strategy, while his points on government‑driven innovation and governance introduced policy and ethical dimensions. Paneerselvam’s emphasis on startups added a socio‑economic layer, and Timothy’s focus on the rapid post‑ChatGPT shift and day‑zero support supplied concrete, actionable examples of an open, developer‑friendly ecosystem. Gilles Garcia’s edge‑AI insight further diversified the technical narrative, prompting a final call from Thomas to stay curious and balance data‑center power with edge efficiency. Collectively, these comments redirected the dialogue multiple times, deepened analysis, and aligned participants around the need for open standards, public‑private collaboration, and inclusive growth across hardware, software, and societal dimensions.

Follow-up Questions
How can compute and data be federated across national labs, academia, and private sector to support sovereign AI initiatives?
Integrating diverse data sources and compute resources is essential for accelerating scientific discovery and ensuring secure, collaborative research across government and industry.
Speaker: Thomas Zacharia
What governance and security mechanisms are needed to enable public‑private partnerships for AI while maintaining confidentiality and national security?
Ensuring secure, confidential computing and governance by design is critical for trust and compliance in sovereign AI deployments.
Speaker: Thomas Zacharia
How can low‑resource and regional languages (e.g., Finnish, Bodo, Konkani, Dogri, Sindhi, Nepali) be incorporated into large language models to create effective Indian LLMs?
Building LLMs that understand all Indian languages is vital for inclusive AI services and aligns with national AI‑for‑all initiatives.
Speaker: Timothy Robson
What are the best approaches to move AI research prototypes into enterprise‑ready tools that employees can use within corporations?
Bridging the gap between research and production ensures that AI innovations translate into real business value and adoption.
Speaker: Timothy Robson
How should organizations evaluate trade‑offs between different Kubernetes services (e.g., hyperscalers vs. Neo clouds) for AI workloads?
Choosing the right cloud/Kubernetes platform impacts performance, cost, and agility for startups and enterprises deploying AI.
Speaker: Timothy Robson
How can AMD provide reliable ‘day‑zero’ support for newly released AI models to guarantee out‑of‑the‑box performance on its hardware?
Day‑zero support reduces integration friction for developers and accelerates adoption of new models on AMD GPUs.
Speaker: Timothy Robson
What hardware and software solutions are needed for physical AI at the edge (robots, autonomous vehicles, industrial systems) that are low‑power, reliable, and do not rely on cloud connectivity?
Edge AI requires specialized accelerators and a full software stack to enable real‑time decision‑making in safety‑critical applications.
Speaker: Gilles Garcia
How can an open ecosystem and open‑source tools be fostered to avoid vendor lock‑in and promote innovation across the AI community?
Open standards enable broader participation, interoperability, and faster advancement of AI technologies.
Speaker: Thomas Zacharia, Timothy Robson
What strategies can be employed to improve the AI readiness quotient for small and medium enterprises (SMEs) and startups in India?
Enhancing AI readiness among SMEs expands the economic impact of AI and ensures widespread adoption beyond large corporates.
Speaker: Paneerselvam M
How can talent development and access to compute resources be aligned to build national AI readiness?
A skilled workforce with adequate compute access is foundational for sustained AI innovation and competitiveness.
Speaker: Thomas Zacharia
What frameworks are needed to ensure human‑in‑the‑loop governance for agentic AI systems to prevent unintended consequences?
Human oversight is essential for safe deployment of autonomous AI agents, especially in critical scientific and security contexts.
Speaker: Thomas Zacharia

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

AI for Agriculture: Scaling Intelligence for Food and Climate Resilience

Session at a glance: Summary, keypoints, and speakers overview

Summary

The session, convened by Maharashtra’s government, focused on using artificial intelligence to enhance food security and climate resilience in agriculture [8-10]. Chief Minister Devendra Fadnavis warned that climate volatility, water scarcity, soil degradation and fragile supply chains threaten food systems, and argued that AI can deliver hyper-local advisories, credit scoring and traceable supply chains, but only if built on trusted data and ethical governance [42-48][53-55]. He announced the Maha Agri AI Policy 2025-2029, an open, interoperable, ecosystem-driven framework that has already deployed the Mahavistar platform to over 2.5 million farmers with multilingual advisories, pest alerts and scheme access [20-24][61-63]. The policy also includes the Maha AgEx data-exchange architecture to aggregate diverse datasets for predictive governance, such as early-warning alerts for cotton growers [26][65-66].


Dr Devesh Chaturvedi outlined the national Agri-STEC framework and the launch of Bharatvistar, an integrated AI-based service that consolidates farmer IDs, crop surveys, weather, pest and market information on a single app, with plans to expand language coverage and deliver personalized advice [123-130][136-140][148-152]. He emphasized that farmer IDs, akin to a digital UPI, enable seamless verification and delivery of services, reducing bureaucratic “digital red-tapism” and allowing AI to tailor recommendations based on location, crop and soil data [141-146][149-151].


Panelists stressed inclusive design, noting that most women lack land titles and risk exclusion from data-driven services; therefore, early incorporation of women’s data and feedback loops is essential [227-233][236-240]. Dr Soumya Swaminathan added that AI must augment, not replace, human extension services and called for rigorous evaluation, bias checks and “human-in-the-loop” mechanisms to ensure equitable outcomes for women and marginalised farmers [245-251][255-256].


World Bank Vice-President Johannes Zutt described the Bank’s role in financing innovative AI applications, providing credibility checks and just-in-time technical assistance to public and private actors, citing a Moroccan tomato-watering app as an example [184-190][202-205]. He highlighted the need for open standards, interoperable public infrastructure and capacity building to reach farmers with limited digital literacy or basic smartphones [184-193][199-202].


Shankar Maruwada linked the current AI push to historic agricultural breakthroughs, arguing that open, interoperable “digital rails” can diffuse innovations rapidly while preserving inclusion and sustainability [272-279][304-307]. He illustrated how Mahavistar was designed for illiterate users with feature-phone voice interaction in local dialects, demonstrating a collaborative effort among government, academia and industry to create scalable, low-cost AI services [289-294][298-302].


The discussion concluded with a consensus to move from pilots to platform-scale deployments, to strengthen data governance, gender-responsive design and global-south knowledge exchange, and to showcase solutions at the upcoming AI for Agri 2026 conference in Mumbai [100-103][207-210][321-326].


Keypoints


Major discussion points


Scaling AI in agriculture through state-level policy and platforms – Maharashtra has launched the Maha Agri AI Policy 2025-2029 and deployed AI-powered services such as Mahavistar, which already serves over 2.5 million farmers with multilingual advisories, pest alerts and market information, moving the sector from pilots to full-scale projects[20-27][61-66].


Building trustworthy, interoperable digital public infrastructure (DPI) – The dialogue stressed the creation of a unified farmer-ID system, a statewide agriculture data exchange built on open standards, and the integration of AI with this DPI to deliver personalized, consent-driven advisories while avoiding “digital red-tapism”[68-71][124-133][140-148].


Ensuring gender-inclusive AI solutions – Panelists highlighted that women farmers often lack land titles and digital footprints, risking exclusion from AI services. They called for early incorporation of women’s data, design of AI tools that reduce drudgery, and continuous feedback loops with women’s farmer groups to guarantee equitable outcomes[84-86][224-236][245-255].


Global partnership and South-South knowledge exchange – The World Bank, international development funds, and the AI Impact Summit were presented as mechanisms to share financing, technical assistance, and best-practice AI use-cases across the Global South, positioning India’s experience as a model for other developing regions[74-78][170-176][211-218].


Governance, ethics and human-in-the-loop safeguards – Speakers repeatedly emphasized that AI must be built on trusted data, transparent and auditable algorithms, and must retain human oversight to prevent bias, ensure safety and maintain employment for rural communities[54-56][79-82][185-188][245-248].


Overall purpose / goal


The session was designed to move “from vision to implementation” by outlining how AI can be institutionalized within India’s agricultural ecosystem at population scale, while guaranteeing inclusion (especially of women and smallholders), establishing interoperable and trustworthy data foundations, and fostering collaborative partnerships between central and state governments, international agencies, academia and the private sector[112-121][118-122].


Overall tone


The discussion began with a formal, optimistic tone celebrating policy milestones and the potential of AI for food and climate resilience. As the conversation progressed, it became more technical and problem-focused, addressing data fragmentation, digital literacy gaps, and governance challenges. Mid-session the tone shifted toward caution and inclusivity, stressing gender equity, ethical safeguards, and the need for human oversight. The closing remarks returned to an upbeat, collaborative tone, urging collective action and envisioning a future of widespread, responsible AI impact[9-16][52-56][84-86][245-255][311-317].


Speakers


Vikas Chandra Rastogi – Secretary, Ministry of Agriculture and Farmers Welfare, Government of Maharashtra [S1][S2]


Expertise: Agricultural policy, AI integration in agriculture, climate resilience.


Johannes Zutt – Regional Vice President, World Bank [S3][S4]


Expertise: International development finance, agricultural innovation, AI for development.


Dr. Devesh Chaturvedi – Secretary, Ministry of Agriculture and Farmers Welfare [S5][S6]


Expertise: Digital agriculture strategy, AI-enabled extension services, national agricultural policy.


Dr. Soumya Swaminathan – Chairperson, Dr. M.S. Swaminathan Research Foundation [S7][S8]


Expertise: Agricultural research, women’s empowerment in farming, sustainable agriculture, scientific evaluation of technologies.


Shankar Maruwada – Co-Founder and CEO, EkStep Foundation


Expertise: Open-source digital public infrastructure, AI ecosystem design, interoperable agricultural platforms.


Devendra Fadnavis – Honorable Chief Minister of Maharashtra [S12][S13]


Expertise: State leadership on AI policy for agriculture, climate-smart farming initiatives.


Additional speakers:


Ramesh Chaturvedi – Secretary, Ministry of Agriculture and Farmers Welfare (mentioned in opening remarks)


Expertise: Agricultural administration, policy implementation.


Full session report: Comprehensive analysis and detailed insights

Opening & Theme – Vikas Chandra Rastogi welcomed the participants, introduced the Honourable Chief Minister Devendra Fadnavis and other dignitaries, and framed the session “Using AI for Food and Climate Resilience” as a pivotal moment for Indian agriculture amid climate stress, resource limits and volatile markets [8-13].


Chief Minister’s Vision


– Emphasised AI as essential for food-security, nutrition, farmer incomes and economic stability, warning that climate volatility, falling water tables, soil degradation, fragile supply chains and unpredictable markets are straining food systems [42-47].


– Presented a four-pillar framework for AI in agriculture: (i) transparency, auditability & explainability; (ii) open, interoperable digital infrastructure; (iii) innovation & investment for scaling; (iv) inclusion & gender equity [52-55].


– Announced the Maha Agri AI Policy 2025-2029, an ecosystem-driven, open-interoperable model that has moved from pilots to full-scale projects such as Mahavistar (multilingual, voice-enabled advisories for >2.5 million farmers in Marathi and the tribal language Bhili) and AgriStack (seamless scheme access) [45-48][61-63].


– Described Maha AgEx, a consent-driven, federated data-exchange architecture that aggregates pest, weather, market and soil-health data to enable predictive governance (e.g., early-warning alerts for cotton growers) [26-27][68-71].


– Unveiled a publicly available traceability DPI blueprint for end-to-end visibility across value chains [71-74].


– Highlighted the global AI-use-case call and the release of the AI-for-Agri 2026 compendium on 17 Feb 2026, showcasing deployments from Africa, Asia and Latin America [74-78][115-116].


– Stressed that Agri-2026 is the International Year of Women in Agriculture and reiterated gender-inclusive design as a core pillar [83-86].


– Invited venture capital, impact investors, multilateral development banks, corporate innovators and philanthropic foundations to partner; announced a partnership with the United States and reaffirmed Maharashtra’s role as a partner of the International Development Fund[86-89][95-96][115-119].


Panel Introduction – Rastogi asked Dr Devesh Chaturvedi how central-state collaboration can align AI deployments with the national architecture while preserving state-level flexibility, and how such collaboration can be institutionalised for population-scale impact [118-122].


Dr Devesh Chaturvedi – National Framework


– Outlined the Agri-STEC framework and the launch of Bharatvistar, an integrated AI platform that consolidates farmer IDs, digital crop surveys, weather, pest, market and scheme information on Android and feature-phone interfaces [123-130][136-138].


– Diagnosed “digital red-tapism” caused by fragmented ministry apps and explained that a single platform will provide a “click-of-a-button” or voice-based experience [131-136].


– Described the farmer-ID (digital UPI) that links land, crops, soil-health cards and scheme eligibility, enabling consent-based personalised advisories within 3-6 months [141-148][149-152].


– Reported successful predictive models tested with 3.8 crore farmers using a century of IMD data, and announced plans to expand weather and market forecasts to improve productivity and reduce input costs [154-158].


– Emphasised AI as a complement-not a replacement-to human extension services [159-162].


Rastogi – Mahavistar Feedback Loop – Confirmed that Mahavistar’s feedback mechanism incorporates user input and noted ongoing collaboration with the M.S. Swaminathan Research Foundation on women-farmers’ rights, bio-happiness and nutritional security [165-168][257-264].


Johannes Zutt – Role of Development Partners


– Re-affirmed the World Bank’s long-standing partnership with India/Maharashtra and the need for agile, just-in-time support to enable experimentation, iteration and responsible scaling of AI solutions [166-168][172-179].


– Outlined government responsibilities: AI governance, interoperability, digital-literacy (including low-literacy and feature-phone users), and ensuring scientifically credible advice [184-188].


– Highlighted private-sector creativity (“a thousand flowers”) and cited the Moroccan tomato-watering app that determines water needs from a simple photo [199-204].


– Described the World Bank’s role in financing, providing foundational AI infrastructure and “truth-testing” AI outputs [204-206].


– Stressed that solving AI challenges in India (multilingual, diverse agro-ecologies) yields spill-over learnings for other developing countries and positioned India as a hub for South-South knowledge exchange[210-218].


Dr Soumya Swaminathan – Gender-Equitable AI


– Noted that most women lack land titles (≈ 25 % have joint or sole ownership according to the latest census) and warned that data-driven services could exclude them unless women’s land-ownership data are captured early [227-230].


– Stressed that AI should reduce women’s drudgery, especially in tribal millet-producing regions, and proposed gender-specific impact indicators [235-238].


– Called for clinical-trial-like evaluation of AI tools, including bias detection, risk assessment and continuous feedback loops [239-247].


– Re-affirmed the human-in-the-loop principle to preserve rural employment and contextual judgement; cited the Fisher-Women app (UN Tech-for-Nature award) as an example where gender-responsive design was essential [241-247].


– Urged inclusion of women farmers on advisory committees for co-design and iterative improvement [250-255].


Shankar Maruwada – Historical Analogy & Architectural Vision


– Compared today’s AI push to the Haber-Bosch breakthrough and the diffusion of synthetic fertilisers in the US and China, arguing that India stands at a similar inflection point [272-289].


– Presented open “digital rails” (e.g., the Beckn protocol) as the backbone for AI services, analogous to India’s railway network [304-307].


– Described Mahavistar’s voice-based design for illiterate users on feature phones, a nine-month co-development effort involving government, academia, the World Bank, Google and others [289-302].


– Advocated a minimum-viable-product approach: launch a basic system and iteratively improve data, models and usage [304-307][310-314].


– Set a vision of 100 diffusion pathways by 2030, each created by diverse stakeholders across continents to achieve safe, scalable AI impact [315-319].


Closing – Rastogi thanked the Chief Minister for his visionary address, reaffirmed the Agriculture Department’s commitment to serving over 15 million Maharashtra farmers, and announced the conclusion of the panel discussion [324-326].


Action Items


– Scale Mahavistar to additional regional and tribal languages and expand voice-based advisory capabilities [24-26][61-63].


– Deploy Maha AgEx as a consent-driven data-exchange to support AI model training [68-71].


– Roll out personalised Bharatvistar advisories within the next 3-6 months [149-152].


– Accelerate saturation of farmer-ID and digital crop-survey databases nationwide [140-148].


– Co-develop traceability DPI modules with the United States and the International Development Fund [71-74][95-96].


– Publish and showcase the AI-use-case compendium at the AI for Agri 2026 conference [115-116].


– Embed women’s land-ownership data and gender-responsive design in AI pipelines; institutionalise “human-in-the-loop” governance and clinical-trial-style evaluation [227-230][239-247][258-260].


– Promote open-protocol “digital rails” (Beckn) to ensure interoperability and trust across public and private AI solutions [304-307].


– Mobilise venture capital, impact investors, multilateral development banks and philanthropic foundations to fund agri-tech startups and capacity-building programmes [86-89][194-199].


Session transcript: Complete transcript of the session
Vikas Chandra Rastogi

Mr. Ramesh Chaturvedi, Secretary of Ministry of Agriculture and Farmers Welfare. Sir, please come onto the stage. Our Honourable Chief Minister, Mr. Devendra Fadnavis is here. Good morning, sir, and welcome. May I also invite Mr. Johannes Zutt, Regional Vice President, World Bank, onto the stage, please. Honourable Chief Minister of Maharashtra, Mr. Devendra Fadnavis, Honourable Minister. Shri Ashish Shelar ji, Shri Nitesh Rane ji, our distinguished guests from India and around the world. Very good morning. On behalf of the government of Maharashtra, I welcome you to the session on Using AI for Food and Climate Resilience. Agriculture is at a turning point. Climate change is making farming riskier, resources are limited and markets are changing quickly. However, there is an opportunity.

Digital tools and AI are advancing fast. Our goal is not just to use AI tools. We must build intelligence into our public systems to help everyone. For India, the change is essential. It is the key to food and nutrition security, higher farmer incomes and a stable economy. India has shown that digital systems work when they are open and well-governed. Our next step is to bring AI into this framework in a responsible way. Under the leadership of the Honorable Chief Minister of Maharashtra, the state has launched the Maha Agri AI Policy 2025-2029. This policy uses AI for farm advisory services, market information, data exchange, product traceability, innovation and research, and creating capacities of stakeholders.

We are moving beyond pilots to projects at full scale. Mahavistar is the country's first AI-powered information and advisory services network. Today, Mahavistar is being used by more than 2.5 million farmers to get advisories in the Marathi language, and recently the first tribal language in the country, Bhili, has also been integrated into Mahavistar. AgriStack is helping to bring AI into the market. It is helping farmers to get seamless access to various schemes and services. The Maha AgEx, which is an open, federated and consent-driven architecture for data exchange, is helping us to bring diverse data sets together to get us a big picture. Agriculture is now a key part of the India AI mission. We are proud to work with the government of India to lead this change.

I want to thank the Ministry of Electronics and Information Technology, the Ministry of Agriculture, the EkStep Foundation, the World Bank, the MS Swaminathan Research Foundation, the Gates Foundation and all our partners for their support. It is now my duty to invite our Honourable Chief Minister to the stage. He will share his vision for using AI to strengthen our food systems and protect our climate. After the address of the Honourable Chief Minister, we have a panel discussion with our distinguished panelists. Welcome.

Devendra Fadnavis

A very good morning to all of you. Shri Devesh Chaturvedi ji, Rajesh Agarwal ji, Vikas Rastogi ji, Mr. Johannes Zutt, Srimati Soumya Swaminathan, Shri Shankar Maruwada, my colleagues Shri Ashish Shelar ji and Nitesh Rane ji, and all the dignitaries present here, namaskar and good morning to everyone. It is my privilege to address this distinguished gathering at the India AI Impact Summit and this important session on AI in agriculture. We meet at a very defining moment. Across the world, food systems are under strain. Climate volatility is intensifying. Water tables are falling. Soil health is deteriorating. Supply chains are fragile. And global markets are unpredictable. For countries from the global south, agriculture is not merely an economic sector. It is livelihood, social stability and national security.

India understands this very deeply. And under the visionary leadership of our Honorable Prime Minister Narendra Modi, India has placed digital public infrastructure and responsible AI at the center stage of national development. The India AI mission is about using technology to deliver inclusion, transparency and scale. Today, agriculture must sit at the heart of this mission. Over half a billion Indians depend directly or indirectly on agriculture. Yet, smallholders face fragmented information, rising input costs, climate uncertainty and limited access to credit and markets. Traditional extension systems, however committed, cannot match the scale and the speed required. Artificial intelligence changes this equation. AI can provide hyper-local advisories, credit scoring based on crop intelligence, transparent traceable supply chains, and real-time market advisories.

But let me emphasize, AI is not magic. As the Honorable PM said in his inaugural session, AI must be built on trusted data, ethical governance, and public accountability. Without trust, scale will not happen. Last year, Maharashtra made a very clear and decisive strategic decision. AI in agriculture must not remain confined to demonstrations or pilots. It must reach millions. Under our Maha Agri AI Policy 2025-2029, we adopted a policy-led, ecosystem-driven model built on openness and interoperability. Allow me to share what this has meant in practice. As rightly told by our Secretary, Mahavistar, our AI-powered mobile platform, delivers multilingual personalized advisories, market intelligence, pest alerts, and access to government services, with more than 2.5 million downloads, acting as a digital friend to all these farmers.

This demonstrates one thing very clearly. Farmers are ready for AI when AI is designed for them. AI-based pest surveillance with CROPSAP integration is our mantra. By integrating geospatial analytics with pest surveillance, we have delivered early warnings to cotton-growing farmers, reducing crop vulnerability and finance risk. This is predictive governance in action. The agriculture data exchange is also one thing which is defining this step. We are building a statewide interoperable agriculture data exchange based on open standards and strong data governance. Data must empower farmers, not exploit them. On traceability digital public infrastructure: in today's global markets, transparency is the mantra. We are unveiling a blueprint for a traceability DPI that will ensure end-to-end visibility across value chains, enhancing food safety, export competitiveness and consumer trust. And this is not proprietary.

It is being designed as a replicable public infrastructure model for India and the entire Global South. In partnership with the India AI Mission, the World Bank and Wadhwani AI, the Government of Maharashtra launched a global call for AI use cases in agriculture. The resulting compendium of real-world AI applications in agriculture was released in Delhi on 17th February 2026. This compendium documents successful AI deployments from Africa, Asia, Latin America and beyond. India is convening global knowledge for the benefit of the Global South. As we move towards AI for Agri 2026 in Mumbai, our vision rests on four pillars. AI must be transparent, auditable and explainable. Open and interoperable digital infrastructure, because innovation cannot scale in silos.

Investment and scaling: technology without capital remains just a theory. And inclusion and gender equity is also a mantra. 2026 is the International Year of Women in Agriculture. AI solutions must be designed with women farmers, not merely for them. Maharashtra today presents one of the most compelling agri-innovation ecosystems globally: 150 lakh hectares of cultivated land, diverse agro-climatic conditions, leading agriculture universities and AI research centres, a vibrant start-up ecosystem, a clear regulatory framework, and single-window facilitation for investors. We invite venture capital funds, impact investors, multilateral development banks, corporate innovation arms, and philanthropic foundations to partner with us. And in this spirit, we initiate a global partnership between Maharashtra and the United States to develop and leverage technology to create a future for all.

Maharashtra is a partner of the International Development Fund, co-developing traceability DPI modules, investing in agri-tech startups, supporting digital literacy, especially among women farmers, and building capacity in rural AI ecosystems. When you invest in Maharashtra, you invest in scalable solutions for emerging economies worldwide. Food security, climate resilience and AI governance are deeply connected. Countries that master AI-enabled agriculture will secure farmer incomes and strategic stability.

India has the scale, the DPI and the democratic governance model to demonstrate how AI can be deployed responsibly at population scale. Maharashtra is proud to be the laboratory of that ambition. Friends, this satellite session is a declaration. We will move from pilots to platforms, from fragmented data to interoperable systems, from experimentation to execution, from intention to investment. The Government of Maharashtra stands ready to collaborate with the Government of India, with states, with global institutions, investors, researchers and farmer organizations. Let us ensure that AI becomes a force for

Vikas Chandra Rastogi

Thank you. Thank you, sir, for your visionary address. You always continue to inspire us to aim higher and achieve better. Under your leadership, I can assure you the Agriculture Department will rise to the challenge and serve the aspirations of more than 15 million farmers of the state of Maharashtra. Thank you so much, sir. We will now start the panel discussion in a few moments. Thank you once again. Dr. Devesh Chaturvedi is the Secretary, Ministry of Agriculture and Farmers Welfare. Dr. Chaturvedi leads our national effort in agriculture and farmers' welfare. Mr. Johannes Zutt is the Regional Vice President, World Bank. Mr. Zutt brings a vital global perspective on development and finance from the World Bank. Dr.

Soumya Swaminathan is the Chairperson of the M. S. Swaminathan Research Foundation. Dr. Swaminathan is a global leader in science, a champion for sustainable research and a strong advocate for mainstreaming women farmers' role in agriculture. Mr. Shankar Maruwada is the Co-Founder and CEO of EkStep Foundation. He is a pioneer in building digital public infrastructure that empowers women farmers and people at scale. And I am very proud to say that the Government of Maharashtra and EkStep Foundation together have brought out Mahavistar, which more than 2.5 million farmers are using today to get the advisories and information that they need on a daily basis. The objective of this panel discussion is to move from vision to implementation.

Specifically, we will deliberate on how to institutionalize AI within agriculture systems at scale, how to ensure inclusion, especially of women farmers and smallholders, how to build interoperable, trustworthy and sustainable AI governance ecosystems, and how to strengthen collaboration between the centre, states, global institutions, industry and academia. The session is also an important precursor to the AI for Agri 2026 Global Conference, where we will continue these deliberations in greater operational depth with governments, investors, innovators and development partners. The AI for Agri conference is being held in Mumbai on 22nd and 23rd of February at the Jio World Convention Centre. With this context, let's begin our discussion. My first question is to Dr. Devesh Chaturvedi. Sir, under your leadership, the ministry has taken significant steps in advancing the Digital Agriculture Mission and operationalizing the Agri-Stack framework.

You are laying a strong digital foundation for the sector. As we now look at integrating AI more systematically into agriculture, how do you envision the centre-state collaboration framework, specifically to ensure that AI deployments are aligned with the national architecture while allowing states the flexibility to innovate based on local agro-climatic and socio-economic context? And finally, how can we institutionalize this collaboration to achieve population-scale impact while maintaining interoperability and data trust? Thank you.

Dr. Devesh Chaturvedi

A lot of questions in the same question. So what I'll do is first take you through the initiatives. First of all, we deeply appreciate the leadership taken by Maharashtra, under the leadership of the Honourable Chief Minister, and by its agriculture department. They have done exceptional work in the Digital Agriculture Mission by developing farmer IDs and the digital crop survey. They also launched Mahavistar as a precursor of Bharatvistar. And recently, on the 17th, the Government of India launched one of the first integrated AI-based systems for farmers, Bharatvistar, which presently provides services through an Android-based app as well as through mobile telephony: weather advisories, ICAR-based crop advisories, pest advisories, market information on various agricultural produce traded in the mandis, and lastly the schemes of the Government of India.

Now, why is AI important in agriculture? We started with the digitalization of services: we had DBT, we had online systems where a common person could apply through the common service centres or through mobile apps. But what we felt was that while we had initiated this process to remove bureaucratic red-tapism, we were moving towards a sort of digital red-tapism, because within our ministry different schemes had different apps with different ways of selection, and within the state, too, horticulture had one database of farmers, agriculture had another, animal husbandry had another, and crop insurance had yet another.

So basically a farmer who has to avail so many services was, we felt, getting lost in which app to use. And sometimes it becomes more difficult to avail services or get advisories through online systems than to go to a person and say, okay, tell me how to do it. So the whole idea was that once we have this AI-based system, we have a single platform for different applications and different advisories at the click of a button, or maybe just by voice. That is the whole idea of shifting towards AI-based solutions. So what we have initially, in the first phase of the artificial intelligence system, Bharatvistar (or Mahavistar in Maharashtra), is crop advisories, weather advisories, scheme information (how to apply and the status of an application), and also the mandi rates.

All these have been put on one platform. Presently it is working in English and Hindi, but in the next three to six months we will be extending it to all the Bhashini-supported languages. And the next step is, as we mentioned, that the states are working together with us on the digital public infrastructure. Close to 9 crore farmer IDs have been developed. So what is a farmer ID? You must have read the statement of the Honourable Finance Minister that DPI is the new UPI. The Agri-Stack, which is part of DPI for agriculture, means that each farmer has a unique farmer ID, with the back end holding all the crops the person has sown, the land available to that person, the share of the land, and the soil health card details if a soil health card has been issued.

So with these basic details available on the system, the farmer is empowered through that ID to avail services, because it is already approved by the relevant authorities in the government. The authorities giving the services are not required to cross-verify the credentials of the farmer against the record of rights, or the Girdawari, or whatever it is called in the different states. Every state, and Maharashtra is one of the leading states here, is working together with us to reach saturation of farmer IDs and the crop survey. And once this is there, the AI will further transform into a very, very tailored advisory. A person calls and gives the farmer ID or Aadhaar.

And at the back end, based on consent, we will access the details of where the farmer is from, what crop is being grown, and what the soil health conditions are. Very targeted advisories will then be given, which will be made operational in the next three to six months. So instead of pushing data which may not be of interest to the farmers, very specific, tailored data for that farmer will be available, based on the integration of the digital public infrastructure with Bharatvistar. And the third aspect will come when we do the predictive models. We tried that, and you may remember that in the inaugural session the Google CEO mentioned the predictive model which we ran with about 3.8 crore farmers.

We used 100 years of IMD data and a model to predict the monsoon for the next month and for the next week. That prediction was fairly accurate, and from the feedback we received, farmers did take decisions to sow and to irrigate based on the predictions sent to them. Now we will expand the predictive models to provide more advisories on the market situation and the weather situation, which will help improve farmers' decision-making so that they can increase their productivity and reduce costs. That is the whole idea of AI in agriculture. We hope that more and more farmers will adopt it, and it will not be exactly a replacement but a sort of addition to human extension services, which, because of the resource constraints of each state, are not able to reach all farmers.

The extension machinery, the KVKs, all our state extension machineries, find it very difficult to reach each and every farmer, because we cannot have a person sitting in each village reaching every farmer. But AI, along with digital public infrastructure and the mobile and internet penetration in rural areas, will ensure that that gap is removed and farmers get more and more access to services and advisories. That is the whole idea of centre and state interoperability. I hope I have answered most of the questions.

Vikas Chandra Rastogi

As you rightly mentioned, AI systems are acting like a digital friend of the farmer: they are available at any point of time, through multiple channels, and in a language farmers understand. With the ministry's assistance we were able to get access to multiple images of pests and diseases, and with IIT Bombay we have been able to develop models where farmers can take a picture, find out what pest or disease it is, and then learn what is to be done, based on the knowledge created by agriculture universities and ICAR institutions. So I think there is a great opportunity for us: the national government has the scale, and the states have their own specific skill sets and knowledge. Together, if they combine, I think we can reach out to everybody in the farming sector.

Thank you, sir. I will now move on to Mr. Johannes Zutt, the Regional Vice President of the World Bank. The World Bank has been a long-standing partner to both the Government of India and the Government of Maharashtra; we have multiple projects going on concurrently, as we have had in the past as well. These projects have been aimed at strengthening agriculture systems, climate resilience, and institutional capacity. As we move into an era where AI technologies are evolving at unprecedented speed, how can development partnerships adapt to remain agile and responsive? In particular, how can we structure programs and technical assistance models that provide just-in-time support to central and state governments, enabling them to experiment, iterate, and scale AI solutions responsibly?

Johannes Zutt

It's a pleasure to be here today. We're on the cusp of a major revolution in how support to farmers and agriculture happens. I actually grew up on a farm; I worked on a farm from the ages of 10 to 21. Every hour I wasn't in school, I was at home working on the farm. In some ways it feels paleolithic, because we didn't have computers, we had telephones that were connected to wires, and our ability to get information about what was happening around us was extremely limited. We spent a lot of time trying to find out the things that today you can find out very, very quickly using AI for agriculture. And that's truly revolutionary and empowering for farmers.

But to make that work for farmers, a lot of things need to go right. And I think it's worth reflecting a little on the different roles that the actors in this ecosystem have, starting obviously with government. My colleague mentioned a number of these things earlier. The government's responsibility is principally on foundations: things like the governance of AI, interoperability, accessibility, and obviously ensuring that educational programs include appropriate types of skilling in the use of digital services. This is a big challenge in countries like India, where frankly there are still people who don't have sufficient literacy to read what comes over a basic smartphone. Ensuring that the research and extension provided through these AI platforms is credible, trustworthy and backed by science.

I think that's also extremely important. Of course, farmers will find out if they aren't, but at high expense, right? So we want to make sure that they're not being advised to do things that are negative for them. And then also looking at the cost of service and the connectivity: what does the farmer actually need to be able to link into these different platforms that give information? Because, of course, we're often talking about farmers who have very, very few assets and who may be essentially unable to stay permanently connected, or even easily connected, to the internet. They're going to have very basic smartphones, et cetera. So the government has a lot of work to do in all of those areas.

Then you can look at what the private sector can do. Now, one thing that the government needs to do is encourage and crowd in private-sector capacity and capital. But once we turn to the private sector, what is the private sector's principal advantage? I think there's a lot of creativity in the private sector. The actual applications being developed are built by individuals with a passion for specific issues that are constraining farmer success. And that creativity will result in a number of different applications aimed, in most cases, at helping farmers overcome certain hurdles they face. And we can let a thousand flowers bloom there and see what actually takes root.

And it's amazing what you start to see. Just yesterday, I was learning about an application in Morocco developed by a tomato farmer, which can give advice about how much water tomato plants need simply from a picture of the current tomato plant. Take a picture and it tells you how much water you actually need to give this plant, which in a water-stressed environment is vital, vital information. And then there are roles for institutions like my own, the World Bank Group, which can help provide some of the financing that develops these applications, as well as the foundational backbone for artificial intelligence. We can also play a role at the advisory end, helping to truth-test, if you like, the information coming through the different applications emerging from the AI sandbox in different contexts, to make sure it is actually providing information that is useful to the end beneficiary and enhancing productivity at the farm level.

Thanks.

Vikas Chandra Rastogi

I think you have rightly pointed out the role of innovation and research. What we see is that we require high-quality, robust data to build upon, and as the Honourable Chief Minister mentioned, MahaEGX is one step in that direction, wherein we bring diverse data sets together and make them accessible to researchers, academic institutions, departments and also start-ups. Many of these start-ups will be showcasing their innovations at the AI for Agri conference in Mumbai. So we request all of you to please come and see for yourselves what kind of excitement they have and what kind of solutions are envisaged. I have one supplementary question for you: how do you see platforms such as the AI Impact Summit and the AI for Agri global conference contributing to deeper global collaboration and South-South knowledge exchange in this domain?

Johannes Zutt

Thank you for that additional question. I mean, obviously, India is in a great position to lead the development of AI, particularly for developing countries where there are still significant challenges helping poor people to escape poverty permanently. India has demonstrated digital innovation for a long period of time already. It’s got an enormous population with a huge variety. The challenges of bringing farmer -appropriate data to the farmer’s fingertips in India are – I was going to say India is a microcosm of the rest of the world. It’s hardly a microcosm. It’s so huge. But because you have so many languages, so many different regions, so many different types of crops, and the starting conditions at the farm level are so incredibly varied, figuring out how to make AI at the farm level work, in India will automatically have a large number of spillover learnings for other countries around the world.

And because India, after China and the United States, is the country in the world that is best positioned to push all of this work forward, and because it is itself a developing country, it is very, very clear that it will have a central role to play in South-South learning, for those reasons.

Vikas Chandra Rastogi

Thank you so much. I move on to Dr. Swaminathan. Dr. Swaminathan, your father, Professor M. S. Swaminathan, played a historic role in shaping India's agricultural transformation during the Green Revolution, ensuring food security at a critical juncture in our history. Today, as we speak of a new phase of transformation driven by AI, we are again at an inflection point. You have consistently championed science-based policy, sustainability and the empowerment of women farmers. With 2026 being recognized internationally as the year of women farmers, how can we ensure that the AI-led agriculture transformation strengthens women's agency, knowledge access and climate resilience? And what institutional safeguards and design principles must be embedded today so that this new technological revolution becomes equitable, farmer-centric and grounded in scientific integrity?

Dr. Soumya Swaminathan

Thank you very much for that question, Vikasji. Not only is this year the International Year of Women Farmers, but we know that agriculture itself is increasingly being feminized, with many men leaving farming to the women and migrating to the cities for other opportunities. So it is really essential to put women at the center of all that we are discussing. And I think the Chief Minister today gave us a wonderful vision of what the future can be, provided, of course, as you said, that the guardrails, the institutions, the safeguards and the design principles are thought about from the very beginning. My father, Professor M. S. Swaminathan, used to say that the Green Revolution was not only about the seeds. Of course the seeds played a very big role, the high-yielding varieties, but it was about the entire ecosystem and the institutions developed at that time, which included the outreach (later on, the Krishi Vigyan Kendras were developed), but also access to credit, water, fertilizers, education and empowerment. It ultimately became a success because farmers realized its potential and took it on.

So what he used to say is that no technology is pro-poor or pro-rich, or pro-women or against women; it is how we use that technology. So, as you said, the inflection point today is: how do we use this very powerful technology that has come to us? I think there are a few points here to make sure that women farmers in particular are not left behind. The first important fact is that only a minority of women in India have their name on the land document; mostly it is in the man's name. Deveshji was telling me today that this is improving, and that the latest census shows that perhaps at least a quarter of the properties are also in the name of women, either jointly or solely. But that still means that three-fourths of them don't have it.

And a system that operates basically on publicly available data will then leave out those whose data sets are not available. So I think it would be really important at the early stages itself to think about how women’s data can be incorporated because the algorithms are fed by the data we have. And so all of these advisories may be very suitable for a man who’s operating a tractor on a farm, but not at all relevant for a woman who’s still working with outdated instruments and trying to, you know, till her land. And particularly when we look at more remote areas, tribal areas, where women do a lot of the agriculture like millets, for example. Mostly it is women who grow millets.

And there's still a lot of mechanization which is absent completely. It is all still very much done using traditional methods and tools, and it involves a lot of drudgery. So one of the benchmarks I would look at is: is it reducing the drudgery and the workload on women farmers? Is AI helping to do that? So I think we also need to look at certain indicators of success. And you mentioned science. I'm a medical researcher, and the way that we evaluate products is by doing clinical trials, by examining the data and the evidence, and then recommending them for wider use. So again, a note of caution: as we roll it out, we certainly need innovation, but we also need to do the evaluation, looking at inherent biases, looking at who is being excluded, looking at whether there are unanticipated risks or side effects that we didn't know about. But most of all, it is this inclusion. We don't want those who are already left behind to be further left out. So ongoing research, data collection and feedback loops are needed, and most importantly, the voices of those for whom we are developing all this. In this room I don't think we have any farmers or women farmers, so we are all discussing from what we know. But if you are the farmer working there, you know the constraints under which you are working. So women farmers, and farmers in general, must have a role; they must be part of the committees that evaluate, make recommendations or suggest improvements. It has to be an iterative process. Any technology is only as good as the application for which it is developed. I'll give you one example of an app that the M. S. Swaminathan Research Foundation developed for fisherwomen.

We had a very successful app for fishermen called the Fisher Friend Mobile App, which won the UN Tech for Nature award last year. But fisherwomen were, as usual, left out, and so the Women Connect app actually gives them, on a tablet, the information they need to sell, because once the fishermen have come back from the sea, it is the women who do all of the post-harvest work, and the same is true for crops, fruits and vegetables as well. So there is that connection to the market, of course information about pests and pathogens, when to buy what and what inputs to use, but also being able to organize themselves. There are many FPOs now, and FPCs and SHGs made up of women farmers; we should be empowering them and giving them the knowledge and tools.

And the last thing I would say is that we still need humans in the loop. I don't think we should assume that making everything run by machines will solve our problems; I think that is risky. And in a country like India, we also need employment. I don't know how many of you have seen the film called Humans in the Loop, about a tribal woman from Jharkhand who raises questions about the algorithm; it's a very thought-provoking film. So humans in the loop are going to be important. We have our Krishi Sakhis and so on; we need to empower them with these tools. So I think AI and all these digital tools, if they are used in addition to the traditional knowledge and wisdom that people have, augmenting it and giving people the knowledge they need at the right time and the right place, can take us a very long way.

Thank you.

Vikas Chandra Rastogi

Thank you, madam. You have rightly pointed out the need to be more sensitive to inclusivity while developing systems, and to ensure that those for whom they are being developed are in the loop and are consulted. In fact, the feedback mechanism that we have developed in Mahavistar takes care of those requirements. I am also very happy to share that the Government of Maharashtra and the M. S. Swaminathan Research Foundation are working together on some of these issues: how to bring women's rights in farming to the center stage, how to create bio-happiness using our universities and educational systems, and what kind of nutritional security we must look for. Because we have food security, but it is nutritional security that we must aspire to.

So we are happy to have support and assistance from MSSRF in that direction. My final question is to Mr. Shankar Maruwada. Mr. Shankar, EkStep has played a foundational role in shaping India's DPI landscape through open-source platforms such as Sunbird, which has powered large-scale systems like Diksha and Mahavistar, and open network initiatives built on the Beckn protocol. These efforts have demonstrated how open standards and interoperable architecture can enable the population-scale transformation that we are already seeing today. As we now enter the era of AI-driven public systems, how should we think about standardizing AI-based ecosystems in a similar spirit? How can we bring DPI into AI? And what architecture and governance principles are required to ensure interoperability, trust and sustainability in AI deployments across sectors such as agriculture?

Shankar Maruwada

Again, a whole lot of questions, but let me make my best attempt to answer them. More than 100 years ago, the world faced what was known as the Malthusian crisis: Malthus, the economist, predicted that if we continued to grow in the same way, we would run out of land, run out of soil. We were a billion and a half then; we are 8 billion now. Most of us may not even have heard of the Malthusian crisis. What happened? Someone called Haber and someone called Bosch created a miracle. Haber synthesized ammonia using high pressure and temperature, and Bosch put it into an industrial process. That phenomenon is now historically known as pulling bread out of air. It took a lot of effort and, as Soumya said, the creation of a massive ecosystem.

Germany, which pioneered this, lost that race to the US, because the US did a better job of diffusing the technology safely to farmers. They created the discipline of agricultural engineering. They created institutions like the Fertilizer Development Center. They held technology demonstrations for farmers to show them how synthetic ammonia could be used. By the way, 50% of the nitrogen in our bodies comes from synthetic ammonia; that's a fact. So we owe a lot to Haber and Bosch. China then took it on in the 80s, buying 10 big plants from Kellogg and training 300 million farmers in how to use synthetic fertilizers, and went on to be a global leader in agriculture. India is at a point where, if we learn the lessons from such past experiences, our Green Revolution, our DPI experience, we are at a pivotal moment where the equivalent of pulling bread out of thin air is pulling intelligence from the earth and providing it to the farmer.

This is again not science fiction. Mahavistar, the pioneer, along with Bharatvistar, has taken the first steps toward this. Mahavistar, to build on what Soumya has said, was designed with inclusion in mind; inclusion and diversity were not an afterthought, because to solve not just Maharashtra's problems but for India's scale and diversity, we need to think of the last person, the most discriminated against, in the remotest part of India, and design systems that work for them. We call that DPI. Now let me give you a specific example of this. In Bharatvistar, right from the beginning, the design spec was that an illiterate farmer (to build on John's point about digital literacy), with a feature phone, not a smartphone, must be able to talk in his or her native language and native dialect.

Marathi itself has many dialects, right? Talk on the phone the way she is comfortable talking to another person: ask the question, have a conversation, get a set of answers. That process took us the better part of nine months. Why? Because it's not just AI. It's data. It's processes. It's training the farm extension workers. It is having trust: will this work? What about the costing? Will I blow up my entire state budget on a model? Do I have autonomy? Can I switch models in and out? These are very, very difficult questions. It took a partnership with a whole lot of people. The Government of Maharashtra led the effort, but the IndiaAI Mission, Bhashini, IIT Madras, IIIT Hyderabad, the World Bank, Google and many other providers all chipped in their little part of the solution. Now here is the best part: because we all collaboratively invested in figuring out a solution there, that solution could be deployed in Bharatvistar with more confidence, easily. Again, the same challenges that Secretary Chaturvedi talked about: do we have the data? He used a very nice phrase, digital red-tapism; our data is in different formats. What matters is the intent of the Government of India, which triggered the process that allowed Bharatvistar to be launched the day before. It's a start. Data will get better, the systems will get better, usage will improve; that will generate more data, and over the years the ecosystem will be built. This we know from our experience. What makes this happen? What is the secret sauce? The design principles. They are the same as DPI; what worked for DPI, we are taking those same principles. One: open, interoperable systems. Think networks, not just portals and platforms and siloed, fragmented systems. What's the best example of this? The railways in India. We have such a vast landscape, but the rails are common. Every state can decide what it wants to move: private, public, defence, farming. The Indian Railways just provides a backbone that allows everyone to do this. There was a time when we had different rail gauges. Now that sounds so silly, but there was a time like that. But India is showing that we don't have to repeat those early mistakes in digital, also.

By creating interoperable networks based on open protocols like Beckn, by collaborating with each other (one of us brings in data, somebody brings in technology, somebody brings in policy, somebody brings in research), these collaborative open networks, together with the launch of Bharat Vistar, put India in a very unique and responsible position. Unique because we have these open rails and we have the experience of DPI. Responsible because it is a start. Unlike the technologies of the past, where you perfect the technology and then deploy it, with AI you deploy something minimal to start and then it evolves: models get better, data gets better, usage gets better, and it improves over time. That is the unique junction we are at in India. What will that mean?

When ICAR plugs into this network with its weather and pricing data, the network makes it available to any state that wishes to turn on the supply from ICAR. When the private sector comes out with a very innovative app, say the tomato example that John talked about, any state can say: I like that, I will have that made available to my farmers. The farmers anyway trust the state; they can go to the same app and now see this there as well. If the tomato-app provider wants, they can go directly to each farmer, but that is very, very expensive. So shared rails allow us to spread innovation, to diffuse it very quickly through society, keeping in mind both inclusion and rewarding innovation, because innovation has to be rewarded.

And I want to end with a very simple analogy. When Edmund Hillary climbed Mount Everest, he made a lot of people believe it is possible. When Mahavistar was launched, it made the country believe that it is possible to make AI serve the farmer. And to that extent, the responsibility that Mahavistar, the Maharashtra government and the Government of India have is to create these pathways for the rest of the country, for the other states. At EkStep Foundation, with Nandan Nilekani, we made a declaration two days ago: we would like to see a world by 2030 where there are 100 such diffusion pathways, each created by a different set of people, in different sectors, in different countries and continents, but each inspiring different AI pathways to safe impact at scale. It's a very exciting vision. It's a very collaborative vision. If we all get together, we can also create miracles in our own lifetime. Thank you.

Vikas Chandra Rastogi

With that profound thought, we'll conclude today's panel discussion. I thank all the panelists; they have really opened a new vision in front of all of us. And we invite all of you to the AI for Agri conference in Mumbai on the 22nd. Thank you so much. We don't actually have time for questions. The next session is about to start.

Related Resources: Knowledge base sources related to the discussion topics (19)
Factual Notes: Claims verified against the Diplo knowledge base (4)
Confirmed (high)

“The four‑pillar framework for AI in agriculture includes transparency, auditability & explainability; open, interoperable digital infrastructure; innovation & investment for scaling; inclusion & gender equity.”

The discussion of transparency, auditability and open architecture as essential adoption accelerators matches the first two pillars described in the report [S73] and the emphasis on trust infrastructure aligns with the fourth pillar [S74].

Confirmed (high)

“Maha AgEx provides early‑warning alerts for cotton growers through a federated data‑exchange architecture.”

AI-based pest surveillance and geospatial analytics have already delivered early warnings to cotton-growing farmers, as noted in the source [S5].

Additional Context (medium)

“A publicly‑available traceability DPI blueprint (www.fema.gov) will give end‑to‑end visibility across value chains.”

Digital public infrastructure can raise exclusion risks for marginalized users, a nuance highlighted in the analysis of DPI challenges [S58]; additionally, the FarmerZone open-source data platform exemplifies publicly-owned agricultural data systems that support traceability [S75].

Confirmed (medium)

“Trust and institutional safeguards (transparency, auditability, explainability) are critical for scaling AI in food systems.”

The importance of trust infrastructure, alongside transparency and auditability, is emphasized as a prerequisite for scaling AI in climate-resilient food systems [S74].

External Sources (81)
S1
AI for agriculture Scaling Intelegence for food and climate resiliance — -Vikas Chandra Rastogi: Secretary of Ministry of Agriculture and Farmers Welfare, Government of Maharashtra – leads the …
S2
AI Meets Agriculture Building Food Security and Climate Resilien — -Vikas Chandra Rastogi- Secretary, Ministry of Agriculture and Farmers’ Welfare, Government of Maharashtra (moderator/ho…
S3
AI Meets Agriculture Building Food Security and Climate Resilien — -Johannes Zutt- Regional Vice President, World Bank
S4
How AI Drives Innovation and Economic Growth — -Johannes Zutt: World Bank representative (referred to as “John” in the discussion)
S5
https://dig.watch/event/india-ai-impact-summit-2026/ai-meets-agriculture-building-food-security-and-climate-resilien — May I invite Dr. Devish Chaturvedi, Secretary, Ministry of Agriculture and Farmers’ Welfare. Sir, please come onto the s…
S6
AI for agriculture Scaling Intelegence for food and climate resiliance — A very good morning to all of you. Shri Devesh Chaturvedi ji, Rajesh Agarwal ji, Vikas Rastogi ji. Mr. Jonas Jett, Srima…
S7
AI Meets Agriculture Building Food Security and Climate Resilien — Dr. Chaturvedi leads our national effort in agriculture and farmer’s welfare. Mr. Johannes Jett, he is the Regional Vice…
S8
AI for agriculture Scaling Intelegence for food and climate resiliance — -Dr. Soumya Swaminathan: Chairperson of Dr. M.S. Swaminathan Research Foundation – global leader in science, champion fo…
S9
https://dig.watch/event/india-ai-impact-summit-2026/ai-meets-agriculture-building-food-security-and-climate-resilien — Dr. Chaturvedi leads our national effort in agriculture and farmer’s welfare. Mr. Johannes Jett, he is the Regional Vice…
S10
https://dig.watch/event/india-ai-impact-summit-2026/ai-for-agriculture-scaling-intelegence-for-food-and-climate-resiliance — So we are happy to have support and assistance from MSSRF in that direction. My final question is to Mr. Shankar Maruwad…
S11
AI Meets Agriculture Building Food Security and Climate Resilien — – Dr. Soumya Swaminathan- Shankar Maruwada Dr. Swaminathan advocates for a cautious, medical research-style evaluation …
S12
AI Meets Agriculture Building Food Security and Climate Resilien — -Devendra Fadnavis- Honorable Chief Minister of Maharashtra
S13
AI for agriculture Scaling Intelegence for food and climate resiliance — – Devendra Fadnavis- Dr. Soumya Swaminathan
S14
Harnessing Collective AI for India’s Social and Economic Development — Artificial intelligence | Human rights and the ethical dimensions of the information society | Data governance Professo…
S15
Scaling AI for Billions_ Building Digital Public Infrastructure — “Because trust is starting to become measurable, right, through provenance, through authenticity, as well as verificatio…
S16
Building Inclusive Societies with AI — When asked about government initiatives, Manisha Verma, Additional Chief Secretary of Maharashtra’s SEED Department, out…
S17
Global Perspectives on Openness and Trust in AI — “It was this project that brought together over a thousand researchers … to try and create an open source large langua…
S18
Transforming Agriculture_ AI for Resilient and Inclusive Food Systems — 1 ,000 hectares in some big island of Indonesia in order to get the safe efficiency in the next five years. And then we …
S19
AI Impact Summit 2026: Global Ministerial Discussions on Inclusive AI Development — Ante este panorama, los países del sur global debemos priorizar estrategias y normativas para un uso ético y responsable…
S20
Leveraging AI to Support Gender Inclusivity | IGF 2023 WS #235 — Audience:Thank you very much for all the sharing. It’s really interesting. So I have a bit of a specific question. So it…
S21
WS #98 Universal Principles Local Realities Multistakeholder Pathways for DPI — Rasmus Lumi: Thank you very much. Well, maybe I should start by saying that when in the beginning, when you introduced m…
S22
DPI+H – health for all through digital public infrastructure — An insightful observation was made that the private sector can viably contribute to DPI components within a secure frame…
S23
Digital Public Infrastructure, Policy Harmonisation, and Digital Cooperation – AI, Data Governance,and Innovation for Development — An audience member emphasized the importance of thorough research in policy formulation. This point resonated with the p…
S24
Open Forum #37 Her Data,Her Policies:Towards a Gender Inclusive Data Future — Bonnita Nyamwire: Thank you so much, Christelle. So a gender-inclusive data is one that is representative of all genders…
S25
Building Indias Digital and Industrial Future with AI — “India, surely for the vast amount of experience and scale and heterogeneity that it has, offers excellent evidence on w…
S26
Open Forum #82 Catalyzing Equitable AI Impact the Role of International Cooperation — Cina Lawson: Thank you very much, so the first comment I make is that AI has to work for us. It means that we have to ma…
S27
Artificial intelligence (AI) – UN Security Council — Algorithmic transparency is a critical topic discussed in various sessions, notably in the9821st meetingof the AI Securi…
S28
WS #288 An AI Policy Research Roadmap for Evidence-Based AI Policy — The roadmap is built upon core principles including “human and planetary welfare, accountability and transparency, inclu…
S29
Day 0 Event #173 Building Ethical AI: Policy Tool for Human Centric and Responsible AI Governance — Chris Martin: Thanks, Ahmed. Well, everyone, I’ll walk through I think a little bit of this presentation here on what…
S30
9821st meeting — At the heart of the development and use of artificial intelligence systems, human beings and their dignity must always b…
S31
Artificial intelligence (AI) – UN Security Council — In conclusion, the discussions highlighted the importance of fostering transparency and accountability in AI systems. En…
S32
Harnessing Collective AI for India’s Social and Economic Development — Artificial intelligence | Human rights and the ethical dimensions of the information society | Data governance Professo…
S33
WS #288 An AI Policy Research Roadmap for Evidence-Based AI Policy — The roadmap is built upon core principles including “human and planetary welfare, accountability and transparency, inclu…
S34
AI as critical infrastructure for continuity in public services — Resilience, data control, and secure compute are core prerequisites for trustworthy AI. Systems must stay operational an…
S35
AI Meets Agriculture Building Food Security and Climate Resilien — The World Bank’s Johannes Zutt stressed the importance of collaborative ecosystems where government provides foundationa…
S36
Fostering Global Digital Cooperation for Prosperity — Dima Al-Khatib, Director of UN Office of South-South Cooperation, highlighted South-South and Triangular Cooperation as …
S37
AI for agriculture Scaling Intelegence for food and climate resiliance — A very good morning to all of you. Shri Devesh Chaturvedi ji, Rajesh Agarwal ji, Vikas Rastogi ji. Mr. Jonas Jett, Srima…
S38
The fading of human agency in automated systems — In practice, however, being “in the loop” frequently means supervising outputs under conditions that make meaningful jud…
S39
Driving Social Good with AI_ Evaluation and Open Source at Scale — Human-in-the-loop evaluation must be done rigorously, especially when putting stamps of approval on model behavior
S40
Diplomatic policy analysis — Overreliance on technology:While machine learning and analytics are powerful tools, they are not infallible. Overdepende…
S41
Ethical AI_ Keeping Humanity in the Loop While Innovating — It was adopted back in 2021 by 193 member states of UNESCO, and it calls for human oversight, non -discrimination, respe…
S42
Leveraging AI to Support Gender Inclusivity | IGF 2023 WS #235 — By engaging users and technical communities, policymakers can gain valuable insights and perspectives, ultimately leadin…
S43
Open Forum #17 AI Regulation Insights From Parliaments — Amira Saber: Yeah, thank you so much. And it’s a pleasure to be talking on this panel amid esteemed colleagues. Actually…
S44
Balancing innovation and oversight: AI’s future requires shared governance — At IGF 2024, day two in Riyadh, policymakers, tech experts, and corporate leaders discussed one of the most pressing dil…
S45
Open Forum #58 Collaborating for Trustworthy AI an Oecd Toolkit and Spotlight on AI in Government — This comment reinforced the toolkit approach discussed in the first segment by validating the need for flexible, adaptiv…
S46
Open Forum #64 Local AI Policy Pathways for Sustainable Digital Economies — Economic | Human rights principles Quote from UNDP Human Development Report 2025 stating that innovation incentives fav…
S47
The WSIS Moon Shot: Celebrating 20 years and crystal-balling the next 20! — **Private Sector Investment:** Maria Fernanda Garza from the International Chamber of Commerce acknowledged the private …
S48
Rewriting Development / Davos 2025 — Lord Nicholas Stern: I think we now have an imperative around investment, the investment necessary to build a sustaina…
S49
[Parliamentary Session 3] Researching at the frontier: Insights from the private sector in developing large-scale AI systems — While both speakers acknowledge the importance of governance, there’s an unexpected difference in their emphasis on who …
S50
WS #214 AI Readiness in Africa in a Shifting Geopolitical Landscape — She explains that private sector will invest in expensive compute facilities, but government and donor organizations mus…
S51
Transforming Agriculture_ AI for Resilient and Inclusive Food Systems — The tone was consistently optimistic yet pragmatic throughout the conversation. Speakers maintained an encouraging outlo…
S52
Secure Finance Risk-Based AI Policy for the Banking Sector — -India’s Strategic AI Positioning: Discussion centered on how India should position itself globally in AI governance, le…
S53
AI for agriculture Scaling Intelegence for food and climate resiliance — Maharashtra’s strategic approach represents a shift from pilot projects to population-scale implementation. The state’s …
S54
AI Meets Agriculture Building Food Security and Climate Resilien — Chief Minister Devendra Fadnavis presented Maharashtra’s Maha Agri AI Policy 2025-2029, emphasizing the shift from demon…
S55
Digital solutions for sustainability: ICT’s role in GHG reduction and biodiversity protection — **Scaling Beyond Pilots**: Moving from successful pilot projects to global implementation, particularly in resource-cons…
S56
Creating digital public infrastructure that empowers people | IGF 2023 Open Forum #168 — Aishwarya Salvi:you you you you hello everyone, a warm welcome to you all who have joined us in this room and also to ev…
S57
Empowering People with Digital Public Infrastructure — 1. Ensuring DPI systems are built on data that represents currently underserved communities, including data that isn’t y…
S58
A digital public infrastructure strategy for sustainable development – Exploring effective possibilities for regional cooperation (University of Western Australia) — However, there are concerns that need to be addressed when implementing DPI. One major concern is the risk of exclusion …
S59
Leveraging AI to Support Gender Inclusivity | IGF 2023 WS #235 — As AI models continue to grow in size, selecting appropriate training data becomes increasingly challenging. This recogn…
S60
DC-Inclusion &amp; DC-PAL: Transformative digital inclusion: Building a gender-responsive and inclusive framework for the underserved — Hu highlights the significant gender gap in the development of frontier technologies like AI and quantum computing. She …
S61
Can AI help achieve gender equality? — UNESCO in Brazillaunchedthe Portuguese version of the report ‘The Effects of AI on the Working Lives of Women’, which wa…
S62
WS #270 Understanding digital exclusion in AI era — The discussion underscored the urgency of taking action to prevent further widening of the digital divide as AI technolo…
S63
Building Indias Digital and Industrial Future with AI — “India, surely for the vast amount of experience and scale and heterogeneity that it has, offers excellent evidence on w…
S64
Building Scalable AI Through Global South Partnerships — India’s AI mission offers several innovations for global sharing. The country has created compute infrastructure availab…
S65
Open Forum #82 Catalyzing Equitable AI Impact the Role of International Cooperation — Cina Lawson: Thank you very much, so the first comment I make is that AI has to work for us. It means that we have to ma…
S66
AI Impact Summit 2026: Global Ministerial Discussions on Inclusive AI Development — Ante este panorama, los países del sur global debemos priorizar estrategias y normativas para un uso ético y responsable…
S67
Agentic AI in Focus Opportunities Risks and Governance — -Enterprise Guardrails and Risk Management: Panelists emphasized the critical importance of implementing robust safety m…
S68
WS #31 Cybersecurity in AI: balancing innovation and risks — Melodena Stephens: So this is a tough one, right? Because when I look at ethics, I think ethics are great. The line b…
S69
9821st meeting — At the heart of the development and use of artificial intelligence systems, human beings and their dignity must always b…
S70
Artificial intelligence (AI) – UN Security Council — Algorithmic transparency is a critical topic discussed in various sessions, notably in the9821st meetingof the AI Securi…
S71
WS #236 Ensuring Human Rights and Inclusion: An Algorithmic Strategy — Abeer Alsumait: Thank you. So I think this question actually relates to what Dr. Lopez mentioned. The keywords here a…
S72
AI for Good – food and agriculture — ## Major Discussion Points Dongyu Qu: Excellencies, ladies, gentlemen, good morning. A year ago, we all gathered for th…
S73
Shaping the Future AI Strategies for Jobs and Economic Development — Transparency, auditability, grievance redress, open architecture are not compliance burdens. They’re adoption accelerato…
S74
Driving Indias AI Future Growth Innovation and Impact — Trust infrastructure is as critical as technical infrastructure, requiring institutional safeguards, transparency, and e…
S75
© 2019, United Nations — India offers an experiment in publicly-owned data platforms. Proposals for FarmerZone, a cloud-based, open-…
S76
From data to impact: Digital Product Information Systems and the importance of traceability for global environmental governance — – Integrating DPI systems into e-waste management technical regulations and Extended Producer Responsibility frameworks …
S77
Development of Cyber capacities in emerging economies | IGF 2023 Open Forum #6 — Audience:Okay, my name is James Ndolufuyi from Abuja, Nigeria. I have a comment and then a question. First to Chris, on …
S78
Increasing routing security globally through cooperation | IGF 2023 WS #339 — Katsuyasu Toyama:Next is Katsuyasu Toyama from JPNAP and APIX. Probably more technical perspective. Yeah, thank you very…
S79
DC-DNSI: Beyond Borders – NIS2’s Impact on Global South — – AI governance frameworks and policies emerging in different regions of the global majority (e.g. Africa, Latin America…
S80
Measuring Gender Digital Inequality in the Global South — In conclusion, the Equals Coalition, along with partners such as KAIST and Professor Michael Best, is actively working t…
S81
EQUAL Global Partnership Research Coalition Annual Meeting | IGF 2023 — A paradox exists where women, despite being motivated to learn advanced skills, face limited career advancement due to g…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
D
Devendra Fadnavis
3 arguments · 106 words per minute · 1101 words · 619 seconds
Argument 1
AI is essential to secure food, nutrition, farmer incomes and economic stability in India
EXPLANATION
Fadnavis argues that AI is a critical tool to address the mounting pressures on food systems, climate volatility, and economic stability, ensuring food and nutrition security as well as higher farmer incomes across India.
EVIDENCE
He outlines the multiple stresses on agriculture, including climate volatility, falling water tables, deteriorating soil health, fragile supply chains and unpredictable markets, which together threaten food security [41-48]. He then emphasizes that AI can provide hyper-localised solutions such as predictive credit scoring, transparent supply chains and real-time market advisories to meet these challenges [52-53].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The significance of AI for food security, farmer incomes and economic stability is highlighted in the AI Meets Agriculture discussion and the AI for agriculture scaling initiative [S2] [S1].
MAJOR DISCUSSION POINT
Strategic priority of AI for food and climate resilience
AGREED WITH
Vikas Chandra Rastogi
Argument 2
AI must be built on trusted data, ethical governance, transparency, auditability and public accountability
EXPLANATION
Fadnavis stresses that without trustworthy data and robust ethical frameworks, AI cannot achieve scale or public confidence, and therefore governance, transparency and accountability are non‑negotiable foundations.
EVIDENCE
He states that AI is not magic and must be built on trusted data, ethical governance and public accountability, warning that without trust scaling will not happen [53-56].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Fadnavis’ call for trusted data, ethical governance and public accountability is echoed in sources on collective AI and trust frameworks, as well as the AI for agriculture overview [S14] [S15] [S1].
MAJOR DISCUSSION POINT
Governance, trust, ethics and responsible AI deployment
AGREED WITH
Dr. Soumya Swaminathan, Shankar Maruwada
DISAGREED WITH
Johannes Zutt
Argument 3
Maharashtra invites venture capital, impact investors, multilateral banks and philanthropic foundations to fund and scale agri‑tech startups
EXPLANATION
Fadnavis calls on a broad range of private and public capital providers to partner with Maharashtra’s agri‑innovation ecosystem, highlighting the need for investment to move AI solutions from pilots to scalable platforms.
EVIDENCE
He explicitly invites venture capital funds, impact investors, multilateral development banks and philanthropic foundations to collaborate with Maharashtra’s agri-innovation ecosystem [86-89].
MAJOR DISCUSSION POINT
Role of private sector, innovation and global collaboration
AGREED WITH
Vikas Chandra Rastogi, Dr. Soumya Swaminathan
V
Vikas Chandra Rastogi
4 arguments · 110 words per minute · 1813 words · 985 seconds
Argument 1
Maharashtra’s Maha Agri AI Policy 2025‑2029 operationalises AI across advisory, market, traceability and research services
EXPLANATION
Rastogi describes the state’s AI policy as a comprehensive framework that embeds AI into public agricultural systems, covering advisory services, market information, product traceability, research and capacity building.
EVIDENCE
He notes the launch of the Maha Agri AI Policy 2025-2029 and lists its uses for farm advisory services, market information, data exchange, product traceability, innovation, research and stakeholder capacity building [20-22]. He also cites Mahavistar's multilingual advisory reach and the AgriStack platform that links farmers to schemes as concrete implementations of the policy [23-26].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The launch and scope of the Maha Agri AI Policy 2025-2029 were presented by the Chief Minister and detailed in the AI Meets Agriculture session and the scaling intelligence briefing [S2] [S1].
MAJOR DISCUSSION POINT
AI as a strategic priority for food and climate resilience
AGREED WITH
Devendra Fadnavis
Argument 2
Open, federated, consent‑driven data exchange (Maha AgEx) creates a “big picture” for AI models and predictive governance
EXPLANATION
Rastogi explains that the Maha AgEx architecture aggregates diverse agricultural datasets in a consent‑based, open and federated manner, enabling comprehensive AI modelling and early‑warning predictive governance.
EVIDENCE
He describes Maha AgEx as an open, federated and consent-driven architecture that brings diverse data sets together to provide a big picture for AI models and predictive governance [26-27]. He later refers to predictive governance in action through early warnings for cotton growers that reduce crop vulnerability and financial risk [66-67].
MAJOR DISCUSSION POINT
Building digital public infrastructure and data ecosystems
AGREED WITH
Dr. Devesh Chaturvedi, Shankar Maruwada
Argument 3
Partnerships with MSSRF aim to embed women’s rights and nutritional security into AI‑enabled agricultural systems
EXPLANATION
Rastogi highlights collaboration with the M.S. Swaminathan Research Foundation to ensure that AI‑driven agricultural platforms address women’s rights and broader nutritional outcomes, integrating gender considerations into system design.
EVIDENCE
He mentions a feedback mechanism in Mahavistar that addresses inclusivity and notes ongoing joint work with MSSRF on bringing women’s rights to the centre of farming and on creating bio-happiness and nutritional security through university and educational systems [257-264].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Collaboration with the M.S. Swaminathan Research Foundation on women’s rights and nutritional security is noted in the AI for agriculture summary [S1].
MAJOR DISCUSSION POINT
Ensuring inclusion, gender equity and empowerment of women farmers
AGREED WITH
Devendra Fadnavis, Dr. Soumya Swaminathan
DISAGREED WITH
Dr. Soumya Swaminathan
Argument 4
AI Impact Summit and AI for Agri 2026 conference will catalyse South‑South knowledge exchange and showcase scalable solutions
EXPLANATION
Rastogi points to upcoming global events as platforms for sharing AI‑for‑agriculture innovations, fostering South‑South collaboration and demonstrating scalable solutions to a wider audience.
EVIDENCE
He invites participants to the AI for Agri conference in Mumbai and frames the panel discussion as a precursor to deeper deliberations at the AI for Agri 2026 Global Conference, emphasizing the role of these gatherings in operationalising AI at scale [207-210].
MAJOR DISCUSSION POINT
Role of private sector, innovation and global collaboration
AGREED WITH
Johannes Zutt
D
Dr. Devesh Chaturvedi
2 arguments · 163 words per minute · 1127 words · 414 seconds
Argument 1
Central‑state collaboration must align AI deployments with national architecture while allowing local innovation
EXPLANATION
Chaturvedi stresses the need for a coordinated framework where AI solutions adhere to a common national architecture yet retain flexibility for state‑specific agro‑climatic and socio‑economic contexts.
EVIDENCE
He outlines that the central and state governments are working together on a digital public infrastructure, emphasizing alignment with national architecture while permitting states to innovate based on local conditions [140-148].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The need for coordinated central-state alignment of AI with a common national architecture, while permitting local innovation, is outlined in the discussion on digital public infrastructure involving both levels of government [S1].
MAJOR DISCUSSION POINT
AI as a strategic priority for food and climate resilience
Argument 2
Farmer IDs, digital crop surveys and a unified platform (Bharatvistar/Mahavistar) eliminate “digital red‑tapism” and enable personalized, multilingual advisories
EXPLANATION
Chaturvedi describes how unique farmer IDs, comprehensive crop surveys and a single AI‑powered platform consolidate fragmented services, removing bureaucratic duplication and delivering tailored, multilingual advice to farmers.
EVIDENCE
He explains that prior fragmented apps created a “digital red-tapism” where farmers struggled to navigate multiple services, and that a unified AI-based platform now provides weather, crop, pest, market and scheme advisories in multiple languages, reducing the need for multiple applications [131-138]. He also details the creation of nearly 9 crore farmer IDs that link land, crops, soil health and enable personalized, consent-based advisories [140-148].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The unified Bharatvistar/Mahavistar platform, farmer ID system and the elimination of digital red-tapism are highlighted in the scaling intelligence briefing and the AI Meets Agriculture report [S1] [S2].
MAJOR DISCUSSION POINT
Building digital public infrastructure and data ecosystems
AGREED WITH
Vikas Chandra Rastogi, Shankar Maruwada
D
Dr. Soumya Swaminathan
3 arguments · 173 words per minute · 1125 words · 388 seconds
Argument 1
Women’s land‑ownership gaps risk excluding them from AI‑driven services; data collection must deliberately capture women’s information
EXPLANATION
Swaminathan warns that because most land titles remain in men’s names, women farmers risk being omitted from AI‑based services unless data systems are deliberately designed to capture women’s ownership and activity data.
EVIDENCE
She notes that only a minority of women have land in their name, citing recent census data showing about a quarter of properties now include women, but the majority remain excluded, which would cause AI systems relying on public data to miss them [227-230].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Gender gaps in land ownership and the risk of women’s exclusion from AI services are discussed, with reference to census data and AI bias concerns in the AI Meets Agriculture session and gender-inclusivity workshop materials [S2] [S20].
MAJOR DISCUSSION POINT
Ensuring inclusion, gender equity and empowerment of women farmers
AGREED WITH
Devendra Fadnavis, Vikas Chandra Rastogi
DISAGREED WITH
Vikas Chandra Rastogi
Argument 2
AI solutions should reduce women’s drudgery, be co‑designed with women, and keep humans in the loop for safety and employment
EXPLANATION
She advocates that AI tools must be designed to lessen the physical workload of women farmers, involve women in the design process, and retain human oversight to ensure safety, prevent bias and preserve rural employment.
EVIDENCE
She lists benchmarks such as reducing drudgery for women, co-designing solutions, and maintaining humans in the loop, emphasizing the need for iterative feedback, bias checks and evaluation to avoid unintended harms [235-247].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Benchmarks for reducing women’s drudgery, co-designing AI tools with women, and maintaining human oversight are emphasized in stakeholder feedback and gender-inclusivity discussions [S5] [S20].
MAJOR DISCUSSION POINT
Ensuring inclusion, gender equity and empowerment of women farmers
AGREED WITH
Devendra Fadnavis, Vikas Chandra Rastogi
Argument 3
Continuous evaluation, bias checks and feedback loops are required to keep AI services reliable and farmer‑centric
EXPLANATION
Swaminathan calls for ongoing scientific evaluation of AI applications, including bias detection, risk assessment and feedback mechanisms, to ensure that AI remains effective, inclusive and trustworthy for farmers.
EVIDENCE
She draws on her experience as a medical researcher to stress the importance of clinical-trial-like evaluation, monitoring for bias, unanticipated risks and ensuring that farmer voices are part of advisory committees, highlighting the need for iterative improvement and human oversight [239-247].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Calls for ongoing evaluation, bias detection, risk assessment and feedback mechanisms align with research on trust, provenance and responsible AI governance [S15] [S14].
MAJOR DISCUSSION POINT
Governance, trust, ethics and responsible AI deployment
AGREED WITH
Devendra Fadnavis, Shankar Maruwada
DISAGREED WITH
Devendra Fadnavis
Shankar Maruwada
2 arguments · 133 words per minute · 1259 words · 567 seconds
Argument 1
Interoperable, open‑standard networks (e.g., Beacon protocol) are the backbone for scaling AI across sectors
EXPLANATION
Maruwada explains that open, interoperable network protocols such as Beacon enable different stakeholders to share data and services seamlessly, providing the infrastructure needed for AI to scale across agriculture and other sectors.
EVIDENCE
He describes collaborative open networks built on open protocols like Beacon, noting that these enable data sharing among governments, academia and private innovators, forming the backbone for scaling AI applications [305-306].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The Beacon protocol as an open-standard, interoperable network for scaling AI is referenced in the AI for agriculture scaling intelligence document [S1].
MAJOR DISCUSSION POINT
Building digital public infrastructure and data ecosystems
AGREED WITH
Vikas Chandra Rastogi, Dr. Devesh Chaturvedi
Argument 2
Open, interoperable DPI models provide the governance framework to prevent data exploitation and ensure scalability
EXPLANATION
Maruwada argues that the same open, interoperable principles that underpinned India’s Digital Public Infrastructure (DPI) can be applied to AI, ensuring data is shared responsibly, preventing exploitation and allowing solutions to scale nationally.
EVIDENCE
He references the experience of DPI, emphasizing open, interoperable systems that avoid “digital red-tapism” and enable scalable, trustworthy AI deployments, likening the approach to India’s railway network as a shared backbone [304-307].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Open, interoperable Digital Public Infrastructure models for trustworthy data sharing and scalable AI are discussed in the trust and DPI literature and the AI for agriculture overview [S15] [S1].
MAJOR DISCUSSION POINT
Governance, trust, ethics and responsible AI deployment
AGREED WITH
Devendra Fadnavis, Dr. Soumya Swaminathan
Johannes Zutt
2 arguments · 143 words per minute · 907 words · 377 seconds
Argument 1
Private‑sector creativity fuels diverse AI applications (e.g., pest detection, water‑use advice) that can be “crowd‑in” through supportive policies
EXPLANATION
Zutt highlights that private innovators develop a wide range of AI tools for farmers, and that policy frameworks should encourage this creativity by providing financing and a regulatory environment that allows many solutions to emerge and be tested.
EVIDENCE
He notes that the private sector brings creativity, producing applications such as pest detection and water-use advice, and that governments should “crowd-in” this capacity through supportive policies [194-199]. He gives a concrete example of a Moroccan tomato farmer’s app that estimates water needs from a photo, illustrating the type of innovation that can be financed and scaled [202-204].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Private-sector innovation, including examples like a Moroccan tomato farmer’s water-use app, is highlighted in the AI Meets Agriculture session as a case for crowd-in policies [S2].
MAJOR DISCUSSION POINT
Role of private sector, innovation and global collaboration
DISAGREED WITH
Devendra Fadnavis
Argument 2
AI Impact Summit and AI for Agri 2026 conference will catalyse South‑South knowledge exchange and showcase scalable solutions
EXPLANATION
Zutt argues that India’s leadership in AI for agriculture positions it to drive South‑South learning, and that global summits provide a venue for sharing best practices, scaling solutions and fostering international partnerships.
EVIDENCE
He states that because India is a large, diverse developing country, its experience will generate spill-over learnings for other nations, making it a central hub for South-South knowledge exchange [211-218].
MAJOR DISCUSSION POINT
Role of private sector, innovation and global collaboration
AGREED WITH
Vikas Chandra Rastogi
Agreements
Agreement Points
AI is positioned as a strategic priority to secure food, nutrition, farmer incomes and economic stability in India
Speakers: Devendra Fadnavis, Vikas Chandra Rastogi
AI is essential to secure food, nutrition, farmer incomes and economic stability in India
Maharashtra’s Maha Agri AI Policy 2025‑2029 operationalises AI across advisory, market, traceability and research services
Both speakers stress that AI is a critical tool for strengthening India’s food systems, improving farmer livelihoods and underpinning economic stability, and they present concrete policy and platform initiatives that embed AI at scale [41-48][52-53][20-22][23-26].
POLICY CONTEXT (KNOWLEDGE BASE)
The World Bank emphasizes AI’s role in India’s food security and calls for collaborative ecosystems where government provides foundational infrastructure while private innovation drives applications [S35]; India’s AI strategy highlights leveraging its digital public infrastructure (UPI, digital ID) to achieve strategic AI positioning for inclusive growth [S52]; recent discussions on transforming agriculture underscore AI’s potential for resilient, inclusive food systems [S51].
AI systems must be built on trusted data, ethical governance, transparency, auditability and public accountability
Speakers: Devendra Fadnavis, Dr. Soumya Swaminathan, Shankar Maruwada
AI must be built on trusted data, ethical governance, transparency, auditability and public accountability
Continuous evaluation, bias checks and feedback loops are required to keep AI services reliable and farmer‑centric
Open, interoperable DPI models provide the governance framework to prevent data exploitation and ensure scalability
All three emphasize that without trustworthy data and robust ethical frameworks AI cannot scale; they call for transparent, auditable systems, ongoing scientific evaluation and open-interoperable DPI to safeguard against misuse [53-56][239-247][304-307].
POLICY CONTEXT (KNOWLEDGE BASE)
UN Security Council resolutions stress transparency, explainability and accountability as core to trustworthy AI [S31]; the AI policy roadmap lists accountability, transparency and ethical governance among its core principles [S33]; UNESCO’s AI ethics recommendations call for human oversight, non-discrimination and public accountability [S41]; the UN-DP report on critical AI infrastructure underlines the need for trusted data and auditability [S34].
Open, interoperable data exchange and digital public infrastructure are essential backbones for scaling AI in agriculture
Speakers: Vikas Chandra Rastogi, Dr. Devesh Chaturvedi, Shankar Maruwada
Open, federated, consent‑driven data exchange (Maha AgEx) creates a “big picture” for AI models and predictive governance
Farmer IDs, digital crop surveys and a unified platform (Bharatvistar/Mahavistar) eliminate “digital red‑tapism” and enable personalized, multilingual advisories
Interoperable, open‑standard networks (e.g., Beacon protocol) are the backbone for scaling AI across sectors
The speakers converge on the need for open, consent-based data sharing architectures, unified farmer-centric platforms and open-standard networks to provide the comprehensive data foundation required for AI-driven predictive governance and large-scale deployment [26-27][66-67][131-138][140-148][305-306][304-307].
POLICY CONTEXT (KNOWLEDGE BASE)
AI as critical infrastructure requires open, interoperable data and sovereign control to be trustworthy [S34]; the World Bank notes that government-provided digital public infrastructure is a prerequisite for scaling AI-driven agricultural services [S35]; India’s strategic AI positioning leverages its existing digital public infrastructure to support AI rollout in agriculture [S52]; OECD’s flexible governance toolkit highlights interoperable data exchange as a key enabler for trustworthy AI deployment [S45].
Ensuring gender equity and women’s inclusion in AI‑enabled agricultural services
Speakers: Devendra Fadnavis, Vikas Chandra Rastogi, Dr. Soumya Swaminathan
Maharashtra invites venture capital, impact investors, multilateral banks and philanthropic foundations to fund and scale agri‑tech startups
Partnerships with MSSRF aim to embed women’s rights and nutritional security into AI‑enabled agricultural systems
Women’s land‑ownership gaps risk excluding them from AI‑driven services; data collection must deliberately capture women’s information
AI solutions should reduce women’s drudgery, be co‑designed with women, and keep humans in the loop for safety and employment
All three highlight the importance of gender-focused policies: Fadnavis calls for investment with gender equity as a mantra, Rastogi notes collaboration with MSSRF to embed women’s rights, and Swaminathan warns that land-ownership gaps could exclude women unless data systems are designed to capture their information and reduce their workload [83-86][257-264][227-236][242-247].
POLICY CONTEXT (KNOWLEDGE BASE)
IGF 2023 highlighted AI-driven gender inclusivity measures, urging stakeholder engagement to create policies that address diverse needs and promote equality [S42]; the AI policy roadmap stresses inclusivity and diversity as foundational principles for equitable AI outcomes [S33]; UNESCO’s AI ethics framework calls for non-discrimination and respect for cultural diversity, supporting gender-focused interventions [S41].
International conferences (AI Impact Summit, AI for Agri 2026) as platforms for South‑South knowledge exchange and scaling solutions
Speakers: Vikas Chandra Rastogi, Johannes Zutt
AI Impact Summit and AI for Agri 2026 conference will catalyse South‑South knowledge exchange and showcase scalable solutions
Both speakers point to the AI Impact Summit and the upcoming AI for Agri 2026 conference as key venues for sharing best practices, fostering South-South collaboration and demonstrating scalable AI applications in agriculture [207-210][211-218].
Similar Viewpoints
Both underline the pivotal role of private‑sector innovation and the need for financing mechanisms that enable a multitude of AI solutions to be developed, tested and scaled for farmers [86-89][194-199][202-204].
Speakers: Devendra Fadnavis, Johannes Zutt
Maharashtra invites venture capital, impact investors, multilateral banks and philanthropic foundations to fund and scale agri‑tech startups
Private‑sector creativity fuels diverse AI applications (e.g., pest detection, water‑use advice) that can be “crowd‑in” through supportive policies
Both stress that reliable, trustworthy data foundations (farmer IDs, unified platforms) are essential for responsible AI deployment and for achieving scale in agricultural services [53-56][131-138][140-148].
Speakers: Devendra Fadnavis, Dr. Devesh Chaturvedi
AI must be built on trusted data, ethical governance, transparency, auditability and public accountability
Farmer IDs, digital crop surveys and a unified platform (Bharatvistar/Mahavistar) eliminate “digital red‑tapism” and enable personalized, multilingual advisories
Both advocate for open, interoperable network architectures as the technical backbone that enables AI models to access comprehensive data and scale across regions and sectors [26-27][305-306][304-307].
Speakers: Vikas Chandra Rastogi, Shankar Maruwada
Open, federated, consent‑driven data exchange (Maha AgEx) creates a “big picture” for AI models and predictive governance
Interoperable, open‑standard networks (e.g., Beacon protocol) are the backbone for scaling AI across sectors
Unexpected Consensus
Human‑in‑the‑loop oversight and rigorous evaluation of AI tools
Speakers: Devendra Fadnavis, Dr. Soumya Swaminathan
AI must be built on trusted data, ethical governance, transparency, auditability and public accountability
Continuous evaluation, bias checks and feedback loops are required to keep AI services reliable and farmer‑centric
A senior political leader and a medical researcher converge on the need for scientific, human-centered oversight of AI applications, an alignment that bridges policy and health-science perspectives and was not explicitly anticipated [53-56][239-247].
POLICY CONTEXT (KNOWLEDGE BASE)
Research on human agency warns that meaningful human-in-the-loop oversight can be compromised under pressure, underscoring the need for robust evaluation frameworks [S38]; best-practice guidelines stress rigorous human-in-the-loop evaluation before granting model approvals [S39]; UNESCO’s principles explicitly require human oversight to safeguard ethical AI deployment [S41].
Overall Assessment

The discussion shows strong convergence across political, administrative, research and private‑sector participants on four core pillars: (1) AI as essential for food security and farmer prosperity; (2) the necessity of trusted data, ethical governance and continuous evaluation; (3) the centrality of open, interoperable digital public infrastructure and data exchange; (4) gender‑inclusive design and South‑South knowledge sharing through global forums.

High consensus – the overlapping arguments indicate a shared vision that can translate into coordinated policy actions, investment strategies and collaborative research, thereby strengthening the momentum for responsible, inclusive AI deployment in agriculture.

Differences
Different Viewpoints
Approach to ensuring trustworthy AI governance versus fostering rapid private‑sector innovation
Speakers: Devendra Fadnavis, Johannes Zutt
AI must be built on trusted data, ethical governance, transparency, auditability and public accountability
Private‑sector creativity fuels diverse AI applications (e.g., pest detection, water‑use advice) that can be “crowd‑in” through supportive policies
Fadnavis stresses that AI cannot scale without trusted data and strong ethical governance, calling for transparent, auditable systems before large-scale deployment [53-56]. Zutt, while acknowledging the government’s role in governance, emphasizes the need to quickly crowd-in private-sector innovators and provide financing and agile support, focusing on rapid experimentation and scaling rather than detailed pre-deployment governance frameworks [184-190][194-199]. This reflects a tension between a precautionary, governance-first approach and a more innovation-driven, market-led approach.
POLICY CONTEXT (KNOWLEDGE BASE)
IGF 2024 debates highlighted the tension between fostering large-scale AI innovation and maintaining ethical governance, calling for shared, adaptive oversight models [S44]; OECD’s toolkit advocates flexible, context-specific governance rather than one-size-fits-all, reflecting this trade-off [S45]; UNDP’s 2025 report warns that innovation incentives often prioritize speed over transparency and inclusion, illustrating the governance-first versus rapid-deployment dilemma [S46]; parliamentary versus private-sector leadership discussions further expose divergent views on who should steer AI governance [S49].
How to guarantee women farmers’ inclusion in AI‑driven services
Speakers: Dr. Soumya Swaminathan, Vikas Chandra Rastogi
Women’s land‑ownership gaps risk excluding them from AI‑driven services; data collection must deliberately capture women’s information
Partnerships with MSSRF aim to embed women’s rights and nutritional security into AI‑enabled agricultural systems
Swaminathan points out that because most land titles remain in men’s names, women are likely to be omitted from AI services unless data systems are deliberately designed to capture women’s ownership and activity data [227-230]. Rastogi mentions collaboration with the M.S. Swaminathan Research Foundation to bring women’s rights to the centre of AI systems but does not specify mechanisms for addressing the land-ownership data gap, focusing instead on broader partnership goals and nutritional security [257-264]. The disagreement lies in the level of concrete data-capture measures required versus broader partnership commitments.
POLICY CONTEXT (KNOWLEDGE BASE)
The IGF 2023 workshop on gender inclusivity outlines concrete policy levers-such as participatory design and targeted outreach-to ensure AI services reach women farmers [S42]; broader AI policy frameworks stress inclusivity and diversity as essential for equitable outcomes [S33]; UNESCO’s ethics guidelines reinforce the need for non-discriminatory design, directly relevant to women’s agricultural participation [S41].
Necessity of systematic, scientific evaluation of AI tools
Speakers: Dr. Soumya Swaminathan, Devendra Fadnavis
Continuous evaluation, bias checks and feedback loops are required to keep AI services reliable and farmer‑centric
AI is not magic. As Honorable PM said in his inaugural session, AI must be built on trusted data, ethical governance, and public accountability
Swaminathan calls for ongoing, clinical-trial-like evaluation of AI applications, including bias detection, risk assessment, and farmer feedback loops to ensure reliability and inclusivity [239-247]. Fadnavis emphasizes the need for trusted data and ethical governance but does not articulate a structured, continuous evaluation regime, focusing instead on scaling and investment [53-56]. This creates a divergence on whether rigorous, systematic evaluation should be a core pillar of AI deployment.
POLICY CONTEXT (KNOWLEDGE BASE)
Human-in-the-loop evaluation must be conducted rigorously to avoid premature certification of AI behavior, as highlighted in recent AI evaluation best-practice discussions [S39]; literature on overreliance warns that without systematic scientific assessment, AI outputs can be biased or incomplete, especially in complex domains [S40]; the challenges of maintaining meaningful human judgment further support the call for structured evaluation protocols [S38].
Unexpected Differences
Emphasis on large‑scale private investment versus cautious, governance‑first rollout
Speakers: Devendra Fadnavis, Johannes Zutt
Maharashtra invites venture capital, impact investors, multilateral banks and philanthropic foundations to fund and scale agri‑tech startups
Private‑sector creativity fuels diverse AI applications (e.g., pest detection, water‑use advice) that can be “crowd‑in” through supportive policies
While both speakers support private-sector involvement, Fadnavis frames it within a structured, policy-driven investment drive emphasizing accountability and large-scale funding [86-89], whereas Zutt advocates a more flexible, rapid “crowd-in” of private innovators with less emphasis on pre-defined governance structures [194-199]. The contrast between a formal, investment-heavy approach and a more agile, experimental partnership model was not anticipated given the overall consensus on public-private collaboration.
POLICY CONTEXT (KNOWLEDGE BASE)
UNDP’s 2025 report notes that private-sector driven AI investments often sideline transparency, fairness and social inclusion, underscoring the need for governance-first approaches [S46]; the International Chamber of Commerce stresses that while private investment is vital, public policies must encourage responsible deployment rather than deter it [S47]; blended financing models advocated for African AI ecosystems recommend combining private compute resources with public early-stage funding to balance speed and oversight [S50]; IGF discussions on balancing innovation and oversight echo this tension [S44].
Overall Assessment

The panel largely shares a common vision that AI is crucial for India’s food security, climate resilience, and farmer livelihoods, and that open, interoperable digital public infrastructure is the foundation for scaling. However, clear points of contention emerge around (i) the balance between strict, governance‑first frameworks and rapid, private‑sector‑driven innovation; (ii) the concrete mechanisms for ensuring women’s inclusion, especially data capture of land ownership; and (iii) the extent to which systematic, scientific evaluation should be embedded in AI deployment.

Moderate disagreement – the divergences are primarily about implementation pathways rather than fundamental goals. These differences could affect the speed, inclusivity, and trustworthiness of AI roll‑out in agriculture, requiring careful negotiation to align governance standards with innovation incentives and gender‑inclusive data policies.

Partial Agreements
All speakers concur that AI is a strategic priority for transforming Indian agriculture and that open, interoperable digital infrastructure is essential for scaling solutions. They differ mainly in emphasis—policy design (Rastogi), governance (Fadnavis), central‑state coordination (Chaturvedi), private‑sector innovation (Zutt), and technical standards (Maruwada)—but share the common goal of deploying AI at population scale [41-48][20-22][140-148][194-199][305-306].
Speakers: Devendra Fadnavis, Vikas Chandra Rastogi, Dr. Devesh Chaturvedi, Johannes Zutt, Shankar Maruwada
AI is essential to secure food, nutrition, farmer incomes and economic stability in India
Maharashtra’s Maha Agri AI Policy 2025‑2029 operationalises AI across advisory, market, traceability and research services
Central‑state collaboration must align AI deployments with national architecture while allowing local innovation
Private‑sector creativity fuels diverse AI applications (e.g., pest detection, water‑use advice) that can be “crowd‑in” through supportive policies
Interoperable, open‑standard networks (e.g., Beacon protocol) are the backbone for scaling AI across sectors
All three agree on the importance of gender inclusion and the need for open, inclusive data systems, though Swaminathan stresses specific data‑capture mechanisms, Rastogi highlights partnership initiatives, and Maruwada focuses on the broader DPI governance framework to protect against exploitation [227-230][257-264][304-307].
Speakers: Dr. Soumya Swaminathan, Vikas Chandra Rastogi, Shankar Maruwada
Women’s land‑ownership gaps risk excluding them from AI‑driven services; data collection must deliberately capture women’s information
Partnerships with MSSRF aim to embed women’s rights and nutritional security into AI‑enabled agricultural systems
Open, interoperable DPI models provide the governance framework to prevent data exploitation and ensure scalability
Takeaways
Key takeaways
AI is positioned as a strategic priority for achieving food security, nutrition, farmer income stability and climate resilience in India.
Maharashtra’s Maha Agri AI Policy 2025‑2029 operationalises AI across advisory services, market information, traceability, research and capacity building, moving from pilots to full‑scale deployments.
A unified digital public infrastructure—farmer IDs, digital crop surveys, and the Bharatvistar/Mahavistar platform—eliminates fragmented “digital red‑tapism” and enables personalized, multilingual, consent‑driven advisories.
Open, federated and interoperable data exchange mechanisms (Maha AgEx, Beacon protocol) are essential to create a “big picture” for AI models and predictive governance.
Inclusion and gender equity are critical; women’s land‑ownership gaps and digital literacy barriers must be addressed, and AI solutions should be co‑designed with women farmers and keep humans in the loop.
Responsible AI deployment requires trusted data, ethical governance, transparency, auditability, and continuous bias and impact evaluation.
Private‑sector innovation, venture capital, multilateral financing and philanthropic support are needed to scale agri‑tech solutions, with Maharashtra inviting global partners and investors.
Global platforms such as the AI Impact Summit and AI for Agri 2026 conference are envisioned as catalysts for South‑South knowledge exchange and collaborative scaling of AI solutions.
Resolutions and action items
Scale Mahavistar to >2.5 million farmers, add additional regional languages (including tribal language Bili) and expand multilingual voice‑based advisory capabilities.
Deploy the Maha AgEx consent‑driven, federated data exchange to integrate diverse datasets (pest images, weather, market, soil health) for AI model training.
Complete rollout of Bharatvistar/Mahavistar predictive advisory services (weather, pest, market, scheme status) within the next 3‑6 months.
Accelerate farmer‑ID and digital crop‑survey saturation across states to underpin AI‑driven personalized services.
Co‑develop traceability DPI modules with the United States and other partners, making them open, replicable public‑infrastructure assets.
Launch a global call for AI use‑cases in agriculture (already done) and publish the compendium of successful deployments.
Invite venture capital, impact investors, development banks and philanthropic foundations to fund agri‑tech startups and capacity‑building programmes.
Partner with MSSRF to embed women’s rights, nutritional security and bio‑happiness considerations into AI‑enabled agricultural systems.
Establish continuous feedback loops, bias‑checking mechanisms and “human‑in‑the‑loop” governance structures for AI services.
Organise and promote participation in the AI for Agri 2026 conference (22‑23 Feb, Mumbai) to deepen global collaboration.
Unresolved issues
How to systematically capture and integrate women farmers’ land‑ownership and other gender‑disaggregated data into the national AI ecosystem.
Ensuring reliable connectivity and affordable smart‑phone/feature‑phone access for the most resource‑constrained farmers.
Detailed operational framework for data privacy, consent management and preventing “digital red‑tapism” at scale.
Specific mechanisms for ongoing bias detection, impact assessment and accountability of AI recommendations.
Sustainable financing models for long‑term maintenance and scaling of AI platforms beyond initial pilot funding.
Clear delineation of responsibilities and coordination mechanisms between central and state ministries for AI governance.
Strategies to balance rapid AI deployment with the need for rigorous scientific validation and field testing.
Suggested compromises
Adopt a hybrid model where AI augments, rather than replaces, traditional extension services—maintaining human expertise while leveraging AI speed and scale.
Implement consent‑driven, open‑standard data exchange (Maha AgEx) that respects farmer privacy while enabling interoperability across states and private providers.
Design AI platforms to work on both smartphones and basic feature phones, ensuring inclusion of low‑asset farmers.
Encourage private‑sector innovation (“let a thousand flowers bloom”) while using public DPI as a common backbone to avoid fragmented proprietary solutions.
Combine gender‑focused co‑design processes with broader system rollout to ensure women’s needs are addressed without delaying overall deployment.
Thought Provoking Comments
AI is not magic. As Honorable PM said in his inaugural session, AI must be built on trusted data, ethical governance, and public accountability. Without trust, scale will not happen.
It reframes AI from a hype‑driven technology to a public‑good that requires rigorous data stewardship and governance, setting a foundational principle for the entire dialogue.
This remark anchored the subsequent discussion on data trust, interoperability and governance. It prompted Dr. Devesh Chaturvedi to describe the problem of “digital red‑tapism” and led other panelists to stress transparency, auditability and ethical safeguards.
Speaker: Devendra Fadnavis
We felt that while we had initiated this process to ensure that the bureaucratic red‑tapism is removed, what we were moving towards was a sort of digital red‑tapism because within our ministry different schemes had different apps… The whole idea was that once we have this AI‑based system, we have a same platform for different applications and different advisories at a click of the button.
He identifies a concrete systemic bottleneck—fragmented digital services—and proposes a unified AI‑driven platform as the solution, turning a high‑level vision into an actionable design problem.
His explanation shifted the conversation from abstract benefits of AI to the practical need for a single, interoperable architecture. It gave context for the Maha AgEx data‑exchange initiative and reinforced the trust‑building theme introduced earlier.
Speaker: Dr. Devesh Chaturvedi
We can kind of let a thousand flowers bloom there and see what actually takes root… Just yesterday, I was learning about an application in Morocco developed by a tomato farmer who could take a picture of a plant and get the exact water requirement.
The metaphor of “a thousand flowers” encourages a pluralistic, market‑driven innovation ecosystem, while the concrete example shows how low‑cost AI can solve a pressing climate‑water problem.
This comment opened the floor to discussions on private‑sector participation, financing, and rapid prototyping. It influenced Shankar Maruwada’s later emphasis on open, shared rails that allow diverse applications to plug in.
Speaker: Johannes Zutt
Women in India, the minority of them who have their name on the land document, are often left out of publicly available data sets. We must think about how women’s data can be incorporated early, keep humans in the loop, and evaluate AI like clinical trials to avoid bias.
She links gender equity to data architecture and algorithmic bias, framing inclusion as a technical as well as a social requirement, and introduces the idea of rigorous, evidence‑based evaluation.
Her remarks redirected the dialogue toward gender‑focused design, prompting Vikas Rastogi to mention ongoing collaborations on women’s rights and reinforcing the inclusion pillar of the AI‑for‑Agriculture agenda.
Speaker: Dr. Soumya Swaminathan
What matters is the intent of the government of India which triggered the process which allowed Bharat Vistar to be launched… we deploy something minimum to start and then evolution, models get better, data gets better… Like the railways, we need open rails for AI.
He provides a clear architectural metaphor—open, interoperable “rails”—and advocates a minimum‑viable‑product approach, tying together past DPI successes with future AI scaling.
This analogy became a turning point, giving participants a concrete model for standardising AI ecosystems. It reinforced earlier calls for openness, guided the discussion on shared data standards, and culminated in his vision of 100 diffusion pathways by 2030.
Speaker: Shankar Maruwada
Because you have so many languages, so many different regions, so many different types of crops, figuring out how to make AI at the farm level work in India will automatically have a large number of spillover learnings for other countries.
He positions India as a global test‑bed, linking domestic AI deployment to South‑South knowledge exchange and emphasizing the international relevance of the Indian experience.
This comment broadened the scope of the panel from a state‑level initiative to a global learning platform, leading Vikas to ask about the role of the AI Impact Summit in fostering South‑South collaboration.
Speaker: Johannes Zutt
Overall Assessment

The discussion was steered by a handful of pivotal insights that moved it from a ceremonial launch to a substantive roadmap. Early emphasis on trust and governance set a normative baseline, which was then grounded by Dr. Chaturvedi’s diagnosis of fragmented digital services. The private‑sector’s creative potential was highlighted by Zutt’s ‘thousand flowers’ metaphor, while Swaminathan’s focus on gender‑inclusive data and human‑in‑the‑loop safeguards added depth to the equity dimension. Maruwada’s rail‑analogy and MVP approach supplied a concrete architectural vision, tying together openness, interoperability and scalability. Together, these comments reshaped the conversation, aligning stakeholders around four pillars—trust, openness, inclusion, and collaborative innovation—and framing India’s AI‑for‑agriculture effort as both a national priority and a model for global South‑South learning.

Follow-up Questions
How can we envision a central‑state collaboration framework for AI deployments that aligns with the national architecture while allowing states flexibility, and how can this collaboration be institutionalized to achieve population‑scale impact and data trust?
Coordinating AI across India requires clear governance structures that balance national standards with local innovation, ensuring interoperability, trust, and scalability of AI services for farmers.
Speaker: Vikas Chandra Rastogi (to Dr. Devesh Chaturvedi)
How can development partnerships adapt to remain agile and responsive, specifically structuring programs and technical assistance to provide just‑in‑time support to central and state governments for experimenting, iterating, and scaling AI solutions responsibly?
Timely, flexible financing and technical support are essential for governments to pilot, refine, and scale AI tools without bureaucratic delays, maximizing impact on agriculture and climate resilience.
Speaker: Vikas Chandra Rastogi (to Johannes Zutt)
How can platforms such as the AI Impact Summit and the AI for Agri global conference contribute to deeper global collaboration and South‑South knowledge exchange in AI‑driven agriculture?
International forums can facilitate sharing of best practices, lessons learned, and collaborative research, accelerating adoption of AI solutions across developing countries.
Speaker: Vikas Chandra Rastogi (to Johannes Zutt)
How can AI‑led agriculture transformation strengthen women’s agency, knowledge access, and climate resilience, and what institutional safeguards and design principles must be embedded to ensure equity and scientific integrity?
Ensuring gender‑inclusive AI systems prevents widening existing disparities and guarantees that women farmers benefit equally from technological advances.
Speaker: Vikas Chandra Rastogi (to Dr. Soumya Swaminathan)
How should we think about standardizing AI‑based ecosystems in the spirit of Digital Public Infrastructure, bringing DPI principles into AI, and what architecture and governance principles are required to ensure interoperability, trust, and sustainability across sectors such as agriculture?
Establishing open standards and governance frameworks is critical for scaling AI solutions safely and efficiently across diverse regions and stakeholders.
Speaker: Vikas Chandra Rastogi (to Shankar Maruwada)
What research is needed to develop high‑quality, robust datasets that can underpin reliable AI models for pest, disease, and climate advisories?
Accurate AI predictions depend on comprehensive, clean data; gaps or biases in data can lead to ineffective or harmful recommendations for farmers.
Speaker: Vikas Chandra Rastogi
How can women’s land‑ownership and tenancy data be systematically incorporated into AI platforms to avoid exclusion of women farmers from services?
Without proper representation of women’s land rights, AI algorithms may overlook a large segment of the farming population, perpetuating gender bias.
Speaker: Dr. Soumya Swaminathan
What methodologies should be employed to evaluate AI models for inherent biases, unintended risks, and side‑effects before large‑scale deployment?
Rigorous testing ensures that AI tools do not inadvertently disadvantage certain farmer groups or produce harmful agronomic advice.
Speaker: Dr. Soumya Swaminathan
What digital‑literacy programs and capacity‑building initiatives are required to enable low‑literacy and feature‑phone users to effectively access AI‑driven advisory services?
Adoption of AI tools hinges on farmers’ ability to understand and use them; tailored training can bridge the literacy gap.
Speaker: Johannes Zutt
How can affordable connectivity solutions be designed and delivered to reach smallholder farmers in remote or underserved areas?
Limited internet access restricts the reach of AI platforms; innovative connectivity models are needed to ensure equitable service delivery.
Speaker: Johannes Zutt
What processes should be established to ‘truth‑test’ AI‑generated advisories, ensuring scientific credibility and farmer trust?
Independent validation of AI recommendations protects farmers from inaccurate advice and builds confidence in digital services.
Speaker: Johannes Zutt
What indicators should be used to measure whether AI interventions reduce drudgery and workload for women farmers?
Quantifying gender‑specific impact helps assess whether AI tools are delivering on promises of empowerment and workload reduction.
Speaker: Dr. Soumya Swaminathan
How can ‘human‑in‑the‑loop’ frameworks be integrated into AI systems to preserve employment and provide oversight in agricultural decision‑making?
Balancing automation with human expertise safeguards jobs and ensures contextual judgment in complex farming scenarios.
Speaker: Dr. Soumya Swaminathan
What open, interoperable AI standards and protocols (akin to railway networks) are needed to enable seamless sharing of models, data, and services across states and private providers?
Standardized interfaces prevent siloed solutions and accelerate diffusion of innovative AI applications nationwide.
Speaker: Shankar Maruwada
How can feedback mechanisms within platforms like Mahavistar be enhanced to continuously incorporate farmer input and improve AI recommendations?
Iterative feedback loops ensure that AI services remain relevant, accurate, and responsive to evolving farmer needs.
Speaker: Vikas Chandra Rastogi
What evaluation frameworks are required to assess the impact of AI‑enabled traceability DPI modules on food safety, export competitiveness, and consumer trust?
Understanding the economic and safety outcomes of traceability systems informs policy and encourages broader adoption.
Speaker: Devendra Fadnavis (referenced in speech)

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

AI Automation in Telecom: Ensuring Accountability and Public Trust (India AI Impact Summit 2026)


Session at a glance: Summary, keypoints, and speakers overview

Summary

The panel examined how AI-driven operations can be leveraged to build and preserve customer trust in telecom services, noting that AI already shapes outcomes such as outage management, grievance handling and spam and fraud prevention, but must balance efficiency with false-positive reduction and privacy compliance [11-15][16-18]. Speakers emphasized the necessity of a “human-in-the-loop” to keep AI decisions from running unchecked [19-20].


Julian Gorman described the “scam economy” outpacing regulation and presented GSMA’s Cross-Sector Any-Scam Task Force, which has collected over 40 operator case studies and is piloting data-sharing proofs of concept across Asia-Pacific [26-38]. He warned that service-based rules can stifle future innovation and argued that regulation should focus on outcomes while fostering industry-wide collaboration, especially as India assumes a global telecom leadership role [39-53].


Dr Rajkumar Upadhyay showcased CDOT’s AI suite, including “Fraud Pro” that de-duplicates SIM registrations and has disconnected 70 lakh fraudulent connections, a digital intelligence platform used by banks for financial-risk scoring, and the crowdsourced “Chakshu”/“Sanchar Sati” app with over 18 million downloads that empowers users to report and block unwanted calls [65-71][73-102]. He also described an AI-federated disaster-management system that integrates alerts from IMD, CWC and other agencies, uses cell-broadcast for geo-targeted warnings, and has reduced cyclone-related deaths in Odisha to zero, a model now being promoted to the UN for global rollout [250-266][270-276].


Mathan Babu Kasilingam explained that his telecom operator follows privacy-by-design principles, holding ISO 27701 certification, and views AI adoption through pillars of responsibility, reliability, trust and privacy [113-119]. He noted early “quick-win” AI projects in fraud detection and network self-healing, but identified siloed data repositories and high infrastructure costs (80-90 % of AI spend) as major obstacles, prompting a shift toward a unified data platform and enterprise-wide LLMs [124-138][149-170][225-230]. Centralising data also simplifies compliance with India’s DPDP rules and enables scalable AI model refinement [184-187][219].


Syed Tausif Abbas introduced a voluntary AI-incident reporting schema with a taxonomy covering network components, severity and cause, which can help operators analyse failures and inform regulators [193-201]. Both Abbas and Kasilingam agreed that such standardized reporting can streamline model improvement and support cross-border data sharing, a point echoed by Gorman, who called for regulated sandboxes, open-gateway APIs and four pillars (network security, ecosystem exposure, customer services and digital skills) to combat scams collaboratively [201-207][285-306].


The discussion concluded with consensus that coordinated global effort, especially data sharing across borders and adherence to emerging standards, is essential for responsible AI deployment in telecoms [321-326][327].


Keypoints

Major discussion points


AI must be harnessed to protect customers and preserve trust.


Dr. Tangirala emphasized that AI-driven decisions affect users (outage management, grievance handling) and that clear, proactive communication is essential. He highlighted the tension between aggressive fraud-spam reduction and the need to avoid false positives while respecting privacy and regulations, and called for a “human-in-the-loop” safeguard [13-18][19-21].


Industry-wide, cross-sector collaboration is critical to combat scams.


Julian Gorman described the “scam economy” as moving faster than regulation and outlined GSMA’s Cross-Sector Any Scam Task Force, which has gathered more than 40 operator case studies and is developing data-sharing proofs of concept. He stressed that regulation should focus on outcomes and that global cooperation, especially India’s emerging leadership, is needed to keep innovation alive while fighting fraud [26-35][36-44][45-53].


AI-powered solutions from CDOT illustrate concrete use-cases for fraud, identity-deduplication, financial risk, and disaster management.


Dr. Rajkumar Upadhyay presented tools such as “Fraud Pro” (detecting duplicate SIM registrations), a digital intelligence platform for banking risk scores, the crowdsourced “Chakshu” app, and an AI-driven early-warning system that fuses meteorological data and cell-broadcast alerts to save lives [65-73][74-84][85-102][241-276].


Service providers are grappling with AI adoption, data silos, infrastructure costs, and privacy-by-design.


Mathan Babu Kasilingam explained the provider’s journey: certification under ISO 27701, the shift from isolated data lakes to a unified AI platform, the challenge of massive GPU/compute spend (≈80-90 % of AI cost), and the need to balance “quick-win” pilots with a consolidated, secure architecture that can support LLMs and self-healing networks [111-119][120-138][148-166][184-188][225-236].


A voluntary AI-incident-reporting standard (TEC) was introduced to enable systematic learning from AI failures.


Syed Tausif Abbas outlined the schema (30 fields covering incident type, severity, affected subsystem, etc.) and argued that, although not mandatory, the standard would give operators a common data set for root-cause analysis and help regulators shape AI policy [193-199][200-203][204-209][210-218].


Overall purpose / goal


The panel aimed to explore how AI can be responsibly deployed across telecom operations to build and sustain customer trust, by sharing best-practice use-cases, highlighting the need for collaborative standards and cross-border data sharing, and discussing practical challenges (privacy, cost, governance) that operators, regulators, and standards bodies must jointly address.


Tone of the discussion


– The session opened with a formal, courteous tone (introductions, opening remarks).


– As speakers presented concrete AI applications and the urgency of fraud/scam mitigation, the tone became focused and problem-oriented, yet remained constructive, emphasizing solutions and collaboration.


– When the voluntary standard and cost-optimization topics were raised, the tone shifted to analytical and forward-looking, acknowledging hurdles while expressing optimism about unified platforms and global cooperation.


– The closing remarks returned to an appreciative and collegial tone, thanking participants and reinforcing the collaborative spirit of the forum.


Overall, the conversation maintained a professional and solution-driven atmosphere, moving from introductory formality through detailed technical discussion to a concluding note of mutual respect and shared commitment.


Speakers

Dr. M P Tangirala


Area of expertise: AI in telecom, customer trust, responsible AI


Role / Title: Chair of the panel / Session moderator (as introduced to begin the session)


Anil Kumar Jha


Area of expertise: Telecom regulation, policy advisory


Role / Title: Principal Advisor, Telecom Regulatory Authority of India (TRAI) [S2]


Mr. Julian Gorman


Area of expertise: Telecom industry collaboration, anti-scam initiatives, AI-driven security


Role / Title: Representative, GSMA (Asia-Pacific) – expert in telecom collaboration and scam mitigation [S4]


Syed Tausif Abbas


Area of expertise: Telecom standards, AI incident reporting standards, policy formulation


Role / Title: Senior Deputy Director General (DDG) and Head, Telecom Engineering Centre (TEC); also holding additional charge as CMD, TCIL


Dr. Rajkumar Upadhyay


Area of expertise: Telecom AI applications, fraud detection, disaster-management systems, quantum communications


Role / Title: CEO, Centre for Development of Telematics (CDOT) [S8][S9]


Mathan Babu Kasilingam


Area of expertise: AI adoption in telecom service providers, privacy-by-design, AI infrastructure, LLM integration


Role / Title: Senior executive representing a telecom service provider (TSP) – speaker on AI adoption and privacy standards [S11]


Moderator


Area of expertise: Technology security, data privacy, cyber-security governance


Role / Title: Technology Security and Data Privacy Officer, Vodafone India Limited (over 20 years experience) [S12]


Additional speakers:


None. All participants in the discussion are covered by the speakers list above.


Full session report: Comprehensive analysis and detailed insights

The session opened with a formal introduction by the moderator, who identified the Technology Security and Data Privacy Officer of Vodafone India and senior DDG S.T. Abbas as panelists and invited the audience to focus on “balancing information, innovation with privacy and trust” before handing over to Dr M P Tangirala to chair the discussion [1-8].


Dr Tangirala set the tone by stressing that AI-driven decisions, whether in outage management, service continuity or grievance handling, directly affect customers and therefore require clear, proactive communication [13-21]. He warned that while AI can dramatically improve fraud and spam reduction, it must be deployed so as to minimise false positives and fully respect privacy and regulatory constraints [16-18]. A “human-in-the-loop” safeguard was presented as essential to prevent autonomous systems from making unchecked decisions [19-21], and he announced that the panel would hear from experts representing service providers, R&D and the DOT standard-setting body [22-25].


Julian Gorman described the “scam economy” as a threat that moves faster than regulation, noting that scammers are not bound by geography, law or funding limits [26-30]. To counter this, GSMA created the Cross-Sector Any Scam Task Force, a coalition of more than 39 organisations from 17 countries, including Meta, Google, TikTok and AWS, aimed at identifying and prioritising joint anti-scam initiatives [31-35]. He reported that, within a few months, over 40 operator case studies from the Asia-Pacific region had been collected, demonstrating that operators can develop and implement successful scam-mitigation strategies without waiting for regulation [36-38]. Gorman argued that service-based rules risk stifling future innovation and that regulation should focus on outcomes while fostering industry-wide collaboration, especially as India rises to a global telecom leadership role [39-53].


Dr Rajkumar Upadhyay presented CDOT’s AI portfolio. He began with “Fraud Pro”, a system that groups images and demographic data to detect duplicate SIM registrations, an approach that has already disconnected 7 million fraudulent connections [65-71][73-84]. He also described a digital intelligence platform used by banks to assign a risk score to transaction recipients, enabling financial-risk indicators that block high-risk transfers [81-85]. The crowdsourced “Chakshu”/“Sanchar Sati” app, downloaded by more than 18 million users and generating 25 crore website hits, empowers customers to report unwanted calls and automatically disconnects fraudulent numbers, with 7 million connections removed through user-initiated verification [85-102]. Upadhyay highlighted that the AI platform was used to locate dead bodies after the Balasore train accident, demonstrating AI’s utility beyond telecom services [80-85]. In the domain of public safety, CDOT has built an AI-federated disaster-management platform that aggregates alerts from IMD, CWC and other agencies, uses AI to generate geo-targeted cell-broadcast messages (a technology that sends alerts to all devices in a geographic area), and has reduced cyclone-related deaths in Odisha to zero, a model now being promoted to the UN for global early-warning deployment by 2027 [241-266][270-276].
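Fraud Pro's core idea as described here, grouping registrations whose photographs match and flagging cases where the same face appears under different names, can be sketched roughly as below. This is an illustrative reconstruction, not CDOT's actual implementation: the `Registration` fields, the precomputed face embeddings and the similarity threshold are all assumptions.

```python
from dataclasses import dataclass
from itertools import combinations
from math import sqrt

@dataclass
class Registration:
    sim_id: str
    name: str                # demographic field from the KYC record
    photo_embedding: tuple   # face embedding, assumed precomputed upstream

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sqrt(sum(x * x for x in a))
    nb = sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def flag_duplicates(regs, sim_threshold=0.9):
    """Group registrations whose photos near-match; flag groups where the
    same face is registered under different names (a SIM-factory pattern)."""
    # Union-find over pairs whose embeddings are near-identical.
    parent = {r.sim_id: r.sim_id for r in regs}
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for a, b in combinations(regs, 2):
        if cosine(a.photo_embedding, b.photo_embedding) >= sim_threshold:
            parent[find(a.sim_id)] = find(b.sim_id)
    groups = {}
    for r in regs:
        groups.setdefault(find(r.sim_id), []).append(r)
    # Same face, more than one distinct name: flag for review.
    return [members for members in groups.values()
            if len(members) > 1 and len({m.name.lower() for m in members}) > 1]
```

In a real deployment the embeddings would come from a face-recognition model and the demographic match would use fuzzy string comparison; the union-find grouping shown here is just one simple way to cluster near-duplicate photos.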


Mathan Babu Kasilingam outlined his operator’s AI governance framework. The company is certified under ISO 27701 (Personal Information Management System) and follows a “privacy-by-design” approach, positioning privacy as a core pillar of trust [113-119]. He traced AI adoption from the consumerisation of assistants such as Siri and Alexa to enterprise quick-wins, where AI is first applied to a single function (e.g., fraud detection) to demonstrate value [120-138]. He identified two major obstacles: fragmented data silos created by separate AI projects [148-166], and the high cost of infrastructure, with 80-90% of AI spend going to GPUs, storage and compute [225-230]. To address these, the operator is consolidating data into a single AI lake, exposing it through enterprise-wide APIs, and developing purpose-built large language models (LLMs) that can serve multiple business functions while simplifying DPDP (Digital Personal Data Protection) compliance [184-188][170-179][219-224]. Kasilingam noted that his organisation already records incidents within its ITIL-based processes and sees the TEC schema as complementary to existing practices [193-197].
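The consolidation Kasilingam describes also centralises compliance: instead of every siloed project re-implementing privacy checks, one data platform can enforce purpose limitation at a single gate. The sketch below is purely illustrative; `ConsentRegistry`, `UnifiedDataPlatform` and the purpose strings are invented names, not the operator's real architecture.

```python
# Minimal sketch: a unified data-access layer that centralises consent
# checks, instead of each siloed AI project handling privacy separately.
# All names and purposes here are illustrative assumptions.
class ConsentRegistry:
    def __init__(self):
        self._consents = {}  # subscriber_id -> set of allowed purposes

    def grant(self, subscriber_id, purpose):
        self._consents.setdefault(subscriber_id, set()).add(purpose)

    def allows(self, subscriber_id, purpose):
        return purpose in self._consents.get(subscriber_id, set())

class UnifiedDataPlatform:
    def __init__(self, registry):
        self._registry = registry
        self._lake = {}      # subscriber_id -> record dict

    def ingest(self, subscriber_id, record):
        self._lake[subscriber_id] = record

    def query(self, subscriber_id, purpose):
        """Every AI workload goes through this one gate, so a DPDP-style
        purpose-limitation rule is enforced in exactly one place."""
        if not self._registry.allows(subscriber_id, purpose):
            raise PermissionError(f"no consent for purpose '{purpose}'")
        return self._lake[subscriber_id]
```

The design point is that moving from many data lakes to one does not just cut infrastructure cost; it turns compliance from N scattered checks into a single enforceable chokepoint.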


Syed Tausif Abbas introduced a voluntary AI-incident-reporting schema devised by the TEC standard-setting body. The schema comprises 30 fields covering incident type, severity, affected subsystem, cause and impact (physical, environmental, psychological), with submitter details masked for privacy [193-201][202-207][208-218]. Although the standard is not mandatory, Abbas argued that a common taxonomy will enable operators to analyse failures, refine models and provide regulators with consistent data to shape AI policy [193-197][201-207]. He likened the initiative to the early computer-emergency-response teams, suggesting that a similar mechanism is now needed for AI [195-197].
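As a rough illustration of how such a schema might be represented, the record below carries a handful of fields in the spirit of the TEC taxonomy, with submitter details masked before sharing. The field names, values and the salted-hash masking are assumptions for illustration; the standard's actual 30 fields are not enumerated in the session.

```python
import hashlib
from dataclasses import dataclass, asdict

@dataclass
class AIIncidentReport:
    """Illustrative subset of an AI-incident record. The TEC schema's
    actual 30 fields are not reproduced here; these names are invented."""
    incident_type: str       # e.g. "false-positive-block"
    severity: str            # e.g. "low" / "medium" / "high"
    affected_subsystem: str  # e.g. "spam-filter-gateway"
    cause: str
    impact: str              # physical / environmental / psychological
    submitter: str           # raw identity, masked before sharing

def mask_submitter(report: AIIncidentReport) -> dict:
    """Return a shareable dict with the submitter replaced by a truncated
    salted hash, mirroring the schema's masking of submitter details."""
    record = asdict(report)
    digest = hashlib.sha256(("tec-salt:" + record["submitter"]).encode()).hexdigest()
    record["submitter"] = digest[:12]
    return record
```

A common, machine-readable record like this is what makes cross-operator root-cause analysis possible: each operator can contribute incidents without exposing who filed them.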


The subsequent Q&A reinforced several themes. Kasilingam emphasized the value of quick-win projects and discussed the cost pressures of AI infrastructure [225-236]. Upadhyay expanded on the disaster-management system’s scalability and its potential for international adoption [241-266]. Gorman reiterated the need for privacy-enhanced cross-industry data sharing via open-gateway APIs and regulatory sandboxes [285-295]. In response to Mr Jha’s query, he proposed two global steps (cross-border secure data sharing and collective action) and two India-specific steps (domestic anti-scam measures and knowledge export) [321-326].


The moderator concluded by thanking the panelists for their “vibrant discussion on responsible AI, standards, the repository and various government apps for enhancing consumer experience,” and announced the presentation of mementos and a group photograph, underscoring the collaborative spirit of the session [328-335].


Key take-aways


– Human-in-the-loop controls and proactive customer communication are essential for trustworthy AI-driven telecom services [13-21].


– Privacy-by-design, demonstrated by ISO 27701 certification, builds customer confidence [113-119].


– A voluntary AI-incident-reporting schema with a 30-field taxonomy can increase transparency and aid regulators [193-201][202-207].


– Cross-sector collaboration, exemplified by GSMA’s Any Scam Task Force and privacy-enhanced data-sharing sandboxes, is critical to combat scams [31-35][285-295].


– AI-based fraud tools such as “Fraud Pro”, “Sanchar Sati”, and millisecond-level call-blocking have demonstrably reduced fraudulent connections and scam calls [65-71][85-102][307-309].


– The AI-federated disaster-management system that fuses meteorological data and delivers geo-targeted cell-broadcast alerts has achieved zero-casualty outcomes in pilot regions [241-266][270-276].


Action items


– GSMA will continue expanding the Cross-Sector Any-Scam Task Force and its Southeast-Asia proof of concept [31-33].


– CDOT will promote its fraud-prevention suite, disaster-management platform, and the dead-body detection capability for international adoption [65-71][241-266][80-85].


– Kasilingam’s organisation will merge fragmented data repositories into a single AI infrastructure and expose services via enterprise APIs [152-166].


– Abbas will circulate the voluntary incident-reporting schema to encourage uptake [193-201].


– Gorman recommended establishing privacy-enhanced data-sharing sandboxes and regulatory support mechanisms [285-295].


Unresolved issues


– Detailed regulatory frameworks that enable privacy-preserving cross-industry data sharing [285-295].


– Strategies to address the shortage of skilled AI talent within telecoms [225-229].


– Methods to balance aggressive call-blocking with the guarantee of emergency call availability [307-309].


Session transcript: Complete transcript of the session
Moderator

Technology Security and Data Privacy Officer at Vodafone India Limited, with over 20 years of experience in the cyber security domain and governance structure. Rounding off the panel, we welcome Mr. S. T. Abbas, Senior DDG and Head, TEC, also holding additional charge as CMD, TCIL, with over 35 years of experience in telecom standards, certifications, spectrum management and network regulation. I would request all the panelists to please come forward for a quick photograph. Thank you, sirs. Please take your seats. Let’s engage deeply on how to balance information and innovation with privacy and trust. I now hand over to Dr. Tangirala Ji to begin the session. Thank you.

Dr. M P Tangirala

Chairman, member, Mr. Mitter, distinguished delegates, my fellow panelists, I welcome everyone to this second session. The clock is already ticking, so I will be brief in my opening remarks because I come between the audience and the distinguished panelists, which I don’t intend to do. The session title is Building Customer Trust Through AI-Driven Operations. The importance of trust was highlighted, among others, by Mr. Shantigram Jagannath as well, when he was speaking about AI through telecom networks and the at-scale problems that we could try and solve. Thank you. Now, while customers may not interact with AI models directly, they are affected by the outcomes of the decisions. And therefore, you know, whether it’s outage management, service continuity, grievance handling, you know, while efficiencies may improve, the responsibility for decision integrity ultimately remains with the telecom service providers.

And clear and proactive communication with the customers would become very important. And that is where, you know, there are impactful applications of AI in telecoms, in spam and fraud prevention, which a person had mentioned in his opening remarks about how 2.1 million numbers were disconnected using AI-based tracking. But the challenge is also that we need to reduce this spam while minimizing false positives, avoiding customer inconvenience, and fully respecting privacy and regulatory requirements. So that is always a big concern. Then, of course, this whole issue of the human in the loop or human in the mix. We need this automation to have an element of human control, so that the system does not run away with its own decisions.

So we have, for all these issues and more, we have eminent speakers here, both from the service providers, from the R&D, and as well as from the standard-setting body of DOT. I will request each of them to give their thoughts, and then maybe a few of you… Both of them have presentations to make. I’ll request them to keep it to about five minutes or so, so that we have time for further discussion. Thank you.

Mr. Julian Gorman

And the reason for it is in the scam economy, regulation cannot move as fast as scammers. Scammers are not bound by geography. They’re not bound by laws. They’re very technically capable and they’re very well funded. They have all the things that mobile operators would like to have. I think it’s important to understand that we have to focus on stimulating innovation. At GSMA, about 12 months ago, we formed a coalition called the Cross-Sector Any Scam Task Force. It involves more than 39 organisations from 17 countries, including the social media platforms, so Meta, Google, TikTok, AWS. And the aim was to drive or identify and prioritise initiatives and activities that we could do as an industry to help combat. Now, one of those activities was let’s gather what the industry is doing.

Now, in just the last couple of months, across Asia Pacific, we’ve gathered case studies of more than 40 instances where operators without regulation have developed, implemented, and used successfully some sort of strategy or service to combat scam. And I think that’s an indication, along with GSMA’s globally working with people like Virginia Tech and with our foundry, with our proof of concept around data sharing, is the industry is focused on this. And the danger, of course, of implementing service-based rules is they restrict innovation in the future. And so we really need to focus on outcomes when it comes to regulation. And I think we all universally subscribe to the fact we need to combat scam. We need to work together. And it’s not just the people in this room. We need to collaborate and work across the ecosystem to make that possible.

I think those principles actually also apply in the broader sort of sense of the term is how do we grow 5G, how do we make 5G meaningful to the whole economy, to all users. It’s about stimulating that ecosystem and making sure that they are using 5G and 4G and mobile broadband into meaningful solutions for the population. And the important thing also for India is India is rising not just economically but also in its position in the telecom world and the GSMA sort of global ecosystem is India is a real telecom superpower and it’s on the rise.

And that means actually it cannot just be worried about its domestic situation; actually, it has to embrace that statesman role to be a global leader. And so actually considering cross-border, how does India play its role in a global ecosystem are critical to actually the sustainability and growth of the global ecosystem of which India’s vision is dependent on. It cannot exist alone. And I think it’s important that when we focus on innovation and solving things like scam, it is as part of a global community. It’s not just a national community. And so the actions we take, the innovations we look to stimulate have to be part of that global solution. Thank you.

Dr. M P Tangirala

That was thought -provoking, some of the things that you said about collaborative innovation or innovation through collaboration. We will come to that in a bit when we go for the questions. So, with that now may I request Dr. Rajkumar Upadhyay, CEO of CDOT for his presentation and opening remarks.

Dr. Rajkumar Upadhyay

Respected Chairman, Mr. Lahoti, Mr. Mittal, Mr. Tangirala, fellow panelists, industry leaders, policy makers, experts, ladies and gentlemen, thank you for inviting me here. I think in the previous session there was talk about how do you optimize your network, how do you self-heal your network, how do you make correction in the network, so I’m not going to talk about that. Even though we also, as India, have developed our own 4G and 5G, we used, because we were the latecomers, quite a bit of AI in terms of predicting the faults, because a lot of logs are generated by various systems. So I’m not going to talk about that. I’m going to talk about… where is the PPT?

Where is the PPT? So I’m going to talk about some use cases which we have developed during the last few years. We are CDOT; we were established in 1984, and we had the legacy of developing the rural telecommunication. We work primarily in three to four areas: the mobile wireless; cyber security and information security, that is done by quantum; quantum and AI as a horizontal thing; and advanced telecom applications. But I will focus on these, our product line, and all of these products actually use AI, because AI is so pervasive. Without AI, you cannot function. So all these product lines, whether it is mobile, whether it is cyber security, information security, disaster management applications, are using AI in a big way.

So one of the key applications, key products what we have developed is Fraud Pro. What it does, it actually detects the fraudulent connections in the system. I think you may be aware of the cases of Jamtara, Mewat and all these SIM factories running. And these SIM factories were destroyed by this particular software. What it does, it groups all the images of the same person, because if you go and buy 500 SIMs using the same Aadhaar card or same driving license, this is what was happening. So it detects that, and it not only matches the images, it also matches the demographics, name, father’s name, and sees whether the photos are the same but the names are different. So using, I will come to the number.

I think some number was described in the beginning, that how many connections were disconnected using this software. So this is deduplication and finding the, and in fact we developed it for telecom; it is being used now, going to be used in driving license, passports, income tax deduplication, MNREGA deduplications. The second one, I think this, it mentions the AI. AI analysis, 86%. 7 crore mobile numbers. And it was very well used even to, you know, find out dead bodies in the Balasore train accident. The first use case of this particular platform was to identify the dead bodies. The second one is the financial risk indicator. I think you would have seen in newspapers RBI has mandated the banks to use the financial risk indicator.

What it does: if A is transferring money to B, the credentials of B are checked with the platform which we have developed, which we call the digital intelligence platform. And the platform returns a figure that this is a risky number, is medium risk or low risk. If it is a high-risk number, the bank will not let that transaction happen. And it has saved a lot of fraud cases currently. And all the banks actually are using this FRI, which is able to tell that the B number where the money is going is a, you know, dangerous number or a well-identified fraudulent number; the money is stopped. The next is the Chakshu. Chakshu is again a crowdsourcing platform

wherein if you get a fraudulent call, a promotional call, a fake KYC call or someone posing as police, you can report it, and using crowdsourcing we are able to disconnect the number and take action. This again uses our Sanchar Saathi app. Just to bring to the notice of the audience: rarely does a government app have 18 million-plus downloads, but this one does, and the hits on the Sanchar Saathi website are 25 crore. You very rarely see that. This shows the popularity of how customers are protected using these AI-based platforms. Then there are TAFCOP and CEIR here; I don't know how many of you have used them.

I would request those who have not used it: please use it. In Sanchar Saathi you enter your mobile number and it tells you all the connections under your name, using fuzzy AI and fuzzy logic. It doesn't ask for any other detail. We ask for that number only because we want to verify it is you, and an OTP is sent; otherwise no other details are asked. Just from that detail, we are able to find out how many numbers are registered to you. And just to bring to your notice, 70 lakh connections have been disconnected using this. People have disconnected numbers themselves, because it also lets you say: this is not my number, disconnect it. This was a big problem for us.

When we blocked the SIMs in the country, these fraudsters went outside and started pumping calls using Indian numbers: spoofed calls. The technology is available; using it, I can receive a call from my own number. We were getting 15 million such calls per day. This was a very complex system to build, because when the call hits the gateway, the system has to decide within milliseconds whether to let it go through or block it, and it has to be zero-error, because no genuine call should be blocked. After rigorous testing with all the operators, today we have totally neutralised this. Of course, they have found another way: they have taken SIMs in places like Cambodia, Indonesia and Myanmar and are calling from there. So again the AI-based system alerts us that these are numbers from that country, and we alert the governments of those countries.

Then there are AI-based security solutions, because cyber is another major area for all of us. Somebody mentioned that AI will carry out cyber attacks, and it's true: we see in our systems AI attacking the systems. Earlier it was humans; now it is fully AI, so you have to use AI to counter it. The cyber security solution we provide today is fully AI-based, so that it can coordinate between the various point solutions.

In disaster management we have also used AI. You may be aware that India has deployed an ITU CAP-based disaster management system as well as a 3GPP cell-broadcast-based disaster management system, implemented across India. We use AI here because IMD gives me a warning on rain, CWC gives me a warning on floods, weather reports come in, and we federate all these inputs using AI, with less than two minutes to act. I won't go through the NMS in detail; of course we use AI there to predict when the network is likely to go down. This is actually implemented in BharatNet 1 and 2: it tells you that this link is likely to fail, or that this router or node is misbehaving.

That was my last slide. In a nutshell, a lot of AI applications are needed on the customer side to protect customers. India has made good progress in reducing frauds and fraudulent connections, thereby safeguarding customers, and we will be very happy to take these technologies to any part of the world, given that they are implemented at India scale. Thank you so much.
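The millisecond gateway decision described above reduces, at its simplest, to one conservative rule: a call arriving over an international gateway should not carry a domestic caller ID. A minimal sketch follows; the deployed system's actual logic, including handling of legitimate roaming traffic, is far more rigorous than this:

```python
def is_spoofed_inbound(cli: str, via_international_gateway: bool) -> bool:
    """Flag an inbound call whose caller ID claims to be an Indian number
    (+91 plus 10 subscriber digits) yet arrives over an international
    gateway. Illustration only: real deployments must exempt genuine
    roaming subscribers and apply operator-verified rules."""
    digits = cli.lstrip("+").replace(" ", "").replace("-", "")
    looks_indian = digits.startswith("91") and len(digits) == 12
    return via_international_gateway and looks_indian
```

The zero-error requirement he mentions is why the deployed rule set had to be tested exhaustively with every operator before blocking anything automatically.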

Dr. M P Tangirala

Thank you, Dr. Upadhyay; that was very interesting, a flavour of the kind of R&D that has been done and the apps that have been developed. We now move to someone on the panel representing the service providers, Mr. Mathan Babu Kasilingam. You carry a lot of the burden of customer expectations on your shoulders, so do tell us your thoughts on the topic of today's session. Thank you.

Mathan Babu Kasilingam

So, a few things that we have done as a service provider; the majority of the topics, from fraud to cyber security, have already been touched upon. I want to speak about the role of AI in establishing trust. Our entire ecosystem, the telecom ecosystem, relies primarily on customer trust. To ensure we give our customers a trusted journey while adopting AI, there are core pillars we have followed: any AI adoption should have responsibility, reliability, trust and privacy built in to deliver that. So as a TSP, when we embarked on the journey of AI, that was the first and foremost core element we took into consideration.

We are one of the TSPs in the country that have been on the privacy journey for the past five-plus years now. We are fully certified on ISO 27701 (PIMS), and we are the only TSP in the country that has governed privacy by design and certified ourselves against that as well. That is to ensure trust is given back to the customer. Now I will come back to the journey of AI adoption. The first thing that happened is the consumerisation of AI. AI has been part and parcel of our lives since all of us learned about Siri and Alexa; day to day, at home, we have been living with AI for many, many years. So the consumerisation of AI happened long ago.

What happened in enterprises is that the pressure to adopt AI arrived. The first thing we did was take AI as applied in the consumer space and try to adopt it in enterprises as well. Obviously, it has its own benefit: it gives quick wins. You get an early win you can see by deploying AI in your setup. So how enterprises embarked on that journey: you pick one department, one function, one key problem you are faced with, deploy AI on it, and you see results. So we saw all of these examples. Fraud is a serious problem for the entire country as a whole.

What can we do? Can we leverage AI? AI is capable of giving me a million eyes and a million hands in place of a single human operator, so the power of AI came to our aid, and today we are able to identify fraud. Sir also briefly touched upon cyber security. As national critical infrastructure, TSPs today are pressed with a serious volume of attacks. India in the past year has hosted many mega events, whether it is the G20 or the Maha Kumbh, plus the geopolitical tensions we went through, and now we are hosting the AI summit. So national critical infrastructures like TSPs are faced with an increased volume of cyber attacks; if I put a number on it, it would not be 10 times but as many multiples as I could count. That is the quantum of increase in cyber attacks we are seeing now, and in the cyber field we are also limited in the number of professionals we have.

So the power of AI is not just for the attackers; as defenders, we too have started leveraging it to combat them. Those are quick wins. Network operations: with the advent of 5G we wanted self-operating, self-healing networks. So in the various smaller areas where AI could be embarked upon to realise very quick business value, enterprises started adopting it. That's the first part: we wanted the quick win and we saw the quick win. The challenge that came with it is that we started seeing things in a piecemeal approach. The data we were working on was almost the same; you gather intelligence information from the same network elements and nodes.

But we started to look at it through different lenses. All I needed to do was look through a different lens, but instead I started creating individual siloed repositories of data. So if you look at corporates today that have embarked on AI, you will see many isolated silos of data created, because each team wants its own lens, and instead of viewing shared data through that lens, they created a totally isolated copy of the data. The second thing that happened is a mammoth amount of infrastructure. Anybody who touches AI today talks about GPUs, and the humongous power required to run them, and so on. So again, at the enterprise level, it is siloed data and siloed infrastructure that have been built up.

So the journey we are on today: we had the quick wins, we have taken the first few steps, but we are now re-looking at it from a different standpoint. We have stepped back. Is there data deduplication that can be done today? In lieu of the 20 or 30 silos I have created, do I want to create one single repository of this data? Thereby the security element also becomes easier: if everything is in silos, I have to secure everywhere; bring it into one area and I have the ability to secure it well. Can I leverage a common platform infrastructure, the AI infrastructure required, to put the data in and then do this work?

We are doing that. Individual businesses across a variety of functions have taken up their own purpose-built LLMs. You will have an HR function whose provider, say SAP, is primarily driven for HR, with the surrounding AI systems built on top of it. There will be a self-healing network, where the network provider builds an AI-driven system. So we are now stepping back to see whether we can build a comprehensive central LLM that will still deliver the purposes each function is looking for. At Vi, the premise is: core infrastructure, put the data in comprehensively, and expose it through an interconnected enterprise API architecture, so that businesses and users do not have to talk to the data directly.

They talk through the enterprise model, touch the AI infrastructure, and reach back to the data for various reasons: it could be to serve my service provider, to serve my customers, or for my customer support, bridging them. That's the platform journey we are on. With this consolidation, and privacy by design as I mentioned, we are able to make DPDP compliance inclusive, which means minimising the data: with the data in one area, we are able to minimise it as appropriate. That's what I wanted to share. Thank you.
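The pattern he outlines, one governed data layer that functions reach only through an API enforcing purpose-based minimisation, might be sketched like this. The field policy and function names are illustrative assumptions, not Vi's actual architecture:

```python
class EnterpriseDataAPI:
    """Facade over a central data store: callers never query the store
    directly, and each function receives only the minimised fields its
    declared purpose allows (policy below is hypothetical)."""

    FIELD_POLICY = {
        "customer_support": {"name", "plan"},
        "fraud_analytics": {"msisdn_hash", "call_volume"},
    }

    def __init__(self, store):
        self._store = store  # dict: record_id -> dict of fields

    def query(self, caller_function: str, record_id: str) -> dict:
        allowed = self.FIELD_POLICY.get(caller_function)
        if allowed is None:
            raise PermissionError(f"unknown function: {caller_function}")
        record = self._store.get(record_id, {})
        # Data minimisation: strip everything the caller is not entitled to.
        return {k: v for k, v in record.items() if k in allowed}
```

The same gate is a natural place to log access for DPDP-style compliance, since every read passes through one choke point.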

Dr. M P Tangirala

Thank you. Fascinating. Now we come to Mr. Abbas, the Senior DDG from TEC, the standards-setting body. He has promised that he will make a different presentation. So over to you, Mr. Abbas.

Syed Tausif Abbas

the name of the application, the technologies used, the purpose, and so on; then what was the impact or harm of the incident: physical harm, environmental, property or psychological. These things also form part of the 30 key fields in which input is to be given for the schema. Some of the information is to be masked later on: the name of the submitter, email and other submitter information will be redacted. Similarly, the taxonomy, as I said earlier, classifies the incident into different categories depending on the incident type, with subcategories such as network disruption, service quality, outage, security breach or AI mismanagement. Then the affected system: whether the core is affected, or the radio access network, the edge, IoT components or physical infrastructure, or any user-facing application. Then the incident severity, whether it is critical, high, moderate or low, will also be recorded, along with the cause of failure if it is known to the user; otherwise the deployer or service provider has to enter what the cause of the failure was. Basically, this database will also give input to the service providers themselves: they can examine and analyse it and then realign their AI applications so that these incidents don't recur in future. It is a gradual self-improvement of their own AI systems, which will then be error-free and give the best output. The standard has been made only for this; it is not going to prescribe any mitigation mechanism.

That is to be decided by the deployer who has deployed the AI application, and it is not mandatory. Just as a parallel: when computer systems first came, initially there was not much, but when incidents started, the Computer Emergency Response Team was proposed, and it started collecting data related to computer incidents. Similarly, since the AI era has already begun, we should have this mechanism in place so that an AI incident reporting database is also available. Thank you so much.
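A toy version of such a schema record, with the submitter-masking step Mr. Abbas mentions, might look as follows. The real standard defines around 30 fields; the names here are illustrative, not the standard's:

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class AIIncidentRecord:
    """A few illustrative fields from an AI incident reporting schema."""
    application: str
    affected_system: str   # e.g. "core", "RAN", "edge", "IoT"
    incident_type: str     # e.g. "outage", "security breach"
    severity: str          # "critical" | "high" | "moderate" | "low"
    submitter_name: str
    submitter_email: str

def redact(record: AIIncidentRecord) -> AIIncidentRecord:
    """Mask submitter-identifying fields before the record is shared,
    returning a new record and leaving the original intact."""
    return replace(record,
                   submitter_name="[REDACTED]",
                   submitter_email="[REDACTED]")
```

Keeping the record immutable and redacting into a copy makes it harder to accidentally publish the unmasked original.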

Dr. M P Tangirala

Thank you so much, Mr. Abbas, for presenting what is arguably, and congratulations, the world's first standard of this kind to be put out. Since we are fresh off your presentation, I will start with a question about what you have just presented. You said it is not mandatory but voluntary; of course we will see where that journey goes, as you said about CERT coming in after computers. But can you tell us a little more about what value it offers to the telecom service providers if they voluntarily adopt this standard?

Syed Tausif Abbas

Telecom service providers have already started using AI applications in many areas: network optimisation, services to users, orchestration of resources. So if any incident that gives an unintended outcome is recorded and reported, it will be in the best interest of the service provider, because those incidents can be analysed and rectified so that they do not recur in the future. In this way it can be best utilised by the service provider. And since the structure of the schema and the taxonomy are both given, every service provider will compile data in the same structure, which will help the regulator and policy makers decide how to go about AI policy, based on the inputs we get from those incidents.

Dr. M P Tangirala

So therefore, Mr. Kasilingam, do you think voluntary adoption of this standard offers any benefit to you from the side of a service provider?

Mathan Babu Kasilingam

I think, as sir rightly mentioned, incident recording is not a new phenomenon, at least for people who have been in the IT industry. Recording cyber-specific incidents has also started happening, tied back to the same ITIL framework that has historically been followed. Now AI is yet another tool that ends up creating outcomes, and the outcome could be erroneous; it could be an event, an incident, and bias could be one of the situations that arise. So as TSPs, while we have individually started doing this internally as we have adopted AI, these are recorded events. But one way the framework TEC has put across helps is that it can be streamlined in a manner that the rest of the industry can also refer to.

Because today there are no standalone companies, right? Every company is in the digital and IT space; they just do the work of their own function. If you ask a bank, the bank will tell you it is an IT company in the service of doing banking. That is how it has changed. So IT plays a crucial role and AI will be a supporting arm in that. This record keeping will give us the ability to scale our AI and models appropriately. With India having already announced three homegrown LLMs on the way, developed here, a platform like this will possibly help us manage and then refine our models well.

Dr. M P Tangirala

So you mentioned how enterprises are becoming digital first, and you also spoke in your initial remarks about AI for enterprises. How do you look at controlling the costs? You did deal with the infra part, but what about the overall cost of AI for enterprises? Any thoughts on that?

Mathan Babu Kasilingam

Currently, a significant amount is still being incurred. The larger chunk of cost optimisation comes from the infrastructure as a whole: about 80-90% of the cost of AI goes primarily on the infra itself, both in storage and in compute. The rest comes in skills. While we definitely showcase to the world the humongous talent being built in the AI area, for an enterprise, having enough skilled engineers to build on AI is still a work in progress. So I think in this journey, we are now looking at AI to come to the aid of AI.

We were in conversation with one of the AI-driven companies yesterday, and the way he put it to us: earlier the total employee base was 10,000; now, with refinement and optimisation by incorporating AI, there has been a reduction in the employee base. But if you look at the people operating the AI, that number has gone from 30 to 3,000. So you cut down here and increase over there. We were trying to tell them that the true power of AI is actually in AI operating without constant human touch, so reducing the human effort by upskilling people appropriately is an important element for us. Thank you.

Dr. M P Tangirala

I'll come to you, Dr. Upadhyay. I know I cut you off, or rather gave you a time pause, earlier. Could you tell us a little more about what you are doing with respect to disaster management, the application that you spoke about?

Dr. Rajkumar Upadhyay

Disaster, yeah. So disaster management: as you know, how did it used to happen earlier? Suppose there is a cyclone in Odisha. A mail would go from IMD to the chief secretary. The chief secretary would write to the district collector. The district collector would, in his best way, try to warn people before the cyclone actually came. And we used to lose thousands of lives, and property. Today, using AI and sensors, the system we have built is one unified platform where all the alert-generating agencies, IMD, CWC, FRI, DGSE and so on, are connected automatically through APIs. All the telecom operators are connected. All the alert dissemination agencies, like the SDMAs in the states, are connected.

So it is all one powerful system. Now a sensor alarm comes in that a cyclone or heavy rain is likely. This is automatically read by the system; it prepares the message and works out the geo-targeted area. Earlier the problem was that these kinds of alerts would go out broadly and nothing would happen to most recipients, so people would take the next one very casually. But today it is a geo-targeted system: it alerts only the people who are in that belt. Suppose a cyclone is hitting Gopalpur in Odisha; it will alert, well in advance, only the people who are likely to be affected, and it will tell you whether you need to evacuate.

If you need to evacuate, it tells you what arrangements the government has made, or whether you need to stay indoors. All of that happens, and it was actually presented in Parliament. Take the case of Odisha, where thousands of people died in 1999: the death toll now is zero. What happened after that is that, since India is a large country, sometimes a very large population has to be alerted, and SMS gets delayed; SMS is a sequential process, sent out by SMSCs. There was a newer technology called cell broadcast, where you don't send individual messages through SMS, you just broadcast. So we developed a cell broadcast technology, and it was recently used in Cyclone Montha. And how do we use AI?

Because I am getting inputs from various agencies, my system federates all this information using AI, builds one particular message, finds out the right area where it is likely to hit, and sends it only to those people. And the beauty of this system is that earlier there was a system of group SMS, which would find the people registered as staying there; now, even if you are a foreigner who happens to be in that area at that particular time, it will pick up your number and give you the message. So if a tsunami is coming, we don't know where people may be, here and there at the beach, but they still get alerted. In fact we have published a paper in ITU, and ITU has taken this as a report. Going forward, we feel this particular system will meet the UN's Early Warnings for All requirement by 2027, and we are already talking to many countries; soon this solution will be deployed in a few countries. Thank you.

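The federate-then-target flow Dr. Upadhyay describes can be caricatured in a few lines. The agencies, cell IDs and message format below are invented for illustration and stand in for CAP ingestion plus cell-broadcast geo-targeting:

```python
def build_geo_targeted_alert(alerts, subscribers):
    """Federate multi-agency alerts into one message and target only
    the phones currently camped on affected cells.
    alerts: list of {'agency', 'hazard', 'cells'} dicts
    subscribers: dict mapping phone number -> current serving cell ID"""
    affected_cells = set()
    hazards = []
    for alert in alerts:
        affected_cells |= set(alert["cells"])
        hazards.append(f"{alert['hazard']} ({alert['agency']})")
    message = "ALERT: " + "; ".join(sorted(hazards))
    # Anyone present in the affected cells is targeted, resident or visitor.
    targets = sorted(p for p, cell in subscribers.items() if cell in affected_cells)
    return message, targets
```

Because targeting keys on the serving cell rather than a subscriber's registered address, a visitor at the beach is alerted just like a resident, which is the property he highlights.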

Dr. M P Tangirala

In fact, in your presentation you also spoke about Fraud Pro and so on, but in the interest of time I'll move to Mr. Gorman on this question of fraud and scams. You did, in your opening remarks, talk about the importance of collaboration across sectors, and also the opportunities for engendering innovation through collaboration to control or combat scams. Could you elaborate a bit on that?

Mr. Julian Gorman

Sure, thanks for your question. I think this builds on the last couple of comments. What we're talking about here is sharing data between multiple parties through standardised interfaces and then using AI, or something like it, to produce a good outcome. And all of these things are innovations; they're on the leading edge of something. To start with the first one, data sharing through standardised interfaces: GSMA has the Open Gateway APIs programme, and that is contributing data points which can be used in assessing risk for transactions. There is other data that could be shared to help address scams earlier in the cycle; there are lots of other data points, and that's the proof of concept GSMA is working on in Southeast Asia, sharing data.

The challenge with doing that is that you're at the borders of regulatory compliance. You're talking about private or personal information, or maybe not; there's sometimes debate. But to be effective, you're talking about being able to measure the risk on a particular individual user by sharing information across multiple parties. That requires some regulatory support, sandboxes or other activities, to develop the innovation that finds the solutions that help combat scams. One of the things we need to focus on as an industry is how we create a nurturing environment that permits exploration of data sharing in a privacy-enhanced way. There are lots of nice new technologies that have the impact while complying with the regulations and maintaining the privacy we want.

But ultimately, from a mobile operator point of view, I would say there are four pillars in combating scams. The first is the network: making sure the network cannot be manipulated in favour of the scammers, through CLI spoofing and all that sort of stuff; let's cut that out, and if you introduce AI there are other things you can do on top. The second is what mobile operators can expose to the ecosystem so that the ecosystem can measure and respond to risk: Open Gateway APIs is one thing, the POC I talked about before is another, and there may be others. The third is what mobile operators can provide as services to their customers: in the same way that in the physical environment you can provide hard hats and the like, there are things you can offer customers that they can choose to acquire and use to help protect themselves online. And the fourth is digital skills. Historically we've treated digital skills as a destination; in actual fact we now know we're never going to hit a final point, because the skills needed will continue to adapt.

It's critical that we focus on all four pillars, and that from a regulatory and ecosystem point of view we're collaborating so that the data can flow, we can try and test things, and we overcome the prejudice that may be stopping innovation because of an expectation that you can't do these things. It requires policy makers and regulators to sponsor and nurture this. I can guarantee, I work with 90% of mobile operators in Asia Pacific, and if I start a sentence with "I want to suggest we use consumer data for", I won't get past halfway through before they say: nope, you can't do that. But in actual fact, if we want to be successful, no single entity, and especially no single mobile operator, has all the information.

I mean, if a mobile operator arbitrarily starts turning off SIM cards because they think some traffic looks a bit dubious, well, you've only got to look at the Optus outage in Australia, where three or four people died because they couldn't call emergency services. You don't want to be taking that action unilaterally. It requires collaboration, regulatory support and policy support.
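One privacy-enhancing shape for the data sharing he describes: each party exposes only a bounded risk score through a standardised API, never raw customer records, and an aggregator combines the scores. A minimal sketch; the scoring and combination rule here are placeholders, not GSMA's Open Gateway design:

```python
def combined_scam_risk(signal_providers, msisdn: str) -> float:
    """Combine per-party risk signals for a number.
    signal_providers: callables (operator, bank, platform) that each
    return a score in [0, 1] for the msisdn; they share nothing else.
    Combination here is a simple max; a deployed system would use a
    calibrated model and agreed thresholds."""
    scores = []
    for provider in signal_providers:
        raw = provider(msisdn)
        scores.append(min(max(raw, 0.0), 1.0))  # clamp defensive copies
    return max(scores) if scores else 0.0
```

Because only scalar scores cross organisational boundaries, each party keeps its raw data inside its own compliance perimeter, which is the point of the sandboxed proofs of concept he mentions.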

Dr. M P Tangirala

Yeah, thank you. Network, ecosystem, hard hats and upskilling: I think that's a good way to end the discussion here on the panel. But we have time for one question. Yes, Mr. Jha. We have less than two minutes.

Anil Kumar Jha

Thank you. Very quick, very brief. The question is for Mr. Julian Gorman. As we have said, we are under attack and may be attacked at any time. We have also said that we should align with global trends in order to combat these frauds. You have heard our panelists, who are icons in their fields of manufacturing, standardisation and as TSPs. Could you suggest two steps that global leaders should take to align the world, and two steps that India should take to align with the world? Thank you.

Mr. Julian Gorman

Two steps globally. The proof of concept we are running in Southeast Asia aims to prove that data can be shared, domestically but also across borders, in a safe and secure way, with an impact on controlling scams. One thing we need to remember with scams is that all we are doing by taking action against them is increasing the cost of the scammers' business case, and if we increase the cost here, another area becomes more favourable: that could be different types of scams or different locations. And that leads to what we need to do globally: we need to act across borders, as a collective global community.

GSMA has a programme called United Against Scams; there will be a lot about that in Barcelona. India is obviously taking great action domestically, and sharing that knowledge across borders, and being able to share that data across borders, is important. So I would leave it at those two points.

Dr. M P Tangirala

Thank you. It also gives us pause for thought: maybe as regulators we too need to look at collaborating across regulators, because there are again sectoral issues we need to address. And with that we are now at the end of the session. I would request the audience to give a big round of applause to my panelists, who have given us very good insight into the topic at hand. Thank you so much.

Moderator

Thank you, moderator sir, and all our distinguished panelists for such a vibrant discussion on the usage of responsible AI, the standards, the repository, and the various government apps for enhancing consumer experience. Your insights will greatly benefit the overall digital ecosystem. Now I would request Dr. M.P. Tangirala to present mementos to our distinguished speakers as a token of appreciation. First to Mr. Julian Gorman. To Dr. Rajkumar Upadhyay. To Mr. Mathan Babu. To Mr. S.T. Abbas. Now I invite Shri A.K. Jha, Principal Advisor, TRAI, to present a memento to the moderator of this session, Dr. M.P. Tangirala, as a token of appreciation for moderating such a productive session. Thank you so much, sir. Now I take this opportunity to invite all the speakers for a group photograph. I once again request Chairman sir, M.P. Tangirala sir, Secretary sir and all the Principal Advisors to please join the session speakers of this panel for a group photograph. Please give a huge round of applause to all the panelists for joining us. Thank you.

Related Resources: Knowledge base sources related to the discussion topics (37)
Factual Notes: Claims verified against the Diplo knowledge base (4)
Confirmed (high)

“The moderator introduced the Technology Security and Data Privacy Officer of Vodafone India and senior DDG S.T. Abbas as panelists.”

The knowledge base lists the Technology Security and Data Privacy Officer at Vodafone India and senior DDG S.T. Abbas as panel members, confirming their identification in the session [S100].

Confirmed (high)

“AI must be deployed so as to minimise false‑positives and fully respect privacy and regulatory constraints.”

A related statement in the knowledge base stresses that false-positives should be kept very low when deploying AI for fraud detection, supporting the claim [S105].

Additional Context (medium)

“The session highlighted the need for enhanced collaboration among regulators across different sectors.”

The knowledge base notes that Dr Tangirala concluded the session by emphasizing the need for greater collaboration among regulators, providing additional context to the report’s emphasis on cross-sector cooperation [S1].

Additional Context (medium)

“The scam economy moves faster than regulation, prompting GSMA to create the Cross‑Sector Any‑Scam Task Force involving many organisations.”

The knowledge base reports large-scale scam activity (e.g., billions of spam instances and millions of scammers flagged), underscoring the magnitude of the problem that the task force aims to address [S54].

External Sources (110)
S1
AI Automation in Telecom_ Ensuring Accountability and Public Trust India AI Impact Summit 2026 — – Dr. M P Tangirala- Mathan Babu Kasilingam – Mathan Babu Kasilingam- Dr. M P Tangirala – Mr. Julian Gorman- Dr. Rajku…
S2
AI Automation in Telecom_ Ensuring Accountability and Public Trust India AI Impact Summit 2026 — -Anil Kumar Jha: Principal Advisor, TRAI (Telecom Regulatory Authority of India)
S3
WS #93 My Language, My Internet – IDN Assists Next Billion Netusers — – Anil Kumar Jain: Chair of UASG at ICANN, Former CEO of National Internet Exchange of India Anil Kumar Jain: Currently…
S4
AI Automation in Telecom_ Ensuring Accountability and Public Trust India AI Impact Summit 2026 — -Mr. Julian Gorman: Representative from GSMA, expert in telecom industry collaboration and anti-scam initiatives across …
S5
Building Indias Digital and Industrial Future with AI — -Julian Gorman- Head of APAC GSMA
S6
AI Automation in Telecom_ Ensuring Accountability and Public Trust India AI Impact Summit 2026 — – Mathan Babu Kasilingam- Syed Tausif Abbas – Syed Tausif Abbas- Mathan Babu Kasilingam
S7
Final Report — – 12) Russian Federation ʹ H.E. Mr Rashid Ismailov, Deputy Minister of Telecom and Mass Communications. – 13) Viet Nam …
S8
WSIS Prizes 2025 Winner’s Ceremony — – **Rajkumar Upadhyay** – Dr., Representative from Centre for Development of Telematics, India India’s AI and Facial Re…
S9
IndoGerman AI Collaboration Driving Economic Development and Soc — -Dr. Rajkumar Upadhyay- CEO of Center for Development of Telematics (CDOT), expert in telecommunications, quantum commun…
S11
AI Automation in Telecom_ Ensuring Accountability and Public Trust India AI Impact Summit 2026 — – Dr. M P Tangirala- Mathan Babu Kasilingam – Mathan Babu Kasilingam- Dr. M P Tangirala – Mr. Julian Gorman- Dr. Rajku…
S12
Keynote-Olivier Blum — -Moderator: Role/Title: Conference Moderator; Area of Expertise: Not mentioned -Mr. Schneider: Role/Title: Not mentione…
S13
Keynote-Vinod Khosla — -Moderator: Role/Title: Moderator of the event; Area of Expertise: Not mentioned -Mr. Jeet Adani: Role/Title: Not menti…
S14
Day 0 Event #250 Building Trust and Combatting Fraud in the Internet Ecosystem — – **Frode Sørensen** – Role/Title: Online moderator, colleague of Johannes Vallesverd, Area of Expertise: Online session…
S15
Building the Next Wave of AI_ Responsible Frameworks & Standards — “human in the loop is a first class feature not a failure point … design the system … transition … to a human”[79]…
S16
Science AI & Innovation_ India–Japan Collaboration Showcase — I think other is I definitely feel. I feel that we cannot discard the human in the loop. I feel like AI has to make. the…
S17
Leaders TalkX: Towards a safer connected world: collaborative strategies to strengthen digital trust and cyber resilience — – Anil Kumar Lahoti- Dimitris Papastergiou Cross-sector coordination is vital for cyber resilience due to interconnecte…
S18
7th edition — Most spam originates from outside a given country. It is a global problem requiring a global solution. There are var…
S19
(Re)-Building Trust Online: A Call to Action | IGF 2023 Launch / Award Event #144 — In summary, the analysis delved into various aspects of the global information ecosystem and its challenges. It highligh…
S20
TradeTech’s Trillion-Dollar Promise — Furthermore, cooperation between nations, the private sector, and civil society is vital for ensuring the development of…
S21
https://dig.watch/event/india-ai-impact-summit-2026/scaling-trusted-ai_-how-france-and-india-are-building-industrial-innovation-bridges — I think I would say that the mindset change which we have to move towards is a mindset of an ecosystem. Because we can’t…
S22
Sandboxes for Data Governance: Global Responsible Innovation | IGF 2023 WS #279 — By fostering interaction, investigation, and the exchange of ideas, sandboxes serve as a stepping stone towards implemen…
S23
How can sandboxes spur responsible data-sharing across borders? (Datasphere Initiative) — However, if a sandbox had been in place, the measure could have undergone comprehensive testing and analysis, thereby av…
S24
Global telecommunication and AI standards development for all — India has been chosen to host the distinguished World Telecommunication Standardisation Assembly (WTSA 2024), set to tak…
S25
Artificial intelligence (AI) – UN Security Council — Algorithmic transparency is a critical topic discussed in various sessions, notably in the9821st meetingof the AI Securi…
S26
Advancing Scientific AI with Safety Ethics and Responsibility — So, those things are we are trying to do some assessments from the incident perspective. So, if you go to read the incid…
S27
Secure Finance Risk-Based AI Policy for the Banking Sector — But of course, we need to engage in it. We need to engage in these technologies and build on them. Otherwise, you know, …
S28
AI as critical infrastructure for continuity in public services — Resilience, data control, and secure compute are core prerequisites for trustworthy AI. Systems must stay operational an…
S29
Scaling AI for Billions_ Building Digital Public Infrastructure — “Because trust is starting to become measurable, right, through provenance, through authenticity, as well as verificatio…
S30
India’s banks encouraged to adopt AI for consumer protection — Indian banks shouldharness AIto improve internal controls and address customer complaints more effectively, according to…
S31
AI-Driven Enforcement_ Better Governance through Effective Compliance & Services — The misuses which have been reported so far are basically the use of generic synthetic identities and deepfake documents…
S32
Ethical principles for the use of AI in cybersecurity | IGF 2023 WS #33 — Amal El Fallah Seghrouchini:Hello, everybody. I am very happy to talk about AI in cybersecurity. And I think that there …
S33
How the Global South Is Accelerating AI Adoption_ Finance Sector Insights — “And spanning all of those, I think the most impactful use cases that we have seen, certainly in fraud and scams remedia…
S34
Overcoming policy silos: the next challenge in Internet governance — Stakeholders and policymakers approach the same issues, both at national and global levels, from various angles and poli…
S35
What is it about AI that we need to regulate? — The question of achieving interoperability of data systems and data governance arrangements across different stakeholder…
S36
OECD releases AI Incidents Monitor to address AI challenges with evidence-based policies — The OECD.AI Observatoryreleaseda beta version of the AI Incidents Monitor (AIM). Designed by the OECD.AI Observatory, th…
S37
Harmonizing High-Tech: The role of AI standards as an implementation tool — Philippe Metzger:Thank you, Bilel. Maybe to be as succinct as possible, just would like to mention four areas, which I t…
S38
Importance of Professional standards for AI development and testing — The discussion maintained a serious, professional tone throughout, reflecting the gravity of the subject matter. While c…
S39
Advancing Scientific AI with Safety Ethics and Responsibility — The panelists agreed that safety measures must be systemic rather than purely technical, requiring integration of existi…
S40
Building Scalable AI Through Global South Partnerships — The speakers demonstrated strong consensus on the need for government partnership, South-South collaboration, digital in…
S41
Main Session | Policy Network on Artificial Intelligence — Benifei argues for the importance of developing common standards and definitions for AI at a global level. He suggests t…
S42
AI Automation in Telecom_ Ensuring Accountability and Public Trust India AI Impact Summit 2026 — High level of consensus across all speakers, with particularly strong alignment between industry and regulatory perspect…
S43
Setting the Rules_ Global AI Standards for Growth and Governance — Very high level of consensus with no significant disagreements identified. This strong alignment across industry, govern…
S44
Trusted Connections_ Ethical AI in Telecom & 6G Networks — The discussion maintained a consistently optimistic and forward-looking tone throughout. Speakers expressed confidence i…
S45
Scaling Trusted AI_ How France and India Are Building Industrial & Innovation Bridges — Trust requires an ecosystem approach with partnerships across the value chain Unexpected consensus across telecom, rese…
S46
AI and Cybersecurity — Humans are involved in the development and operation of technologies
S47
Workshop 6: Perception of AI Tools in Business Operations: Building Trustworthy and Rights-Respecting Technologies — Angela Coriz: Thank you. I will try to be quick. So I work at Connect Europe. This is a trade association that represent…
S48
WSIS Plus 20 Review: UN General Assembly High-Level Meeting – Comprehensive Summary — Second, trust and safety must be embedded across the digital ecosystem through regulation, accountability and sustained …
S49
Multi-stakeholder Discussion on issues about Generative AI — Thus, collaboration, dialogue, and capacity-building around AI are encouraged. Collaboration is necessary due to the cro…
S50
Strategic Action Plan for Artificial Intelligence — A barrier to AI developments is that developers may not have access to certain data, because it is technically protected…
S51
Interim Report: — 39. There is, today, no shortage of guides, frameworks, and principles on AI governance. Documents have been drafted by …
S52
AI-Driven Enforcement_ Better Governance through Effective Compliance & Services — After having generated this path, it also sends out a series of routine legal requests that we require for most investig…
S53
GOVERNING AI FOR HUMANITY — – Service policy dialogues with multi-stakeholder inputs in support of interoperability and policy learning. An initial …
S54
Secure Talk Using AI to Protect Global Communications & Privacy — High level of consensus with significant implications for industry transformation. All speakers agree that traditional a…
S55
Day 0 Event #250 Building Trust and Combatting Fraud in the Internet Ecosystem — High level of consensus with significant implications for fraud prevention policy. The alignment across diverse stakehol…
S56
How the Global South Is Accelerating AI Adoption_ Finance Sector Insights — We joke that we shouldn’t worry about AI until we figure out AV. So I guess this is a perfect example of that. Thanks fo…
S57
How Trust and Safety Drive Innovation and Sustainable Growth — No, very much so. I mean, the data protection laws apply across the board wherever technology touches personal data. So …
S58
About the Authors — II: A direct corollary of the cost-effectiveness principle is that regulatory policy should be functionality-based, rath…
S59
Enhancing Digital Resilience: Cybersecurity, Data Protection, and Online Safety — The ethical use of data by private companies was discussed, with emphasis on long-term sustainability and integrity in b…
S60
Harmonizing High-Tech: The role of AI standards as an implementation tool — By uniting service quality-focused regulators with companies adept in the creation of service quality key performance in…
S61
Agentic AI in Focus Opportunities Risks and Governance — Enterprise guardrails & risk management Industry favours globally‑recognised, voluntary standards rather than prescript…
S62
Secure Finance Risk-Based AI Policy for the Banking Sector — Consumer centric safeguards obviously by way of transparent disclosure clear appeal processes and human intervention mec…
S63
AI as critical infrastructure for continuity in public services — Resilience, data control, and secure compute are core prerequisites for trustworthy AI. Systems must stay operational an…
S64
India’s banks encouraged to adopt AI for consumer protection — Indian banks shouldharness AIto improve internal controls and address customer complaints more effectively, according to…
S65
Employing AI for consumer grievance redressal mechanisms in e-commerce (CUTS) — During the discussion on consumer protection and technology, several key topics were explored. One of the main points ra…
S66
Ensuring Safe AI_ Monitoring Agents to Bridge the Global Assurance Gap — And how do we demonstrate that the risks have been managed well? And that is where the assurance ecosystem that Rebecca …
S67
AI Automation in Telecom_ Ensuring Accountability and Public Trust India AI Impact Summit 2026 — Julian Gorman from GSMA emphasized that combating scams requires cross-sector collaboration, noting that scammers operat…
S68
Leaders TalkX: Towards a safer connected world: collaborative strategies to strengthen digital trust and cyber resilience — Collaborative approach to tackle scams involves telecom operators, police, prosecutors, cybersecurity agencies and natio…
S69
How .POST powered services build Cyber Resilience within the global Postal and Logistics Sector — International collaboration is essential for combating cross-border postal scam campaigns and sharing threat intelligenc…
S70
AI-Driven Enforcement_ Better Governance through Effective Compliance & Services — The misuses which have been reported so far are basically the use of generic synthetic identities and deepfake documents…
S71
How the Global South Is Accelerating AI Adoption_ Finance Sector Insights — We joke that we shouldn’t worry about AI until we figure out AV. So I guess this is a perfect example of that. Thanks fo…
S72
India’s UIDAI rolls out AI-enabled biometric deduplication and document verification platform — UIDAI hasdeployedan advanced platform that uses AI-enabled models to improve biometric deduplication, the process of ens…
S73
What is it about AI that we need to regulate? — The question of achieving interoperability of data systems and data governance arrangements across different stakeholder…
S74
A Global AI in Financial Services Survey — – Data fuels AI and allow firms to scale their AI applications. Access to and quality of data remain key hurdles to AI …
S75
Multi-stakeholder Discussion on issues about Generative AI — Melinda Claybaugh:So Melinda, please. Thank you so much. So I want to share some of the AI products and developments tha…
S76
The role of standards in shaping a safe and sustainable AI-driven future — Seizo Onoe:Thank you very much. Good morning, everyone, and very warm welcome to you all. Our discussions at this summit…
S77
OECD releases AI Incidents Monitor to address AI challenges with evidence-based policies — The OECD.AI Observatoryreleaseda beta version of the AI Incidents Monitor (AIM). Designed by the OECD.AI Observatory, th…
S78
Importance of Professional standards for AI development and testing — The discussion maintained a serious, professional tone throughout, reflecting the gravity of the subject matter. While c…
S79
Ad Hoc Consultation: Friday 2nd February, Morning session — During the session, chaired by Mr. Chair, the speaker began by extending greetings to colleagues and esteemed delegates …
S80
Ad Hoc Consultation: Thursday 1st February, Morning session — In a formal and courteous address, the speaker began by respectfully acknowledging the presiding official, Madam Chair, …
S81
Summit Opening Session — The tone throughout is consistently formal, diplomatic, and collaborative. Speakers maintain an optimistic and forward-l…
S82
Opening Remarks (50th IFDT) — The overall tone was formal yet warm and celebratory. Speakers expressed pride in the IFDT’s accomplishments and gratitu…
S84
Open Forum #16 AI and Disinformation Countering the Threats to Democratic Dialogue — The discussion maintained a serious, analytical tone throughout, reflecting the gravity of the subject matter. While spe…
S85
WS #279 AI: Guardian for Critical Infrastructure in Developing World — The tone of the discussion was largely informative and collaborative. Speakers shared insights from their various backgr…
S86
What policy levers can bridge the AI divide? — The discussion maintained a collaborative and optimistic tone throughout, with participants sharing experiences construc…
S87
Safeguarding Children with Responsible AI — The discussion maintained a tone of “measured optimism” throughout. It began with urgency and concern (particularly in B…
S88
Multistakeholder Partnerships for Thriving AI Ecosystems — The tone was constructive and solution-oriented throughout, with speakers building on each other’s points rather than de…
S89
Global AI Policy Framework: International Cooperation and Historical Perspectives — The discussion maintained a constructive and optimistic tone throughout, despite acknowledging significant challenges. S…
S90
Session — The tone was primarily analytical and forward-looking, with the speaker presenting evidence-based predictions while ackn…
S91
Strengthening Corporate Accountability on Inclusive, Trustworthy, and Rights-based Approach to Ethical Digital Transformation — The discussion maintained a professional, collaborative tone throughout, with speakers demonstrating expertise while ack…
S92
Flexibility 2.0 / Davos 2025 — The panel discussion provided a comprehensive exploration of the gig economy’s impact on the future of work. While ackno…
S93
Setting the Rules_ Global AI Standards for Growth and Governance — The discussion maintained a consistently collaborative and constructive tone throughout. Panelists demonstrated remarkab…
S94
[Parliamentary Session Closing] Closing remarks — The tone of the discussion was formal yet collaborative and appreciative. There was a sense of accomplishment for the wo…
S95
World Economic Forum Annual Meeting Closing Remarks: Summary — The tone is consistently positive, celebratory, and grateful throughout the discussion. It begins with formal appreciati…
S96
Closing remarks — The tone is consistently celebratory, optimistic, and forward-looking throughout the discussion. It maintains an enthusi…
S97
Any other business /Adoption of the report/ Closure of the session — In closing, the speaker reiterated steadfast support for the Chairperson, the Secretariat, and the diligent team, emphas…
S98
Launch / Award Event #159 Book Launch Netmundial+10 Statement in the 6 UN Languages — The tone was consistently celebratory, appreciative, and forward-looking throughout the session. Participants expressed …
S99
Sovereign AI for India – Building Indigenous Capabilities for National and Global Impact — The moderator introduces himself at the start of the session, establishing his presence for the audience.
S100
https://dig.watch/event/india-ai-impact-summit-2026/ai-automation-in-telecom_-ensuring-accountability-and-public-trust-india-ai-impact-summit-2026 — Technology Security and Data Privacy Officer at Vodafone India Limited, with over 20 years of experience in cyber securi…
S101
Internet standards and human rights | IGF 2023 WS #460 — Moderator – Sheetal Kumar:Hello, everyone. Good morning. Welcome to this session on Internet Standards and Human Rights….
S102
Opening of the session — relevance of technological innovation and the establishment of new norms to guarantee freedoms and protections online
S103
Main Session 3: Internet Governance and elections: maximising potential for trust and addressing risks — 1. Balancing Innovation and Integrity: Audience: Thank you very much. My name is Maha Abdel Nasser. I’m from the Egy…
S104
High-Level Session 2: Transforming Health: Integrating Innovation and Digital Solutions for Global Well-being — A significant portion of the discussion focused on the challenge of balancing enhanced security with user privacy protec…
S105
Responsible AI in India Leadership Ethics & Global Impact part1_2 — “I think we have to start slowly, ensure the accuracy can be a little lower, but the false positive, which is a genuine …
S106
The fading of human agency in automated systems — Crucially, a human presence does not guarantee agency if the system is designed around compliance rather than contestati…
S107
Open Forum #79 Regulation of Autonomous Weapon Systems: Navigating the Legal and Ethical Imperative — **Human Control and Oversight**: Despite different approaches, speakers across perspectives emphasized the importance of…
S108
Deepfakes and the AI scam wave eroding trust — Calls for regulation are understandable, but policy has inherent limitations in this space. Deepfakes evolve faster than…
S109
Tackling disinformation in electoral context — Giovani Zagni: Now it’s on, now it works. Okay, thank you for this question, good afternoon and I will answer by makin…
S110
WS #198 Advancing IoT Security, Quantum Encryption & RPKI — Nicolas Fiumarelli: Sofia, thank you so much for your contributions. RPKI can sound very strange for non-technical per…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Dr. M P Tangirala
1 argument · 115 words per minute · 929 words · 482 seconds
Argument 1
Emphasize human‑in‑the‑loop and proactive communication
EXPLANATION
Dr. Tangirala stresses that AI decisions affecting customers must be overseen by humans to prevent autonomous errors, and that telecom providers should communicate clearly and proactively with customers about AI‑driven outcomes.
EVIDENCE
He highlighted the need for a human control element to ensure AI systems do not act independently [19-21] and emphasized that clear, proactive communication with customers is essential for trust [15-16].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Human-in-the-loop is described as a primary feature for safe AI deployment and emphasized as essential, aligning with Tangirala’s point [S15]; additional commentary stresses that the human-in-the-loop cannot be discarded and should aid workers, supporting the argument [S16].
MAJOR DISCUSSION POINT
Human oversight and transparent communication
AGREED WITH
Syed Tausif Abbas, Mathan Babu Kasilingam, Julian Gorman
DISAGREED WITH
Mathan Babu Kasilingam
Anil Kumar Jha
2 arguments · 180 words per minute · 94 words · 31 seconds
Argument 1
Call for concrete global and Indian actions to align anti‑scam efforts
EXPLANATION
Jha asks the panel to suggest two steps global leaders should take and two steps India should take to harmonise anti‑scam measures worldwide and domestically.
EVIDENCE
He directly requests specific actions for global and Indian alignment in his brief question to the panel [319-320].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Jha’s request for specific steps is reflected in the summit transcript noting his call for global and Indian alignment on scam mitigation [S1]; the global nature of spam and need for cross-border cooperation are highlighted in a discussion of worldwide spam origins and collaborative solutions [S18].
MAJOR DISCUSSION POINT
Actionable steps for anti‑scam alignment
AGREED WITH
Julian Gorman, Syed Tausif Abbas, Dr. Rajkumar Upadhyay, Moderator
Argument 2
Recommend two global steps and two India‑specific steps to harmonise anti‑scam efforts
EXPLANATION
Building on his earlier request, Jha seeks concrete recommendations on how the international community and India can coordinate to combat scams more effectively.
EVIDENCE
His question explicitly asks for two global and two India-specific steps, framing the need for coordinated policy responses [319-320].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The same sources that capture Jha’s request provide concrete suggestions for cross-border data sharing and collective action as recommended actions [S1]; the global problem of spam and the need for coordinated policy are discussed in the global anti-scam strategy overview [S18].
MAJOR DISCUSSION POINT
Policy recommendations for scam mitigation
Mr. Julian Gorman
5 arguments · 158 words per minute · 1349 words · 510 seconds
Argument 1
Foster collaborative ecosystem and shared standards to sustain trust
EXPLANATION
Gorman argues that trust in telecom can be maintained by encouraging innovation through collaboration across the industry, regulators, and technology platforms, underpinned by shared standards.
EVIDENCE
He describes the need to stimulate innovation, the formation of the Cross-Sector Anti-Scam Task Force, and the importance of ecosystem-wide collaboration for trust [31-44] and stresses India’s role in a global ecosystem [46-53].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Cross-sector coordination is identified as vital for cyber resilience and trust, supporting Gorman’s ecosystem view [S17]; an ecosystem mindset and partnership examples underline the need for shared standards [S21]; global cooperation for standards is also highlighted [S20].
MAJOR DISCUSSION POINT
Collaborative innovation and standards
AGREED WITH
Syed Tausif Abbas, Mathan Babu Kasilingam, Dr. M P Tangirala, Moderator
Argument 2
Create cross‑sector task forces and share data with platforms like Meta, Google, TikTok to combat scams
EXPLANATION
Gorman outlines the establishment of a multi‑organisation task force that includes major social media and cloud platforms to coordinate anti‑scam initiatives across sectors.
EVIDENCE
He details the Cross-Sector Anti-Scam Task Force involving Meta, Google, TikTok, AWS and other organisations, aimed at identifying and prioritising industry-wide anti-scam actions [32-35].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The formation of a cross-sector coalition involving major platforms is described in the summit notes, matching Gorman’s proposal [S1]; the necessity of global data sharing to tackle scams is emphasized in the global anti-scam discussion [S18].
MAJOR DISCUSSION POINT
Cross‑sector cooperation against scams
AGREED WITH
Dr. Rajkumar Upadhyay, Mathan Babu Kasilingam
Argument 3
Advocate privacy‑enhanced data sharing via open‑gateway APIs and regulatory sandboxes
EXPLANATION
Gorman promotes the use of standardized APIs and regulatory sandboxes to enable privacy‑preserving data sharing that can improve risk assessment and scam detection.
EVIDENCE
He references GSMA’s open gateway APIs program, the need for privacy-enhanced data sharing, and the role of regulatory sandboxes in fostering innovation while protecting personal data [285-295].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Regulatory sandboxes are presented as mechanisms to enable responsible data sharing and protect privacy, aligning with Gorman’s suggestion [S22]; further discussion on sandboxes spurring cross-border data sharing reinforces this point [S23]; the use of open-gateway APIs for ecosystem risk assessment is noted in the summit summary [S1].
MAJOR DISCUSSION POINT
Privacy‑preserving data sharing mechanisms
AGREED WITH
Syed Tausif Abbas, Mathan Babu Kasilingam, Dr. M P Tangirala
DISAGREED WITH
Mathan Babu Kasilingam
Argument 4
Promote cross‑border data sharing and collective action against scams
EXPLANATION
Gorman emphasizes that combating scams requires coordinated action beyond national borders, urging a global community approach.
EVIDENCE
He notes the importance of cross-border collaboration, describing India’s emerging global telecom leadership and the necessity of sharing knowledge worldwide [46-53].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The global nature of spam and the call for cross-border collaboration are highlighted in the anti-scam global solution overview [S18]; broader cooperation between nations and the private sector for standards supports this view [S20]; India’s emerging global telecom role further underscores the need for worldwide knowledge sharing [S24].
MAJOR DISCUSSION POINT
Cross‑border collaboration on scam mitigation
AGREED WITH
Anil Kumar Jha, Syed Tausif Abbas, Dr. Rajkumar Upadhyay, Moderator
Argument 5
Emphasise India’s emerging role as a global telecom leader and the need to share knowledge worldwide
EXPLANATION
Gorman points out that India’s rising stature in telecom obliges it to act as a global leader, sharing its innovations and experiences with the international community.
EVIDENCE
He highlights India’s status as a telecom superpower and its responsibility to play a “statesman” role in the global ecosystem [46-53].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
India’s selection to host the World Telecommunication Standardisation Assembly and its positioning as a telecom superpower are documented, confirming Gorman’s statement [S24]; the summit notes also reference India’s leadership in global telecom initiatives [S1].
MAJOR DISCUSSION POINT
India’s global telecom leadership
Syed Tausif Abbas
3 arguments · 148 words per minute · 552 words · 223 seconds
Argument 1
Introduce voluntary AI incident reporting schema to increase transparency
EXPLANATION
Abbas proposes a voluntary, standardized database for reporting AI incidents, detailing fields, taxonomy, and severity levels to improve transparency and learning.
EVIDENCE
He outlines a 30-field schema, taxonomy, severity classification, and notes that the reporting is voluntary rather than mandatory [193-197].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
A 30-field AI incident reporting framework with taxonomy and severity levels is detailed in the AI incident reporting standards document, directly supporting Abbas’s proposal [S26]; broader calls for algorithmic transparency reinforce the need for such voluntary reporting [S25].
MAJOR DISCUSSION POINT
Voluntary AI incident reporting framework
AGREED WITH
Mathan Babu Kasilingam, Julian Gorman, Dr. M P Tangirala
Argument 2
Define a 30‑field schema, taxonomy and severity levels for AI incidents
EXPLANATION
He specifies the structure of the incident reporting database, including categories such as network component, incident type, severity, and cause of failure.
EVIDENCE
The description includes fields for incident type, affected system, severity, and cause of failure, forming a comprehensive taxonomy [193-196].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The incident reporting schema, including 30 key fields and a detailed taxonomy, is outlined in the standards reference [S26]; the same source notes the classification of incidents across multiple dimensions, matching Abbas’s description.
MAJOR DISCUSSION POINT
Standardized incident taxonomy
Argument 3
Highlight benefits for operators and regulators from a common incident database
EXPLANATION
Abbas argues that a shared incident repository enables operators to analyse failures, improve AI models, and provides regulators with data to shape AI policy.
EVIDENCE
He states that recorded incidents can be analysed by service providers to prevent recurrence and that the aggregated data assists regulators and policymakers [196-197].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The reporting framework is said to enable operators to analyse failures and regulators to shape policy, providing the benefits Abbas cites [S26]; algorithmic transparency discussions also note the value of shared incident data for oversight [S25].
MAJOR DISCUSSION POINT
Operator and regulator benefits
Dr. Rajkumar Upadhyay
5 arguments · 171 words per minute · 1925 words · 671 seconds
Argument 1
Deploy AI‑based fraud detection and deduplication tools (Fraud Pro, Sanchar Saathi) to disconnect fraudulent SIMs
EXPLANATION
Upadhyay describes AI solutions that identify duplicate or fraudulent SIM registrations and automatically disconnect them, protecting customers from spam and fraud.
EVIDENCE
He explains Fraud Pro’s ability to group images and demographics to detect duplicate SIMs, and cites that 70 lakh (7 million) connections have been disconnected using the Sanchar Saathi app [65-71] and [101-103].
MAJOR DISCUSSION POINT
AI‑driven fraud detection and SIM deduplication
AGREED WITH
Julian Gorman, Mathan Babu Kasilingam
Argument 2
Implement millisecond‑level AI call‑blocking to curb spoof and scam calls
EXPLANATION
He details an AI system that decides within milliseconds whether to allow or block a call, handling up to 15 million calls per day with near‑zero error.
EVIDENCE
He notes that the system makes decisions in milliseconds at the gateway, processing 15 million calls daily while maintaining zero false blocks [108-112].
MAJOR DISCUSSION POINT
Real‑time AI call‑blocking
Argument 3
Build a unified AI‑driven early‑warning system that aggregates sensor data from IMD, CWC, etc.
EXPLANATION
Upadhyay presents a platform that integrates data from multiple weather and disaster agencies via APIs, using AI to generate timely alerts.
EVIDENCE
He describes how the system connects IMD, CWC, FRI, DGSE and telecom operators through APIs, automatically reading sensor alarms and preparing geo-targeted messages [250-255].
MAJOR DISCUSSION POINT
Integrated AI early‑warning platform
Argument 4
Use AI to generate geo‑targeted cell‑broadcast alerts, reducing casualties to zero in pilot cases
EXPLANATION
The AI‑powered system creates location‑specific broadcast messages, ensuring only affected populations receive warnings, which has led to zero deaths in recent cyclone pilots.
EVIDENCE
He cites the Cyclone Montha case, where geo-targeted cell broadcast alerts resulted in zero fatalities, contrasting with the deaths seen in the 1999 cyclone [259-266].
MAJOR DISCUSSION POINT
Geo‑targeted AI alerts for disaster response
Argument 5
Position the solution for international adoption and UN early‑warning goals by 2027
EXPLANATION
Upadhyay notes that the system has been documented in an ITU paper and is being promoted to other countries, aiming to meet UN early‑warning objectives by 2027.
EVIDENCE
He mentions the ITU report, ongoing discussions with multiple countries, and the target of supporting UN early-warning goals by 2027 [274-276].
MAJOR DISCUSSION POINT
Global scaling of AI disaster‑warning solution
AGREED WITH
Julian Gorman, Anil Kumar Jha, Syed Tausif Abbas, Moderator
Mathan Babu Kasilingam
4 arguments · 159 words per minute · 1696 words · 637 seconds
Argument 1
Adopt privacy‑by‑design and ISO 27701 certification to assure customers
EXPLANATION
Kasilingam states that their telecom service provider has been certified under ISO 27701 and follows privacy‑by‑design principles to reinforce customer trust in AI deployments.
EVIDENCE
He notes that the company is certified on PIMS ISO 27701 and is the only TSP in the country governing privacy by design, ensuring trust back to customers [117-119].
MAJOR DISCUSSION POINT
Privacy‑by‑design and ISO certification
AGREED WITH
Julian Gorman, Syed Tausif Abbas, Moderator
Argument 2
Identify siloed data repositories as a barrier; propose unified data lake and common AI platform
EXPLANATION
Kasilingam points out that creating isolated data silos hampers AI effectiveness and suggests consolidating data into a single repository with a shared AI infrastructure.
EVIDENCE
He describes the problem of multiple isolated data silos and proposes a single data lake and common AI platform to simplify security and access [152-166].
MAJOR DISCUSSION POINT
Data consolidation for AI
AGREED WITH
Syed Tausif Abbas, Julian Gorman, Dr. M P Tangirala
DISAGREED WITH
Julian Gorman
Argument 3
Note that 80-90% of AI spend is on infrastructure; address skill shortages and automation of AI operations
EXPLANATION
He highlights that the majority of AI costs are tied to compute and storage infrastructure, and that there is a shortage of skilled AI engineers, prompting a move toward AI‑assisted operations.
EVIDENCE
He quantifies that 80-90% of AI expenditure goes to infrastructure and mentions the need for skilled engineers, noting ongoing efforts to let AI aid AI development [225-229].
MAJOR DISCUSSION POINT
Cost structure and skill gaps in AI
DISAGREED WITH
Dr. M P Tangirala
Argument 4
Suggest centralised LLMs and API‑driven architecture to reduce duplication and improve security
EXPLANATION
Kasilingam proposes building a centralised large‑language‑model platform exposed via enterprise APIs, allowing various business functions to access AI without maintaining separate data silos.
EVIDENCE
He outlines the creation of purpose-built LLMs for functions like HR, the use of an enterprise API architecture, and the benefits of a single secure data repository [170-179].
MAJOR DISCUSSION POINT
Centralised LLM and API architecture
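The architecture described above — one API front door, purpose-built models per function, a single shared data store — can be sketched as a minimal routing layer. The class and method names are hypothetical, chosen only to illustrate the shape of the design:

```python
# Minimal sketch of a centralised AI platform: every business function calls
# one enterprise API, which routes to a purpose-built model and logs to a
# single shared repository instead of per-function silos.
class CentralAIPlatform:
    def __init__(self):
        self.models = {}     # function name -> handler (stand-in for an LLM)
        self.data_lake = []  # one audit/data store shared by all functions

    def register(self, function: str, handler):
        self.models[function] = handler

    def query(self, function: str, prompt: str) -> str:
        self.data_lake.append({"function": function, "prompt": prompt})
        if function not in self.models:
            raise ValueError(f"no model registered for {function}")
        return self.models[function](prompt)

platform = CentralAIPlatform()
platform.register("hr", lambda p: f"[HR model] answering: {p}")

print(platform.query("hr", "leave policy?"))
```

The security benefit Kasilingam describes falls out of the single choke point: access control, logging, and data governance are applied once at the API layer rather than per silo.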
Moderator
3 arguments · 48 words per minute · 290 words · 355 seconds
Argument 1
Set the discussion context around balancing innovation, privacy and trust
EXPLANATION
The moderator frames the panel by emphasizing the need to balance technological innovation with privacy safeguards and customer trust.
EVIDENCE
Opening remarks ask participants to engage on balancing information, innovation, privacy, and trust [5-7].
MAJOR DISCUSSION POINT
Framing of trust‑innovation‑privacy balance
AGREED WITH
Julian Gorman, Mathan Babu Kasilingam, Syed Tausif Abbas
Argument 2
Reinforce the need for standards to guide responsible AI deployment
EXPLANATION
The moderator underscores the importance of establishing standards that can steer responsible AI use within the telecom sector.
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
System-level interventions and the development of global standards for trustworthy information ecosystems are discussed, underscoring the call for AI standards [S19]; cooperation between nations and industry to establish standards further supports this point [S20].
MAJOR DISCUSSION POINT
Call for AI standards
Argument 3
Conclude with a call for regulators to cooperate across sectors for responsible AI
EXPLANATION
In closing, the moderator urges regulatory bodies to work together across different sectors to ensure AI is deployed responsibly.
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Cross-sector coordination is highlighted as essential for cyber resilience and responsible AI, aligning with the moderator’s closing appeal [S17]; ecosystem partnership examples illustrate the need for regulator collaboration [S21].
MAJOR DISCUSSION POINT
Regulatory cross‑sector collaboration
AGREED WITH
Julian Gorman, Anil Kumar Jha, Syed Tausif Abbas, Dr. Rajkumar Upadhyay
Agreements
Agreement Points
Broad consensus on the need for collaborative ecosystems and shared standards to build and sustain trust in AI‑driven telecom services
Speakers: Julian Gorman, Syed Tausif Abbas, Mathan Babu Kasilingam, Dr. M P Tangirala, Moderator
Foster collaborative ecosystem and shared standards to sustain trust
Introduce voluntary AI incident reporting schema to increase transparency
Identify siloed data repositories as a barrier; propose unified data lake and common AI platform
Emphasize human-in-the-loop and proactive communication
Set the discussion context around balancing innovation, privacy and trust
All speakers highlighted that trust in AI-enabled telecom operations can only be achieved through industry-wide collaboration, common frameworks or databases, and coordinated standards that guide responsible AI use and transparent communication with customers [31-44][193-197][152-166][54-56][5-7].
POLICY CONTEXT (KNOWLEDGE BASE)
This consensus mirrors the broader industry-wide call for collaborative ecosystem approaches and shared AI standards noted in multiple forums, such as the unexpected consensus on ecosystem collaboration in telecom-research-governance dialogues [S45] and the high-level agreement on collaborative solutions across stakeholders [S43]. It also reflects calls for common AI standards at the global level [S41] and support for AI standards exchanges coordinated by ITU, ISO/IEC and IEEE [S53].
Strong agreement on embedding privacy safeguards and privacy‑by‑design in AI deployments
Speakers: Julian Gorman, Mathan Babu Kasilingam, Syed Tausif Abbas, Moderator
Advocate privacy-enhanced data sharing via open-gateway APIs and regulatory sandboxes
Adopt privacy-by-design and ISO 27701 certification to assure customers
Masking submitter information in the incident reporting schema
Set the discussion context around balancing innovation, privacy and trust
Speakers concurred that AI systems must protect personal data, adopt privacy-by-design principles, and use privacy-preserving data-sharing mechanisms such as sandboxes and masking to maintain user trust [285-295][117-119][193-197][5-7].
POLICY CONTEXT (KNOWLEDGE BASE)
The emphasis on privacy-by-design aligns with the principle highlighted in recent policy discussions on ethical data use, where privacy-by-design was identified as a crucial approach for new technologies [S59], and it resonates with regulatory perspectives that see data protection laws as foundational for trust in AI [S57]. Moreover, the functionality-based regulatory framing that prioritises privacy outcomes supports this stance [S58].
Consensus that AI is a critical tool for detecting and preventing fraud and scam activities
Speakers: Julian Gorman, Dr. Rajkumar Upadhyay, Mathan Babu Kasilingam
Create cross-sector task forces and share data with platforms like Meta, Google, TikTok to combat scams
Deploy AI-based fraud detection and deduplication tools (Fraud Pro, Sanchar Sati) to disconnect fraudulent SIMs
Identify fraud as a serious problem and leverage AI to address it
All three speakers stressed that AI-driven analytics, data sharing and automated tools are essential to identify, block and reduce fraudulent connections and scam calls, thereby protecting customers [26-35][65-71][101-103][134-140].
POLICY CONTEXT (KNOWLEDGE BASE)
This view is consistent with the high-level consensus on fraud prevention across stakeholders at the Day 0 Event on building trust and combating fraud [S55] and with practical examples of AI-driven enforcement using telecom data to combat organized crime [S52].
Shared view on the importance of standardized data sharing and unified platforms for AI operations
Speakers: Syed Tausif Abbas, Mathan Babu Kasilingam, Julian Gorman, Dr. M P Tangirala
Introduce voluntary AI incident reporting schema to increase transparency
Identify siloed data repositories as a barrier; propose unified data lake and common AI platform
Advocate privacy-enhanced data sharing via open-gateway APIs and regulatory sandboxes
Emphasize human-in-the-loop and proactive communication
Speakers agreed that establishing common data structures, APIs and a single data lake reduces silos, improves security and enables effective AI governance and incident reporting [193-197][152-166][285-295][54-56].
POLICY CONTEXT (KNOWLEDGE BASE)
The call for standardized data sharing echoes earlier calls for standardized protocols in cross-border AI collaborations [S39] and the push for common AI standards and definitions at the global level [S41]. It is further reinforced by initiatives to establish AI standards exchanges through bodies like ITU, ISO/IEC and IEEE [S53], while recognizing data access barriers such as IP and security classifications that affect sharing [S50].
Agreement on the necessity of cross‑border and global collaboration to address AI‑related challenges, especially scams
Speakers: Julian Gorman, Anil Kumar Jha, Syed Tausif Abbas, Dr. Rajkumar Upadhyay, Moderator
Promote cross-border data sharing and collective action against scams
Call for concrete global and Indian actions to align anti-scam efforts
Highlight benefits for regulators from a common incident database
Position the solution for international adoption and UN early-warning goals by 2027
Conclude with a call for regulators to cooperate across sectors for responsible AI
All participants underscored that effective AI governance, especially for scam mitigation, requires coordinated international policies, data sharing across borders and shared standards to enable global solutions [46-53][319-320][196-197][274-276][46-53].
POLICY CONTEXT (KNOWLEDGE BASE)
Cross-border collaboration has been repeatedly emphasized, including in discussions on standardized data sharing protocols and global cooperation [S39], the advocacy for common AI standards worldwide [S41], and UN-level summaries noting that digital challenges like scams require transnational responses [S48]. Multi-stakeholder dialogues also stress the need for international cooperation on AI governance [S49], and the broader consensus on collaborative ecosystem approaches supports this view [S45].
Similar Viewpoints
Both speakers stress that privacy must be built into data‑sharing mechanisms, using technical safeguards and formal certifications to protect user data while enabling AI innovation [285-295][117-119].
Speakers: Julian Gorman, Mathan Babu Kasilingam
Advocate privacy-enhanced data sharing via open-gateway APIs and regulatory sandboxes
Adopt privacy-by-design and ISO 27701 certification to assure customers
Both advocate the creation of structured, collaborative mechanisms—whether task forces or reporting schemas—to enable systematic sharing of AI‑related incident data for better scam mitigation [26-35][193-197].
Speakers: Julian Gorman, Syed Tausif Abbas
Create cross-sector task forces and share data with platforms like Meta, Google, TikTok to combat scams
Introduce voluntary AI incident reporting schema to increase transparency
Both recognize fraud as a major national issue and propose AI‑driven solutions as essential to detect and prevent fraudulent activities [65-71][101-103][134-140].
Speakers: Dr. Rajkumar Upadhyay, Mathan Babu Kasilingam
Deploy AI-based fraud detection and deduplication tools (Fraud Pro, Sanchar Sati) to disconnect fraudulent SIMs
Identify fraud as a serious problem and leverage AI to address it
Both highlight that trust in AI systems requires not only technical safeguards but also collaborative frameworks and clear communication with stakeholders [54-56][31-44].
Speakers: Dr. M P Tangirala, Julian Gorman
Emphasize human-in-the-loop and proactive communication
Foster collaborative ecosystem and shared standards to sustain trust
Unexpected Consensus
Voluntary rather than mandatory AI incident reporting is accepted as beneficial by both a standards body representative and an industry service‑provider
Speakers: Syed Tausif Abbas, Mathan Babu Kasilingam
Introduce voluntary AI incident reporting schema to increase transparency
Identify siloed data repositories as a barrier; propose unified data lake and common AI platform
While standards discussions often push for mandatory compliance, both speakers endorse a voluntary reporting approach, seeing it as a practical step to improve transparency and data quality without imposing regulatory burdens [193-197][152-166].
POLICY CONTEXT (KNOWLEDGE BASE)
Industry preference for voluntary, globally-recognised standards over prescriptive mandates is reflected in observations that the sector favours voluntary standards for AI governance [S61], aligning with the broader consensus on collaborative, non-mandatory approaches noted in multi-stakeholder settings [S45].
Overall Assessment

The panel displayed strong convergence on four main themes: collaborative ecosystems and shared standards; privacy‑by‑design and data protection; AI as a core tool for fraud/scam mitigation; and the need for standardized, cross‑border data sharing platforms. These points were repeatedly echoed across industry, standards and policy perspectives.

High consensus – the repeated alignment across diverse speakers indicates a unified direction for responsible AI deployment in telecom, suggesting that future policy and industry initiatives are likely to prioritize collaborative standards, privacy safeguards, and AI‑driven fraud prevention.

Differences
Different Viewpoints
Degree of human involvement in AI‑driven telecom operations
Speakers: Dr. M P Tangirala, Mathan Babu Kasilingam
Emphasize human-in-the-loop and proactive communication
Note that 80-90% of AI spend is on infrastructure; address skill shortages and automation of AI operations
Dr. Tangirala stresses that AI decisions affecting customers must be overseen by humans to prevent autonomous errors and calls for clear, proactive communication with customers [15-21]. In contrast, Mathan Babu argues that the future of AI lies in reducing human involvement, using AI to automate operations and cut staff while up-skilling a smaller workforce, indicating a push for greater automation and less human oversight [225-229][236-237].
POLICY CONTEXT (KNOWLEDGE BASE)
The role of human oversight in AI systems is highlighted in discussions that humans remain integral to the development and operation of AI-enabled technologies [S46], and in business-focused sessions where the balance between automation and human control was examined [S47].
Preferred model for data governance and sharing in AI‑enabled telecom services
Speakers: Julian Gorman, Mathan Babu Kasilingam
Advocate privacy-enhanced data sharing via open-gateway APIs and regulatory sandboxes
Identify siloed data repositories as a barrier; propose unified data lake and common AI platform
Gorman promotes privacy-preserving cross-industry data sharing through standardized open-gateway APIs and regulatory sandboxes to improve risk assessment and scam detection, emphasizing ecosystem-wide collaboration and cross-border data flows [285-295][46-53]. Mathan Babu counters by highlighting the problem of isolated data silos within a single operator and proposes consolidating data into a single repository with a shared AI infrastructure and centralised LLMs, focusing on internal consolidation rather than external data exchange [152-166][170-179].
POLICY CONTEXT (KNOWLEDGE BASE)
Debates over data governance models reference concerns about data access restrictions due to IP, trade secrets, or security classifications that can impede AI development [S50], alongside calls for standardized data sharing protocols and AI standards exchanges to facilitate governance [S39, S53].
Unexpected Differences
Voluntary vs potentially mandatory AI incident reporting standards
Speakers: Syed Tausif Abbas, Mathan Babu Kasilingam
Introduce voluntary AI incident reporting schema to increase transparency
Identify siloed data repositories as a barrier; propose unified data lake and common AI platform
Abbas explicitly states that the AI incident reporting database is voluntary, not mandatory [193-197]. Kasilingam, while acknowledging the usefulness of incident records, ties them to existing ITIL frameworks and favours folding incident data into internal processes rather than maintaining a distinct voluntary external database. Given their shared focus on data quality and AI governance, this divergence between a stand-alone voluntary reporting mechanism and an internal unified data platform was not anticipated.
POLICY CONTEXT (KNOWLEDGE BASE)
The tension between voluntary and mandatory reporting mirrors industry arguments favouring voluntary, globally-recognised standards rather than compulsory regulations [S61], a view echoed in broader multi-stakeholder discussions that highlight the benefits of voluntary frameworks [S45].
Regulatory emphasis on privacy‑by‑design versus innovation‑first stance
Speakers: Mathan Babu Kasilingam, Julian Gorman
Adopt privacy-by-design and ISO 27701 certification to assure customers
Foster collaborative ecosystem and shared standards to sustain trust
Kasilingam highlights strict privacy-by-design compliance and ISO certification as core to maintaining trust [117-119], whereas Gorman warns that overly prescriptive regulation can stifle innovation and calls for outcome-focused, flexible regulatory approaches, including sandboxes [38-40][292-295]. The tension between a strong, certification-driven privacy regime and a more permissive, innovation-centric regulatory posture was not overtly signalled earlier in the discussion.
POLICY CONTEXT (KNOWLEDGE BASE)
The debate reflects contrasting policy strands: the privacy-by-design approach championed in ethical data use discussions [S59] and reinforced by data protection law frameworks that support trust [S57], versus functionality-based, innovation-oriented regulatory perspectives that aim to balance privacy with technological advancement [S58].
Overall Assessment

The panel broadly concurs that AI is vital for enhancing trust, fraud mitigation, and service quality in telecom. However, clear fault lines emerge around (i) the extent of human oversight versus full automation, and (ii) the preferred architecture for data governance—external, privacy‑preserving data sharing versus internal data lake consolidation. Additional nuanced tensions appear regarding voluntary incident reporting and the balance between privacy certification and innovation‑friendly regulation.

Moderate disagreement: while all participants share the overarching goal of trustworthy AI‑enabled telecom services, the divergent views on human control, data sharing models, and regulatory balance could lead to fragmented implementation strategies, potentially slowing coordinated progress on industry‑wide standards and trust‑building measures.

Partial Agreements
All speakers agree that AI is essential for building customer trust and combating fraud in telecom, but differ on implementation: Tangirala calls for human oversight, Gorman stresses cross‑sector collaboration and standards, Upadhyay showcases specific AI‑driven fraud tools, while Kasilingam focuses on privacy‑by‑design certifications and internal data governance. The shared goal is trustworthy AI‑enabled services, yet the pathways (human control, ecosystem collaboration, product deployment, privacy certification) diverge [15-21][31-44][65-71][117-119].
Speakers: Dr. M P Tangirala, Julian Gorman, Dr. Rajkumar Upadhyay, Mathan Babu Kasilingam
Emphasize human-in-the-loop and proactive communication
Foster collaborative ecosystem and shared standards to sustain trust
Deploy AI-based fraud detection and deduplication tools (Fraud Pro, Sanchar Sati) to disconnect fraudulent SIMs
Adopt privacy-by-design and ISO 27701 certification to assure customers
Both speakers support the creation of mechanisms that increase transparency and data availability for AI systems. Gorman focuses on technical data sharing through APIs and sandboxes, while Abbas proposes a voluntary incident‑reporting database. They share the objective of better insight into AI outcomes, but differ on the scope (real‑time operational data vs post‑incident reporting) and mandatory nature of the framework [285-295][193-197].
Speakers: Julian Gorman, Syed Tausif Abbas
Advocate privacy-enhanced data sharing via open-gateway APIs and regulatory sandboxes
Introduce voluntary AI incident reporting schema to increase transparency
Takeaways
Key takeaways
Trust in AI-driven telecom services requires human-in-the-loop controls and proactive communication with customers.
Adopting privacy-by-design principles and certifications such as ISO 27701 (PIMS) is essential to assure customers.
A voluntary AI incident-reporting schema with a 30-field taxonomy can increase transparency and help regulators shape AI policy.
Collaboration across the telecom ecosystem, including cross-sector task forces and data sharing with platforms like Meta, Google, and TikTok, is critical to combat scams.
AI-based fraud detection tools (Fraud Pro, Sanchar Sati, millisecond-level call-blocking) have demonstrably reduced fraudulent SIMs and scam calls.
Unified, AI-driven disaster-management and early-warning systems can deliver geo-targeted alerts and have achieved zero-casualty pilots.
Data silos and high infrastructure costs (80-90% of AI spend) are major barriers; a consolidated data lake, common AI platform, and centralized LLMs with API-driven access are proposed solutions.
India is emerging as a global telecom leader and must align its domestic anti-scam actions with international standards and cross-border cooperation.
Resolutions and action items
GSMA will continue its Cross-Sector Anti-Scam Task Force activities and expand the proof-of-concept for cross-border data sharing in Southeast Asia.
CDOT (Dr. Upadhyay) will promote its AI-based fraud-prevention suite (Fraud Pro, Chakshu, Sanchar Sati) and disaster-management platform for adoption in other countries.
Mathan Babu Kasilingam's organization will consolidate fragmented data repositories into a single AI-infrastructure lake, expose services via enterprise APIs, and develop purpose-built LLMs for various functions.
Syed Tausif Abbas will circulate the voluntary AI incident-reporting schema and encourage telecom operators to adopt it for internal analysis and regulator reporting.
Julian Gorman recommends establishing privacy-enhanced data-sharing sandboxes and regulatory support mechanisms to enable safe cross-industry AI collaboration.
Anil Kumar Jha's request was answered with two global steps (cross-border data sharing, collective global community) and two India-specific steps (domestic anti-scam actions and sharing knowledge internationally).
Unresolved issues
How to incentivize or mandate adoption of the AI incident-reporting standard across all operators.
Specific regulatory frameworks and safeguards needed for privacy-preserving cross-industry data sharing.
Strategies to address the shortage of skilled AI talent within telecom enterprises.
Balancing aggressive fraud-call blocking with the need to guarantee emergency call availability and minimize false positives.
Detailed implementation plan and financing model for the proposed unified AI data lake and centralised LLM platform.
Mechanisms for continuous global coordination beyond voluntary task forces (e.g., binding agreements, standards enforcement).
Suggested compromises
Maintain human-in-the-loop oversight while deploying AI for large-scale fraud detection and network self-healing.
Adopt privacy-by-design and ISO 27701 certification as baseline requirements, allowing AI innovation to proceed within those safeguards.
Introduce the AI incident-reporting schema on a voluntary basis initially, using industry incentives and regulator endorsement to drive wider uptake.
Utilise regulatory sandboxes to test privacy-enhanced data-sharing solutions before full-scale deployment.
Implement millisecond-level call-blocking with a zero-error target, complemented by user-controlled tools (e.g., Sanchar Sati) for post-hoc verification and correction.
Thought Provoking Comments
The responsibility for decision integrity ultimately remains with the telecom service providers, and clear, proactive communication with customers is essential, especially when AI outcomes affect outage management, fraud prevention, and grievance handling.
Highlights the ethical and accountability dimension of AI deployment, stressing that automation does not absolve providers from responsibility and introduces the need for human‑in‑the‑loop oversight.
Set the tone for the discussion on trust, prompting panelists to frame their AI use‑cases (fraud detection, disaster alerts) around customer transparency and governance rather than pure efficiency.
Speaker: Dr. M P Tangirala
Scammers are not bound by geography or law; regulation cannot move as fast as they do. Hence we formed the Cross‑Sector Anti‑Scam Task Force with 39 organisations from 17 countries to share data and drive outcomes, while ensuring regulation focuses on results, not prescriptive rules.
Introduces the concept that collaborative, cross‑industry coalitions are essential to keep pace with agile threat actors, shifting the conversation from isolated operator actions to a global ecosystem approach.
Triggered a pivot toward discussing data‑sharing standards, cross‑border cooperation, and the role of India as a global telecom leader, influencing subsequent remarks by Dr. Upadhyay and Mr. Abbas about standards and international deployment.
Speaker: Mr. Julian Gorman
Our AI‑driven platform ‘Fraud Pro’ groups images and demographics to detect duplicate SIM registrations, has already disconnected 70 lakh fraudulent connections, and the same technology is being used for disaster management, dead‑body identification, and financial‑risk scoring.
Provides concrete, large‑scale examples of AI delivering public safety and fraud mitigation, illustrating how AI can be repurposed across domains and reinforcing the trust narrative with measurable outcomes.
Shifted the discussion from abstract concerns to tangible results, prompting other speakers (e.g., Mathan Babu Kasilingam) to reference these successes when describing their own AI infrastructure strategies.
Speaker: Dr. Rajkumar Upadhyay
We initially built many siloed AI data repositories for different functions, which created duplication and security overhead. Now we are consolidating into a single, privacy‑by‑design platform with a common LLM, exposing data via enterprise APIs to avoid silos and improve security.
Identifies a common enterprise pitfall—data silos—and proposes a strategic architectural shift toward centralized, privacy‑centric AI infrastructure, adding depth to the conversation about scalability and cost.
Prompted a deeper dive into cost and infrastructure challenges, leading to follow‑up questions about AI expenses and influencing the later discussion on AI‑driven cost optimisation and skill transformation.
Speaker: Mathan Babu Kasilingam
We have drafted a voluntary AI incident‑reporting schema with 30 key fields (including impact type, severity, affected system, cause) to enable service providers to log and analyse AI failures, similar to the early computer emergency response teams.
Introduces a governance framework for AI incidents, moving the conversation toward standardisation, accountability, and the potential for regulatory adoption, which had not been addressed earlier.
Created a turning point where the panel shifted from operational AI use‑cases to the need for systematic reporting and standards, eliciting responses from Mathan Babu Kasilingam about leveraging the standard for model refinement.
Speaker: Syed Tausif Abbas
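The session names only four of the schema's 30 fields (impact type, severity, affected system, cause). A hedged sketch of what a single report record might look like, covering just those named fields, could be:

```python
from dataclasses import dataclass, asdict

# Illustrative only: the real schema has 30 fields; only the four named in the
# session are sketched here, and the field names/values are assumptions.
@dataclass
class AIIncidentReport:
    impact_type: str
    severity: str
    affected_system: str
    cause: str

report = AIIncidentReport(
    impact_type="service degradation",
    severity="high",
    affected_system="call-blocking model",
    cause="stale training data",
)
print(asdict(report))
```

A fixed, typed record like this is what makes cross-operator aggregation possible: every provider logs the same fields, so regulators and operators can analyse incidents in one pipeline.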
Combating scams requires four pillars: securing the network, exposing risk data via open APIs, offering protective services to customers (hard‑hats), and continuously upskilling digital talent. Effective data sharing across borders needs regulatory sandboxes and privacy‑enhancing technologies.
Synthesises the discussion into a clear framework, linking technical, regulatory, and human factors, and stresses the necessity of privacy‑preserving data sharing, thereby deepening the analytical layer of the debate.
Re‑oriented the dialogue toward actionable policy recommendations, leading to the final Q&A where global and Indian steps for cross‑border collaboration were explicitly requested and answered.
Speaker: Mr. Julian Gorman
To align globally, we must prove that data can be shared securely across borders (GSMA’s United Against Scams program) and ensure that actions against scams do not inadvertently raise the cost of legitimate services, which could shift fraud to other vectors.
Highlights the unintended consequences of anti‑scam measures and underscores the importance of balanced, coordinated global action, adding nuance to earlier optimism about collaboration.
Provided a concluding perspective that balanced the earlier enthusiasm for collaboration with caution about policy side‑effects, prompting the moderator to close the session on a reflective note.
Speaker: Mr. Julian Gorman (answer to Anil Kumar Jha)
Overall Assessment

The discussion evolved from an introductory concern about AI accountability to a multi‑layered exploration of practical AI applications, infrastructural challenges, and governance mechanisms. Key comments—particularly those introducing cross‑sector collaboration, concrete AI success stories, the pitfalls of data silos, and the proposal of a voluntary incident‑reporting standard—served as turning points that redirected the conversation toward systemic solutions and policy frameworks. Julian Gorman’s coalition narrative and his four‑pillar model acted as the central pivot, linking technical implementations with regulatory and global coordination needs. Collectively, these insights shaped a cohesive narrative that balanced innovation with trust, underscoring the necessity of collaborative standards, centralized AI infrastructure, and continuous skill development to sustain responsible AI in telecom.

Follow-up Questions
What value does the voluntary AI incident reporting standard offer to telecom service providers if they adopt it?
Understanding the benefits will encourage adoption and help providers see how the standard can improve incident analysis and regulatory insight.
Speaker: Dr. M P Tangirala
Do service providers see any benefit from voluntarily adopting the AI incident reporting standard?
Seeks the provider’s perspective on practical advantages of using the standard for internal AI governance and model refinement.
Speaker: Dr. M P Tangirala (directed to Mathan Babu Kasilingam)
How should enterprises control the costs of AI, especially the infrastructure and skill expenses?
Cost is a major barrier; insights are needed on optimization strategies to make AI financially sustainable for telecom operators.
Speaker: Dr. M P Tangirala (directed to Mathan Babu Kasilingam)
Can you provide more details on the disaster‑management application that uses AI to generate geo‑targeted alerts?
Further technical and operational information is required to assess scalability and potential replication in other regions.
Speaker: Dr. M P Tangirala (directed to Dr. Rajkumar Upadhyay)
Could you elaborate on how cross‑sector collaboration can be fostered to combat scams using AI?
Collaboration is essential for data sharing and innovation; clarification is needed on mechanisms, standards, and regulatory support.
Speaker: Dr. M P Tangirala (directed to Mr. Julian Gorman)
What two steps should global leaders take to align the world against AI‑driven scams, and what two steps should India take to align with the world?
Seeks concrete, actionable recommendations for international and national coordination on scam prevention.
Speaker: Anil Kumar Jha (addressed to Mr. Julian Gorman)
Research needed: Development of mitigation mechanisms linked to the AI incident reporting standard.
The standard defines reporting but not remediation; a framework for mitigation would close the loop and improve AI safety.
Speaker: Syed Tausif Abbas
Research needed: Methods for data deduplication and consolidation of siloed AI data across telecom enterprises.
Current siloed repositories hinder security and efficiency; unified data platforms could enhance AI performance and governance.
Speaker: Mathan Babu Kasilingam
Research needed: Cost‑optimization strategies for AI infrastructure in telecom, focusing on hardware, cloud, and skill utilization.
Infrastructure accounts for 80‑90% of AI costs; identifying lower‑cost architectures and skill‑mix models is critical for scalability.
Speaker: Mathan Babu Kasilingam
Research needed: Privacy‑enhanced data‑sharing mechanisms and regulatory sandboxes to enable cross‑border collaboration against scams.
Effective scam mitigation requires sharing personal risk data while complying with privacy laws; new technical‑legal models are required.
Speaker: Julian Gorman
Research needed: Evaluation of the AI‑driven disaster‑management system’s effectiveness and its applicability in other countries.
The system has shown success in India; systematic assessment will support international deployment and standardization.
Speaker: Dr. Rajkumar Upadhyay
Research needed: Design of human‑in‑the‑loop governance models for AI decisions in telecom operations.
Ensuring AI does not act autonomously without oversight is vital for trust and regulatory compliance.
Speaker: Dr. M P Tangirala (implied)
Research needed: Impact assessment of AI‑based fraud detection on false‑positive rates and customer inconvenience.
Balancing fraud reduction with user experience requires quantitative studies on accuracy and user impact.
Speaker: Dr. M P Tangirala (implied)
Research needed: Standardization and industry adoption of a common AI incident taxonomy and schema.
A unified taxonomy would enable consistent reporting, benchmarking, and regulatory analysis across operators.
Speaker: Syed Tausif Abbas and Mathan Babu Kasilingam
Research needed: Effectiveness of crowdsourcing platforms like Chakshu for fraud reporting and mitigation.
Understanding user participation rates, detection speed, and outcome quality will inform scaling of such platforms.
Speaker: Dr. Rajkumar Upadhyay
Research needed: Role of AI in cyber‑security defense versus AI‑powered attacks within telecom networks.
As attackers adopt AI, defenders must develop AI‑driven countermeasures; systematic study is needed to stay ahead of threats.
Speaker: Dr. Rajkumar Upadhyay

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

AI for Bharat’s Health: Addressing a Billion Clinical Realities

Session at a glance: Summary, keypoints, and speakers overview

Summary

The panel examined the rapid adoption of artificial intelligence in Indian healthcare, using Max Hospital’s digital journey as a case study [1][28]. Abhay explained that Max built a 15-year patient data lake and began embedding digital technology years before AI became a buzzword, enabling real-time analytics such as predictive bed occupancy and automated data capture from clinician apps [14][22-24]. He noted early setbacks, including a search engine that returned no results due to language limitations and difficulties tagging data to ICD-11 standards, which forced the team to rely on costly external solutions [18][42-45]. Vikalp asked whether AI adoption has become a key performance indicator for hospitals and how regulation such as NABH or ABDM influences it [30-32].


Abhay replied that while there is widespread desire for AI, reality brings frequent failures that are treated as learning opportunities, especially in mapping patient data to WHO ICD-11 codes [34-38][40-44]. He emphasized that healthcare demands strict supervision because errors can affect patient safety and data privacy, limiting tolerance for variance [52-57]. According to Abhay, AI is now driven by changing consumer behavior (patients search for providers online) and by low-hanging efficiency gains such as predictive room availability and freeing clinicians from routine history-taking [69-73][80-87]. He illustrated the safety potential with a chest-pain case where an AI-assisted ECG could prompt admission, preventing a missed heart attack, positioning AI as an assistive safety tool rather than a replacement for clinicians [113-118][119-124].


Looking ahead, Abhay argued that India’s youthful demographic will soon create a doctor shortage, making predictive health, home-care delivery and AI-enabled scaling indispensable [128-138][139-144]. Dr Gupta highlighted the creation of 860 million ABHA IDs and a federated digital infrastructure that can be leveraged for universal health coverage, while noting that policy now addresses both public and private sectors [184-190][312-314]. Nikhil stressed that Indian models must be trained on local data and consider multilingual, voice-first interfaces to avoid bias and to serve rural populations effectively [208-213][218-224].


Jigar pointed out that trust involves accurate results, contextual relevance, and feedback loops, and that voice translation across languages can dramatically improve access [219-224][327-332]. Tanvi added that AI’s personalization can bridge equity gaps, but successful deployment requires education, continuous engagement and transformation beyond a one-time pilot [277-284][288-295]. Padmini warned that AI must be designed with equity in mind, starting with remote-first readiness frameworks to ensure provider trust and inclusive outcomes [319-324].


The discussion concluded that AI adoption is no longer optional but a necessary, carefully supervised strategy to meet India’s future health demands while maintaining trust and ethical standards [128-145][52-57][410-414].


Keypoints


Major discussion points


Current state of AI adoption in hospitals is mixed: enthusiasm meets reality, with many early failures that are viewed as learning opportunities. The speakers note a strong desire for AI but “desire does meet reality” and that “failures… are welcome” ([34-39]). They also stress that AI implementations must be heavily supervised because “patient safety, data privacy… have very little standard deviation” and require “a large extent of supervision” ([52-58]).


Concrete AI use-cases are already improving efficiency and safety, especially in workflow automation and clinical decision support. Examples include predictive bed-availability analytics ([22-23]), automated data capture that reduces clinician time spent on history taking ([24-27]), and AI-assisted ECG triage that can prevent missed heart attacks ([113-118]). These applications are framed as “low-hanging fruits” that enhance efficiency, accessibility, and patient experience ([85-88]).


Institutional readiness hinges on culture, leadership, and trust rather than just technology. The need for a “circumspect” approach, extensive internal learning, and leadership-driven habit change is highlighted ([90-102]). Trust is identified as the most critical factor for both patients and clinicians when adopting AI solutions ([105-108]; [119-124]).


Policy, digital infrastructure, and public-private alignment are essential for scaling AI in India’s health system. The discussion references the ABDM framework, the creation of 860 million ABHA IDs, and the shift in national health policy to address both private and public sectors ([176-188]; [312-314]). Regional perspectives stress the need for equity-focused AI readiness frameworks that start with remote settings and adapt to higher-resource sites ([316-324]).


Future outlook: demographic pressure makes AI a necessity for delivering care at scale, with a focus on predictive health and system-wide transformation over the next 3-5 years. The speakers argue that “adoption… is becoming an absolute necessity” due to an aging population and limited infrastructure, and that “predictive health… before the patient comes to the hospital” will be central ([128-145]).


Overall purpose / goal


The panel aimed to share experiences, challenges, and strategic insights on AI adoption within Indian healthcare-covering technical implementations, operational impacts, cultural and trust issues, policy frameworks, and the long-term vision for scaling AI to meet the country’s growing health-care demand.


Overall tone


The conversation began with optimism about AI’s potential, shifted to a realistic appraisal of setbacks, failures, and the need for rigorous supervision, then moved toward a cautious but hopeful stance emphasizing trust, leadership, and policy alignment. By the end, the tone became forward-looking and collaborative, emphasizing collective responsibility to build an equitable, AI-enabled health ecosystem.


Speakers

Abhay Soi – Leader at Max Healthcare (CEO/MD). Expertise: Healthcare AI integration, digital health transformation.


Vikalp Sahni – Partner / Co‑founder at Eka (AI health‑tech startup). Expertise: Digital health adoption, AI strategy in hospitals.


Nikhil Dhongari – Health‑tech strategist involved in ABDM implementation and AI model development. Expertise: AI for public‑sector health, federated architectures.


Padmini Vishwanath – Researcher, WHO SEARO. Expertise: Health equity, digital‑health policy, AI ethics and normative guidance.


Dr. Rajendra Pratap Gupta – Advisor to the Health Minister; key architect of ABDM and National Health Policy. Expertise: Health‑policy design, digital health standards, public‑private health integration.


Jigar Halani – Director, Enterprise Solutions Architecture & Engineering, NVIDIA South Asia. Expertise: AI infrastructure, cloud & edge computing for health.


Deepak Tuli – Panel moderator (Eka). Expertise: AI in healthcare, facilitation of multi‑stakeholder discussions.


Tanvi Lall – Analyst, PeoplePlus (initiative of Aikstep). Expertise: AI adoption trends, transformation design for health, education & agriculture.


Announcer – Director IT, National Health Authority (leads Ayushman Bharat Digital Mission). Expertise: National health‑IT architecture, large‑scale digital health programmes.


Audience member 1 – Unnamed participant (questioner). Role/Expertise: Not specified.


Audience member 2 – Dentist pursuing an MBA in analytics. Role/Expertise: Dental practice, health analytics.


Additional speakers: None.


Full session report: Comprehensive analysis and detailed insights

The session opened with Abhay Soi welcoming the audience and describing Max Hospital as a microcosm of the Indian health system, noting that patient trust underpins its performance and that its occupancy rates are 10-15% higher than competitors [1-5]. He then outlined Max’s digital foundation: five to six years ago the organisation embedded digital technology at its core, creating a 15-year-spanning patient data lake that now powers real-time electronic medical records [14-15].


Early technical setbacks were recounted. An attempt to build a “closed-loop” system similar to Google’s search failed because the engine could not return results in native Indian languages [18-20]. Mapping historic records to the new ICD-11 taxonomy also proved difficult; off-the-shelf solutions were either ineffective or prohibitively expensive, forcing Max to develop in-house capabilities and accept a high failure rate as part of the learning curve [42-47].


When asked about the broader AI landscape, Abhay highlighted a gap between enthusiasm and reality: “desire does meet reality” only sporadically, and the journey is marked by frequent, welcomed failures that sharpen future outcomes [34-39]. He stressed that health-care tolerates virtually no variance in safety or privacy, demanding extensive supervision for any AI deployment [52-58].


Two principal drivers now push AI forward. First, changing consumer behaviour: patients increasingly search for providers online and evaluate doctors through digital assets, which pressures hospitals to improve ESG scores and online presence [69-74]. Second, low-hanging efficiency gains such as predictive room-availability models and workflow automation are delivering tangible benefits [80-88]. These drivers have made AI an important strategic focus for senior leadership, as Abhay noted when asked whether AI is a priority for CEOs [60-66].


Concrete clinical examples illustrate the safety-first approach. Max uses AI to predict vacant beds, enhancing operational efficiency [22-23]. More critically, an AI-supported ECG interpretation can flag subtle abnormalities that a human cardiologist might miss, prompting admission and potentially averting missed heart attacks [113-124]. Such assistive systems are presented as augmentations rather than replacements for clinicians, reinforcing the need for human oversight [52-58].


Institutional readiness, according to Abhay, hinges on culture and leadership rather than technology alone. He described a “circumspect” stance, a steep learning curve for thousands of staff, and a schedule now filled with meetings with technology vendors (about 30% of his meetings involve people from a technology background) to evaluate solutions that could improve outcomes [90-102]. This cultural shift aligns with Jigar Halani’s observation that the biggest barrier is a mindset change: moving from scepticism to belief that AI can solve long-standing problems [333-337].


Trust emerged as a recurring theme. Vikalp Sahni warned that patients trust their doctors more than any new gadget, raising the question of how AI can coexist with that trust [105-108]. Abhay responded with the ECG example, arguing that AI can enhance safety without eroding the doctor-patient bond [113-124]. Jigar added that trust is built on accurate, context-aware results and continuous feedback loops [219-237], while Tanvi Lall stressed that education, stakeholder engagement and transformation beyond one-off pilots are essential to embed AI into workflows [286-295][327-332].


Policy and digital infrastructure were highlighted by Dr Rajendra Pratap Gupta, who recounted the evolution of the Ayushman Bharat Digital Mission (ABDM) from a 2014 manifesto idea to a nationwide backbone now supporting 860 million ABHA IDs [184-190]. He noted that the latest National Health Policy explicitly addresses both public and private sectors, signalling a move away from siloed approaches [312-314]. Nikhil Dhongari linked this federated architecture to the need for Indian-specific AI models that avoid bias and can be trained on local data [203-210].


Looking ahead, Abhay argued that AI is becoming an absolute necessity due to India’s demographic dividend: a youthful population will age over the next 15 years, creating a doctor shortage that cannot be met by expanding physical infrastructure alone [128-138]. He envisages predictive health that intervenes before patients become ill, home-care delivery, and the replication of clinical expertise through AI-enabled tools [139-144]. Padmini Vishwanath reinforced this forward-looking view, noting a shift from purely quantitative metrics to qualitative dimensions such as empathy, dignity and caregiver-patient interaction, especially in palliative-care pilots [353-356].


The panel presented several different viewpoints. Vikalp asked whether AI adoption should be accelerated, implying a “crazy” rush [30-33], while Abhay cautioned that premature rollout without supervision leads to failures and safety risks [34-39][52-58]. Dr Gupta identified unethical prescribing practices as the chief barrier to AI uptake [407-414], whereas Abhay focused on technical challenges and the need for supervised AI [52-59]. A further contrast emerged between Padmini’s call for qualitative outcome measurement [353-356] and the efficiency-oriented narratives of Abhay and Tanvi, who highlighted predictive analytics and workflow gains [22-23][84-88]. Finally, Vikalp stressed that patient-doctor trust must precede AI, while Jigar argued that trust can be earned through model accuracy and feedback loops [105-108][219-237].


Potential next steps discussed by the panel


– Treat AI adoption as a formal strategic priority, with senior leaders allocating resources and oversight [60-66].


– Mandate electronic capture of clinical data to feed AI pipelines, building on Max’s data-lake approach [14-15].


– Prioritise safety-critical AI pilots (e.g., AI-supported ECG interpretation) before scaling to efficiency use-cases [113-124].


– Develop voice-first, multilingual solutions to address language barriers identified in early closed-loop attempts [18-20].


– Foster public-private data-sharing partnerships to expand the ABDM ecosystem and enable Indian-centric models [184-190][203-210].


– Launch ongoing AI literacy programmes for clinicians, nurses and administrators, echoing Tanvi’s emphasis on education [286-295].


Unresolved issues remain around definitive regulatory standards for AI validation, the optimal balance between cloud and edge deployment for cost, latency and data sovereignty, and mechanisms to enforce ethical prescribing through AI-enabled monitoring.


In conclusion, the panel converged on the view that AI is indispensable for meeting India’s future health-care demand, but its success depends on a coordinated ecosystem that blends strong data infrastructure, rigorous governance, trust-building cultural change and policy frameworks that keep pace with technological advances. The discussion charted a roadmap from early, supervised safety tools toward a broader, equity-focused transformation of the Indian health system.


Session transcript: Complete transcript of the session
Abhay Soi

Thank you very much for having me here at this very, very prestigious event. I just came in from Mumbai in the morning, and what I see over here is, I mean, I think it seems to be the microcosm of the globe, in fact. So thank you very much. Yes, I think, you know, I take all these compliments on behalf of Max, and I think it starts and ends with the trust which is sort of reposed by patients at our hospital system. Today, our occupancy is at least 10% to 15% better than the next best player in the hospital system. And, you know, one of the things that I want to point out is, you know, AI seems to be sort of the buzzword, of course, today.

But five or six years ago, when we started our journey, we started bringing digital technology at that point in time to the core. And what you see today, what you experience, and, you know, you mentioned better outcomes, and perhaps… patient services. But that is what you experience. What you don’t see is the technology behind it. And I think that is the true test of technology, and that will be the true test of AI as well. When you don’t interface with technology, but the experiences are improved. Having said that, I think, you know, like I said, we started this journey a few years ago. We started by creating a common-size data lake for all the patients which have been through our doors over the last 15 years, and which are doing so on a real-time basis today.

Having said that, you know, these were our attempts. We tried to sort of create a closed-loop system, like Google, so to say, for our doctors and our patients. But, you know, we, like many people, faced very early, very big setbacks because we didn’t have the technology. Because when we used to do search results, we used to get zero results in the search engine because it wasn’t sort of native to the language, and that’s stuff that we’ve been playing with. But having said that, I think the early days of AI are going to impact tasks rather than, although one is moving towards institutional, adopting it from an ecosystem standpoint, from inculcating it within the institution, so it becomes an intrinsic part of the institution.

But I think today it’s affecting our tasks. It’s affecting tasks of efficiency. You know, we’ve already started doing predictive analysis of beds which are vacant and available and so on. It’s working on safety measures. I think though the early sort of wins that we have, especially with respect to patient satisfaction of the risks and so on, I think clinical support, you know, it’s data collection, a lot of… time by clinicians was being spent in the past to collect data. Now a lot of that data is being collected through forms which are in our apps today. And you can speak to them. It kind of collates in a particular manner. So the clinician actually spends less time in perhaps gathering history than in providing a little more value up the value chain.

Vikalp Sahni

Great. And I think, Max, I mean, as you mentioned, there is this data lake that you have created is quite ahead in terms of digital adoption. And I’m sure when you would be starting, and this is a term that we use and see quite a lot, that adoption of digital in a hospital. So is that what is also happening on top of this large data lake and the EMR solutions that you have created on, for Max, is there like an AI adoption wave that is happening? And do you think, like, when the digital adoption happened, things such as NABH, ABDM, many of these things started coming up, talking about policy, talking about regulation. In AI adoption, are there any challenges, any things that you see that can help in this adoption to be much more faster?

Or you see that people are just going all crazy on getting AI adopted in the hospital settings?

Abhay Soi

I think, you know, there is a desire all across, you know. But having said that, desire does meet reality. I would say more than occasionally. And that comes in the form of failures, which are welcome, sort of. I mean, we quite welcome it, actually, because the more you try, the more you will fail, and the more you will sort of have better outcomes coming into the future as well. So we’ve had a lot of failures, I can tell you. You know, whether it is the longitudinal data of the patient, or looking at, you know, ICD-11 norms, you know, tagging our data with respect to the WHO. I mean, we’ve been failing left, right, and center.

We’ve been reasonably successful as far as ICD-10 is concerned, but I think 11, you know, most sort of layers that are available in the market don’t work. The ones which work are very, very expensive. You know, so we’ve started, you know, we’ve been in-housing a lot of this. So we’re toying around with it. I have no doubt that the speed at which we are failing and the amount of failures that we have, shortly we will run out of all excuses and failures, and it will be like Edison, right? You would have found out every way to fail, and I think perhaps the only way to succeed will be in front of us. So, yes, there is a lot of enthusiasm, you know, towards adopting it.

We see this as a future. I think everybody does. There is, we, of course, have to be very, very careful. Because unlike, you know, let’s say something like education, where if you’re imparting perhaps incorrect information, you know, it can be resolved. But this is healthcare. I think patient safety, data privacy, these things are right up there. We have very, very little, you know, standard deviation possible in what we do. And so it requires a large extent of supervision, I would say. And perhaps it will continue to for years to come. Although it makes life easy for most, but, you know, at least from a clinical prescription outcome, it will require a lot more supervision to come as well.

Vikalp Sahni

Sorry. So is it now the priority for hospitals? For example, when the digitization adoption happened, it used to be a priority for CEOs that, okay, you have to make sure that all the billings are online. You have to have all the JCIA. You have to have all the discussions and UHS IDs created and so on and so forth. Is this a priority today, adopting AI at hospitals and as KRAs for your CEOs or operators? Is that what, has AI reached that level today at hospitals?

Abhay Soi

No, I think clearly, clearly it has. And it’s really out of, I think, two sort of drivers that I find at least, and you know, this is at a very, very, at the outset level. I think one is the way the world, the consumer behavior itself is changing. I mean, a lot of the searches used to happen on Google, a lot of the searches now are happening through different platforms altogether. And the way they sort of seek, whether it’s their thought about your website or, you know, when they’re looking at, if you simply ask which is the best cardiologist in Delhi, you know, there’s a different way of people reading into that early and there’s a different way of people.

So you have to, whether it’s your collateral, whether it’s your digital assets and so on and so forth. You have to make those changes. Information, I mean, if I look at ESG, if I look at investor ESG, how do I improve my ESG score? I mean, it’s like an encyclopedia out there, right? You just ask it any question, it tells you to do so. How should I present my annual report? How should I present myself? I think, you know, pretty much, you know, it is intrinsic to now everything that we do from that standpoint. Second is how can I use it as a tool to improve efficiency? And these are low-hanging fruits. I’m talking about the low-hanging fruits before we even, I mean, kind of, you know, absorb the entire ecosystem or create that ecosystem or participate in that ecosystem.

Make it a part of institutional habits, right? I think even prior to that, when we’re looking at it as a task stage, you know, how is it that I can improve? Now, you know, if I have a particular waiting for my patients, okay, how can I do predictive analysis of room availability? When I’m looking at discharge, how can I sort of this thing? When it comes to patient summary, how do I get, how do I unlock the time that my doctor spent on patient sort of history and so on and so forth. That all improves efficiency, all improves outcomes. See, eventually the lens we are looking at it from is efficiency, is accessibility, is safety, is clinical support, and finally the experience. I mean, it’s quite a bit of breadth that you’re looking at with AI.

Vikalp Sahni

But a little bit more in generic terms: everybody says that technology is moving very fast, or things are changing so fast, AI is also changing so fast, and we also keep doing that, like we want our businesses, our operations, our sales also to run very fast. What is more your internal feeling: are health institutions moving as fast as the technology is moving? Even people in the organization, be it doctors, be it nurse staff, all of them are looking at it. And a lot of this is about the India AI Summit as well, because the government is looking at educating people on how fast things are changing, and we should all be ready for it. So what are your views on your institutional readiness, people in your institute, and in general on this whole AI moving fast?

Abhay Soi

So I think, first and foremost, it also depends on, you know, the institutional culture. We are very clear about one thing: that we have to be more sort of circumspect about it than anything else. We must go up the learning curve as far as AI is concerned. Things are changing very, very fast. We have close to 43,000 healthcare workers who provide healthcare. You know, that means there are thousands, if not hundreds of thousands, of work processes. For us to adopt AI in any task means, you know, you have to change a huge amount of attendant work processes, even if this layer sits on top. And having said that, okay, is there something else which is better out there?

Is there something which will disrupt this further and so on? Should we wait for something to be adopted by and large to see, sorry, to see what the efficacy of that is and see what the, you know, see once it’s sort of established before adopting it. To me, look, having the first mover advantage in this is not going to do anything. But getting it right is. I think because we can’t afford to get it wrong. These are human lives, these are people. So I think there’s a huge amount of learning within the organization which is happening, and I meet, you know, phenomenal people across the board, okay, for various aspects. I think since the morning of today, if I look at my sort of schedule, 30% of my meetings would be people, you know, from a technology background, pitching various sort of applications where our lives can be improved and outcomes can be improved and efficiency can be improved and so on and so forth.

But, you know, at the very least, you have to be very, very circumspect about what you’re going to adopt and what you’re going to roll out.

Vikalp Sahni

No, and I think you touched a very important point that we learned at Eka. We earlier did a travel startup, me and Deepak. What we realized is that in health, there is obviously innovation that people are looking for, but trust is the most important thing. I can bring a cool idea or there could be a cool way of doing a diagnosis at a clinic, but I as a person would trust only the doctor that I have spoken to or that I have been told about. So there is this, and that’s the reality that we learned when we started doing health: that yes, innovation is definitely important, but trust is key. And I think Max has been trusted over the years.

And to be very honest, we also don’t know how to balance that out. The trust that has been created for institutions, for doctors, and now these technologies that are coming in, where it asks questions to the patients and gives relevant next suggestions. This trust factor is kind of getting a little sort of changed. Any views that you have, especially when it comes to patients, people trusting doctors to AI to institutes, any change that you see, and even doctors looking for AI solution and whether they feel that this is right now not as good?

Abhay Soi

So, you know, I’ll give you one example. At most hospitals, at least once every couple of months, you will have a patient who will come in, okay, with a pain in the chest. You do the ECG, and the ECG to the doctor seems sort of normal. He speaks to the cardiologist, okay, and the cardiologist says, okay, I don’t see anything wrong with it. And the patient is sent home, and he has a heart attack, right? Because ECGs, although they’re extremely, extremely common, okay, can be very, very nuanced. So, an expert cardiologist, okay, may be able to catch a particular movement there, okay, while somebody else may not. But even the expert cardiologist on a good day may be able to catch a particular movement.

But he may not be able to catch it on a bad day. Right now, I’m not saying AI in its sort of this thing is complete, but when a patient comes to the ER, okay, I think it’s absolutely necessary to use that tool, okay, because that tool says “requires admission”, okay, whether the patient’s doctor sees it or not, admit him. Okay, look, by the end of the day you may admit 150 instead of 100 actual patients, but don’t let that one go. I think that’s the important thing: if you’re able to use this as an assistive tool to augment your capabilities, okay. And I think that is what is emerging today, you know. I think it’s a little too far out to say whether it will replace the clinician or not, okay, but I think right now clearly that is a very, very essential tool that you can use. And let’s start with safety before we go to efficiency or anything else, you know. So I think a very simple example like this, okay. And it depends, it starts with leadership, moves to institutional sort of habits, okay, to be able to adopt something like that, change your work processes, because there are umpteen work processes which have to change. Okay, doctor, when a patient comes, where do you move him? When you see this sort of ECG report, you move him to the cath lab, okay, and which is a 13-minute sort of this thing, but that’s also preparatory time, right? You’re doing it within the golden hours. You sort of move him into, uh, you know, the ICU. How do you sort of interact with the doctor? You have to call the doctor. Let’s say it happens at three o’clock at night, okay, the doctor, the cardiologist, has to come from his home and so on and so forth. So the entire dance starts, right, okay? But you have to make sure that, you know, you can use this tool to err on the side of caution. But I think at the very least, that’s what you need to do.

Vikalp Sahni

And I think you touched upon these complex healthcare processes. When we look at it from a technology perspective, this is what AI can solve for: extremely complex processes that today have multiple human touchpoints, something as simple as making an emergency call to a specific doctor with all the respective context, which can now be optimized and can save lives. That’s the sort of thing we keep discussing during our board meetings. But beyond that, in health and non-health as well, what’s your view on the next five or six years? Yesterday there was this conversation with Sam, where AGI will be here by 2028.

What is your view on the next three to five years? Now we can’t even talk in decades, right? It seems like we don’t know what will happen in a decade. But how do you see the next three to five years changing things in your hospitals, or in healthcare in general?

Abhay Soi

I think dramatically. Adoption is accelerating, and not because hospitals or healthcare providers desire it; I think it’s becoming an absolute necessity for the country. One of the things, perhaps one of the major things, that propels our country forward is the demographic dividend. The average age is 28 or 29. But make no mistake, 15 years down the line it will be very, very close to the European average, and that’s when people will require medical intervention. There just isn’t enough infrastructure and there aren’t enough doctors in the country; it’s actually not even enough for the population today. I can certainly tell you that 15 years down the line, there isn’t enough infrastructure that can possibly be built.

There isn’t enough money here, or, I mean, we’re just a little too far behind the curve, right? And if we have to solve this equation as far as healthcare is concerned, you have no choice: it has to be about predictive health. It has to be about being able to predict that a patient is going to fall sick before he even comes to the hospital, and making amends there. Reaching out to people, unclogging the hospital infrastructure, home care and so on and so forth. Being able to replicate the capabilities and skill sets of doctors to take them to patients, and so on.

I think all of that is a necessity. Without it, we will fail the future generation. So there’s no question about it. This is here; the future is here today.

Vikalp Sahni

And especially with the whole vision of making India a developed country, we have to leapfrog, and many of these technologies can help us leapfrog in the way you were explaining. Thank you so very much, Abhay, for your deep insights. We all love Max and the kind of work you are doing, and we look forward to more and more AI coming together at Max to solve problems for doctors, patients, and all of us. Thank you very much.

Abhay Soi

The pleasure is entirely mine. Thank you so much.

Announcer

as Director IT at the National Health Authority, where he leads the technical architecture and implementation of flagship national initiatives including the Ayushman Bharat Digital Mission and Ayushman Bharat PM-JAY. We welcome you, sir. We have with us Ms. Padmini Vishwanath, Researcher at WHO SEARO, the Southeast Asia Regional Office, bringing a regional lens to health equity, digital health policy and evidence-based transformation in low- and middle-income countries. We welcome you, Ms. Padmini. And last but not the least, we have Mr. Jigar Halani, Director, Enterprise Solutions Architecture and Engineering at NVIDIA South Asia, a 20-year technology veteran driving innovation in supercomputing, big data and AI infrastructure, and a trusted advisor to government and industry on AI strategy.

I now hand over to Deepak to lead the panel discussion. I think we are short of space, so I’ll manage by standing here.

Deepak Tuli

Thank you very much. That was a great session; Vikalp and Abhay have just left. We are short of time, so I will try to leave 5-10 minutes at the end for audience questions. I would like to start with Dr. Gupta. Dr. Gupta, we were talking last night: you were instrumental in defining the whole first white paper around ABDM, how it all started. There has obviously been a lot of progress from when you conceptualized it back in 2019-2020 to today. What do you think has really worked in making it a reality, and what are the challenges? Going forward, how do you see this whole movement of documentation and interoperability between patients and providers starting to impact clinical decision-making for the physician?

Dr. Rajendra Pratap Gupta

Thank you, Deepak, and thank you, Vikalp, for this wonderful session and for giving me the opportunity. So it started actually in 2014: it was in the BJP’s manifesto, where I wrote it, then in the National Health Policy in 2016, and eventually when I was advisor to the Health Minister. Firstly, we should compliment the ABDM team. There is no precedent. There is no precedent for creating records for a billion people; how do you go about doing it? People like Vikalp and you know that every time you take a bold step, there are naysayers who will say, “Bijli nahi aati, aap kaise karoge?” (“There’s no electricity, how will you do it?”). Today we have 860 million ABHA IDs. So if I look at the reality today, and I know I am sitting to the right of the Director IT.

We have created the digital infrastructure. Now we have to leverage it to empower the people who are going to use it. I see a future where we will not have people juggling multiple schemes; that was our biggest problem internally, and I can tell you why it got created, and there are more reasons too. Eventually, technology will allow us to use resources optimally, to be clinically precise in treating people, and to remove redundancies. And my boss, who is still the Union Health Minister, and I agreed fundamentally that it will be tough to send doctors to rural areas; they study for 12 years to make their lives better. Of course we want them there, but it will take time to build the infrastructure where they can stay in rural areas. We believe that digital health solutions will be able to leverage this backbone we have created to serve people in the areas where they need them the most. From the golden hour to platinum minutes: I believe the digital health standard will finally be that within a minute you can get what you need, at least for primary care. So I am very optimistic. And Vikalp was right: we used to talk in decades back in 2013-14; now we talk about three years at most, and a few months is even better. So I think this is a time when we should be really optimistic about the vision we were able to build, thanks to the people who implemented it.

You know, we had COVID on our hands, and we saw 2.2 billion vaccine doses administered, with people not calling anyone up, just going to the app and getting it done. So the creators are in the room, the implementers are in the room; we ideators don’t need to worry much. Thank you.

Deepak Tuli

Thank you very much. That’s very nice. Moving to your left: Nikhil. Nikhil, you have done a phenomenal job in driving adoption of ABDM in the public sector, but we definitely see a lag in the private sector. What have you been learning? You have deployed ABDM at large-scale public-sector hospitals, where there is obviously a massive load of patients walking in and very limited physicians and staff to support them; digitizing appointments has gone a long way. How do you see it moving into the private sector, and how do you see it getting even deeper into the workflows, which will really help drive better outcomes?

Nikhil Dhongari

Solutions can be developed by Eka and other health startups, because ABDM has created the federated architecture in which those models can be tried. Simply building an algorithm doesn’t make a solution in the health sector; as Max just emphasized, safety is very important. What is missing in the foreign models is that they are not tried on Indian data, and we especially can’t neglect the rural population and the small hospitals where most people go. ABDM has created HMIS solutions through which we have access to the longitudinal records of patients, on which our Indian models can be tried and where we can actually succeed. The ground is fertile enough right now for Indian startups to come in and try.

And in your models, especially because of the federated architecture, you don’t need LLMs; you just need SLMs and some smaller models, so that the model is not biased. Because I can’t look at bias only from the technical angle here; you have to keep both clinicians and technologists involved, so that context data is available from across India and across the population, where the subject is a billion clinical realities. And the AI models should be not only transactional but conversational, because the literacy rate is very low. So now is a fertile ground for Indian startups to come in and show the brand value of Indian startups. That’s where I see it.

Thank you.

Deepak Tuli

Thank you very much, Nikhil, very insightful. You touched upon cloud infrastructure, and we have Jigar here. Jigar, cloud infrastructure has made AI scale; everyone is using ChatGPT today. Infrastructure, sovereignty and trust are hot topics; we’ve been hearing these words for the last five days more times than I can count. They become super relevant for health, as we heard in the last session, especially trust. How do you think the models or the companies building in India can bring that trust factor, so that physicians and operators like Abhay will trust the solutions and start implementing them, which will really help people like Nikhil in building those models for the country?

Jigar Halani

So I think it’s a deep question. Trust has many aspects, if you ask me honestly. Trust in my language could just mean the most accurate results, and I’m happy, because I’m a fast-moving IT professional; we are known for it. The event gets over today, on Monday we are back to work, and we know we are going to slog again for the next five days to make something better and bigger. Trust for my mom would be a very different storyline, because for her everything on the priority line is health, nothing else. For me, plus or minus 1%, 3%, 5%, 10% is also okay; for her, nothing less than right is acceptable. And trust for a mother with a newborn in her arms is going to be completely different again. So I personally feel trust has many layers. But at the fundamental layer, what model builders are trying to do is still accumulate the knowledge available on the web. What we haven’t done is go back to our own data, and that’s where I would build on the numbers India is achieving, which I was not aware of. I know ABHA pretty well; I have an ID myself, although I have not used it yet. I am a registered user, though I won’t enrol myself into every scheme that the government comes out with, just to understand where all the connectivity is possible. But once we have this data, how do we make better use of it? So that I bring in not just the context of India, which is so important, and what Vikalp is trying to do on the language side of the story, which we all understand, because language is so important to us, but also, imagine, the environmental changes I experience from place to place, the changes in my body because of them, and on that basis which medicine helps me better, and so on and so forth. It has its own chain of things. How do I bring that data into the ecosystem, thereby making those models more and more efficient, better, and in the lingo that India understands, not just in language but also in the lingo of health? For example, I come from Gujarat but I stay in Bangalore, and I know for sure that environment is not suitable for me, right?

And I keep sneezing, because of the pollen of course, for many months; but the moment I go back to Gujarat, I’m absolutely normal. Whether it is extreme cold, extreme heat, or raining, it doesn’t matter at all; I never sneeze there. That’s just one example. So, number one, how do I bring that data into the ecosystem? And number two, how do I train those models more efficiently and serve them back to users? That’s one aspect of it. The second aspect is that, unlike language, in healthcare we need a very large number of citizens to participate and give us a strong feedback loop on what they are getting from the models they are inferring with.

Right? For example, in your solution, which I’ve now seen a number of times at the demo booth: if a patient is talking and going through the recording he or she has just made, for him or her it’s the most important thing. For the doctor, it’s on to the next patient. But the patient will definitely go back and check the recording, as we all do, and for the rightful reasons. We check for a second opinion with another doctor, but that information stays only with me: what did the second doctor tell me?

I check with you as a doctor, and you say it’s a big operation. I think I should take a second opinion, so I go to her, take another opinion, and they both say the same thing. Then I still Google it and take an opinion there; it’s free consulting. And I say, you know what, it looks like I need to get operated on, but we’ll wait. Four days later I come back to the doctor with four questions. So the user also needs to put feedback back into the ecosystem by using these models, and then it gets democratized. I think that’s how the trust layer grows.

This is at a very high level. At the policy level, things are going to be very, very different; I’m sure it’s a topic by itself, and some other day we’ll work on it.

Deepak Tuli

Thank you very much. This “Google doctor” has been very popular in clinics; when we meet doctors, they hate it. I have seen a board many times outside a physician’s cabin: “No Google doctor, please.” So, next question, Tanvi. We’ve been talking about private hospital infrastructure; there is a mass of high-quality infrastructure available in the country with really great physicians. On the other side, we have the public sector: massive pressure, fewer physicians. When builders think of building solutions for both of these settings, should they think of a single solution or two different solutions? How do you see it going forward?

Tanvi Lall

Yeah, so at People+ai, which is an initiative of EkStep, we do a lot of analysis of adoption trends for high-need populations: basically, for people who are building in healthcare, education, agriculture, what’s the uptake? Who’s building what? Who’s not taking third-party solutions and trying to build internally? A couple of points have emerged in that thesis. The first is that because AI is meant to be personalized and context-specific, and can deal with multilinguality and voice, there is a lot of opportunity to bridge some of the inequity gaps that exist. So today, as a builder, you can imagine solutions in some very, very regional, low-resource languages for the different beneficiaries.

And you can design them to be voice-first, which in a way induces trust, because now users are speaking to someone rather than just reading an answer without knowing who is behind the solution. So the first aspect is that AI is meant to be personalized; when you’re building solutions, and I’m going to go a step further and say it’s beyond a solution, it’s a transformation, you can create very customized transformations. That’s number one. The second thing is that it’s a very fragmented value chain: in the case of healthcare, someone is paying, someone is using the technology, and someone else is ultimately benefiting from it.

What we’ve realized is that when you’re designing these transformations, a big part of a builder’s journey is not just making the tech stack, but spending time with the people who will be adopting it, educating them at different levels to explain how this tech could be consumed or improve their lives. There are 700-plus healthcare startups in India doing all kinds of pilots and demos right now, and what we’ve realized is that the demo phase goes really well, for three months, six months, because adopters, who could be hospitals or other institutions, sometimes act from a place of either fear or hype: “I want to be aware of what’s going on, so I’ll do the demo.”

But after three months, this is just a side window in my browser that I never go back to, because it was never thought of as a solution I would embed into my workflow. So you have to think of this as a journey from the start, and not just a one-time switch: getting that one-time contract or that one-time demo and imagining it will convert into some kind of impact. Now, building that trust is very different in a private hospital, which is maybe much more urban and much more aware of what’s going on, versus a PHC and the people in the PHC. So the tech stack and the solution are one piece of it.

But you should be designing the transformation, which comes with education, awareness, trust-building activities, creating safety, and maybe feedback and evals that make sense for a PHC versus a hospital, which might be very different. So when you’re thinking of the transformation stack, it has to be very different, and transformation is about much more than tech. That’s where people should be spending a lot more time as builders: it’s not just about cracking that first pilot or first deployment, but asking what it will take to go from pilot to population scale. Because that is a very different journey; that’s a systems journey, not always a tech journey.

Deepak Tuli

No, that’s super helpful. Continuing the same discussion: Dr. Gupta, when you look at policymaking, do you look at these two segments very differently? Do you think of health as one single sector, or do you start defining how it will work in the public sector and how it will work in the private sector?

Dr. Rajendra Pratap Gupta

So if you look at the National Health Policy, this is the first time that I actually wrote the line for both the private and the public sector. In 2002 it was mostly written, and even implied, that it was only meant for the public sector. If you really want to deliver care, you have to break that barrier between private and public; that’s how you will deliver care. When a patient has a problem, he doesn’t ask whether a hospital is private or public; he goes to the first hospital where he can get care. That was the thinking behind it, and that’s what the policy is like.

Deepak Tuli

Oh, that’s great, I learned something. Moving on to Padmini: from your regional vantage point, how should AI systems be designed differently to reflect the diversity of contexts, capabilities and care realities across countries?

Padmini Vishwanath

Yeah, thank you. First of all, thank you so much for having us today; WHO is very glad to be representing the work that we do. Listening to my co-panelists, it’s interesting to hear about the importance of tailoring, and, how do I say, a little anxiety-inducing for me, because the work that we do is on the other end of the spectrum: how do we create norms, and normative guidance, to ensure that AI is equitable and moving in the right direction? So I’ll talk from the regional perspective. We work with eight countries across SEARO, and all of these countries and systems are at very, very varying levels of digital maturity, right?

But what we often find is that AI tools are developed for the most advanced, most connected tertiary institutions, and then adapted later for more remote settings. We are finding that some country pilots are looking at reversing this logic: we start by developing readiness frameworks for the most remote settings, understanding frontline capabilities, device availability, and all the other factors that matter in AI readiness, developing a framework for that level of remoteness, and then scaling it. And we do see that in contexts where we do that, there is higher provider trust and more equity. So from our experience, we feel that maybe we need to slow down a little and look at how we can modernize existing legacy systems rather than building on and adding new systems.

Yeah, I’ll stop there for now.

Deepak Tuli

Please, any closing thoughts from each of you? Maybe starting from here.

Jigar Halani

I’ll go first: voice. It’s the common factor. It is horizontal, not vertical, and it’s very, very important for the country. If I can understand what a Tamil doctor is saying to a patient, convert it into Hindi, and have that deployed in Delhi and Gujarat, I’m home, essentially; I’m solving many problems that have been lingering in the country for years. That by itself is a reward to the country, and we should be fully leveraging it. One thing I’m very happy about is the mindset change; that’s going to be the biggest thing. It’s not a technology problem, it’s a mindset problem. And every single person I’ve talked to has started to believe that the time has arrived.

Nikhil Dhongari

I will say two things. One is the thought process. I am very happy that so much discussion is going on about AI, because for any technology to reach the public, the thought process is very important, and this summit has created that impact: everyone is discussing AI, from a rickshaw puller to a CEO, and that discussion is very important for building systems. Second, I visited a few of the startup stalls, and I am very happy to see some startups doing really great work, like Eka Scribe, which small clinics can use with ABDM to reduce the burden of non-clinical work on clinicians.

And there is one company doing very good work on data anonymization, because after the advent of the DPDP Act, many people have models to train, and data privacy and patient consent are very important. They are working really well in India, and I’m very happy such companies are there and doing wonderful work. So I’m very glad.

Tanvi Lall

I think for me it would be the emphasis on AI-ready data systems, because across sectors everyone is realizing that AI is only as good as the data that the model and application layer have access to. And I really want to give you credit for that, because you are pioneers in terms of putting data out and making it available: that MCP server that came out, in fact, we cite it as an example. We are working very closely with MoSPI right now; they want to make their statistical datasets available to the world, and they put out their first MCP server about a week ago. The fact that institutions are not just extractive when it comes to data, but want to give it back so that others can build on top of it, is very important. In health, it’s crucial that this happens, because otherwise no personalization is possible.

Padmini Vishwanath

So I would say that so far we have looked at a lot of quantitative measures of adopting AI in health: diagnostic accuracy, number of patient visits, et cetera. But this time around we are seeing more discussion of the qualitative dimensions: empathy, dignity, care. And it’s interesting, because in one of the pilots we are conducting on palliative care, we hadn’t even thought about it, but for a caregiver and a palliative care patient visiting a nurse, that visit may be their only source of human connection during the week. So how does AI change that dynamic of caregiving, in those little moments they spend together in the clinic?

So I think the increased conversation around this, and the acknowledgement of not just the quantitative but also the qualitative dimensions, is something I’m personally really looking forward to.

Deepak Tuli

Thank you. The objective of that question was not to get a promotion for Eka, just a disclaimer. No, but thank you, this was super insightful. Audience, any questions?

Audience member 1

Sir, I just had a question. You spoke about voice and language. Today, most of these models, 90% of them, are on the cloud. Do they need to be on the edge only, on the cloud, or hybrid?

Jigar Halani

No, no, of course. Do you use ChatGPT? Yes. And not one of those servers is hosted in India. (No, I’m just saying the cost factor is also there, and data privacy as well.) The moment you account for cost, as long as it is hosted in India, I think we are home; I don’t think it could ever be cheaper.

Audience member 1

So I was just asking for a suggestion from you. For someone creating such a solution for voice and translation, multilingual, say targeting 22 languages: where should the model or the inference server be hosted? On the edge, on a gadget like a mobile phone or an audio recorder, or hybrid?

Jigar Halani

I would say it depends on the use case. If you have a very particular use case, a very tiny one in a remote place, edge would be the solution; you don’t have a choice, because you will be lacking connectivity and a few other things as well.

Audience member 1

Would it synchronize once a month, once a week, or once a day?

Jigar Halani

No, voice is something for which you need to have connectivity in play.

Audience member 1

Okay.

Jigar Halani

You can’t be running it fully offline; that’s my view, at least. People are trying; I think Sarvam had something on-device. But 90% of the time we should go for…

Audience member 1

Connecting with the cloud or the server?

Jigar Halani

Yes.

Audience member 1

Even if it’s a local India hosted server?

Jigar Halani

That’s correct.

Audience member 2

Hello everyone. We have seen a lot of stalls in the expo showing AI-powered documentation and diagnosis. I am a dentist, currently pursuing an MBA in analytics, and I am curious to what extent these Indian AI tools rely on Indian data rather than global datasets.

Dr. Rajendra Pratap Gupta

It depends on what they are claiming; that’s first. On the other side, I also represent the Mayo Clinic’s strategy in India. As Mayo Clinic Platform, we are opening up some of our datasets in partnership, and also collaborating with hospitals here to leverage each other’s anonymized datasets. But the important point to note is that the culture of data is missing. We still have to build the culture of data to have AI systems based on the Indian population; I think that is still far away. With ABDM, as my neighbour here can confirm, we have 860 million ABHA IDs, but if you check the number of records on ABDM, they are not where we want them to be.

So I think we’re still not there; if someone makes a claim, be careful. Thank you.

Deepak Tuli

That’s great. We have talked about what we really like and what has changed fundamentally. But do you also think there are still a few areas where we are lagging behind as a country, in health, where we should already have arrived? Or do you think we are on the right path? And if we are, what do you think would be the great outcomes in the next year?

Dr. Rajendra Pratap Gupta

My answer is very frank, even at the risk of sounding blunt. See, the issue is not the usefulness of the technology or the use case; it’s about ethics and doing the right things. Most people are not using these tools, and not because of the UX, the UI, the technology, or the outcomes; everyone knows that. How many doctors would actually want to disclose what they charge for a prescription, how many prescriptions they write, and why they write three antibiotics for one case? It is about regulating that unethical part; the day we are able to crack it, you’ll see mass adoption. The challenge lies in medical practices and medical ethics, not in the solutions per se. Otherwise, we would be the most adopted nation in terms of digital technologies.

Deepak Tuli

That’s great. Last night we were having this conversation about China, and I was surprised to hear that, in real time, when a physician is writing a prescription, the data goes back, errors are returned, and doctors get flagged if they keep repeating them; that’s one way of controlling what you just described. And think of us: in the metros people are literate, but think of people in tier-two and tier-three cities being given three antibiotics at the same time. I have seen chemists in Bombay doing the same. So it’s an issue of practice: good medical practices, good pharmacy practices, good prescription practices to follow. I mean, you could have given a cold and cough syrup; that would have made him money too.

Nikhil Dhongari

Deepak, you had a point, and I just want to add to it; she also asked how many models are trained on Indian data, and you said that is where we are lagging behind. So I want to say that behavioural change is very important. We have solutions: with CDAC, we even gave e-Sushrut, which is almost free to small hospitals, and all the government hospitals, including AIIMS, have the HMIS solution where they can create longitudinal records. But some of the doctors are not ready to do it, because they say they are very much accustomed to writing on paper only. So they are still doing that, and we are accepting it.

That is where we are losing context data from the major public hospitals, and where we need a tough stance. I am now working at the National Health Authority, but before that I was in the Railways. The Railways have totally stopped physical prescriptions; they took a decision that there would be no more physical prescriptions, and now everything is online, even lab records, everything integrated. They took one decision and stuck to it. So we need some tough decisions, and we also need behavioural change, where we go for creating longitudinal records. Only then can we give context data to the Indian startups, so that our models can be deployed and trained, and then we can really get there.

Deepak Tuli

Thank you very much; you have been a great panel, and thank you for all your insights. I am sorry that, in the interest of time, we will have to wrap up. But before we close this session, sincere gratitude and thanks to all our panelists. I request Deepak to present a memento on our behalf. Thank you.

Related Resources
Knowledge base sources related to the discussion topics (14)
Factual Notes
Claims verified against the Diplo knowledge base (3)
Additional Context (medium)

“An attempt to build a “closed‑loop” system similar to Google’s search failed because the engine could not return results in native Indian languages.”

The knowledge base notes that indigenous-language technology barriers are largely structural, political and ethical, and that while the technology exists it cannot be delivered effectively due to platform restrictions, which aligns with the reported difficulty in returning results in native Indian languages [S99].

Additional Context (high)

“Health‑care tolerates virtually no variance in safety or privacy, demanding extensive supervision for any AI deployment.”

Privacy concerns around AI-enhanced functionalities are highlighted as intensifying, and broader discussions stress the need for careful, evidence-based engagement with AI to manage safety and privacy risks, providing additional context to the claim about strict supervision requirements [S106] and the mismatch between public fear and measured impact of AI [S57].

Additional Context (low)

“There is a gap between enthusiasm and reality: “desire does meet reality” only sporadically.”

Analyses of AI adoption note a mismatch between public expectations (or enthusiasm) and the measured impact of AI technologies, underscoring that optimism often outpaces practical outcomes, which adds nuance to the reported gap between desire and reality [S57].

External Sources (109)
S1
AI for Bharat’s Health_ Addressing a Billion Clinical Realities — – Dr. Rajendra Pratap Gupta- Nikhil Dhongari
S2
AI for Bharat’s Health_ Addressing a Billion Clinical Realities — – Abhay Soi- Padmini Vishwanath – Vikalp Sahni- Abhay Soi Despite both being from the same organization (Eka), they sh…
S3
AI for Bharat’s Health_ Addressing a Billion Clinical Realities — – Abhay Soi- Dr. Rajendra Pratap Gupta- Padmini Vishwanath – Abhay Soi- Jigar Halani- Padmini Vishwanath
S4
AI for Bharat’s Health_ Addressing a Billion Clinical Realities — -Dr. Rajendra Pratap Gupta- Advisor to Health Minister, instrumental in defining ABDM white paper, involved in National …
S5
DC-DH: Health Digital Health & Selfcare – Can we replace Doctors in PHCs — – Rajendra Pratap Gupta: Chairman of the board for HIMSS India, moderator of the discussion Rajendra Pratap Gupta: Fan…
S6
Conversational AI in low income & resource settings | IGF 2023 — Dr. Rajendra Pratap Gupta, Health Parliament – Private Sector – India Prof. Rajendra Pratap Gupta, Health Parliament …
S7
AI Transformation in Practice_ Insights from India’s Consulting Leaders — -Audience member 1- Founder of Corral Inc -Audience member 6- Role/title not mentioned
S8
Day 0 Event #82 Inclusive multistakeholderism: tackling Internet shutdowns — – Nikki Muscati: Audience member who asked questions (role/affiliation not specified)
S9
Building the Workforce_ AI for Viksit Bharat 2047 — -Audience- Role/Title: Professor Charu from Indian Institute of Public Administration (one identified audience member), …
S11
From KW to GW Scaling the Infrastructure of the Global AI Economy — – Ankush Sabharwal- Jigar Halani- Nitin Gupta – Peter Panfil- Jigar Halani- Sanjay Kumar Sainani
S12
AI for Bharat’s Health_ Addressing a Billion Clinical Realities — -Deepak Tuli- Panel discussion moderator
S13
Nepal Engagement Session — -Ms. Deepika: Mentioned at the end to felicitate Mr. Alok, specific role or title not mentioned
S14
Global Perspectives on Openness and Trust in AI — -Audience member 2- Part of a group from Germany
S15
Day 0 Event #82 Inclusive multistakeholderism: tackling Internet shutdowns — – Nikki Muscati: Audience member who asked questions (role/affiliation not specified)
S16
The Arc of Progress in the 21st Century / DAVOS 2025 — – Paula Escobar Chavez: Audience member asking a question (specific role/title not mentioned)
S18
Transforming Health Systems with AI From Lab to Last Mile — – Vikalp Sahni- Richard Rukwata
S19
AI for Bharat’s Health_ Addressing a Billion Clinical Realities — – Tanvi Lall- Padmini Vishwanath Tanvi Lall argues for different transformation approaches for different settings (priv…
S20
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Giordano Albertazzi — -Announcer: Role/Title: Event announcer/moderator; Area of expertise: Not mentioned
S21
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Takahito Tokita Fujitsu — -Announcer: Role as event announcer/host, expertise/title not mentioned
S22
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Cristiano Amon — -Announcer: Role/Title: Event announcer/moderator; Areas of expertise: Not mentioned
S23
CEOs unprepared for impact of generative AI, reveals Deloitte survey — A global survey conducted by Deloitte’s AI Institute reveals that top executives still need to prepare to handle the impac…
S24
Keynote by Naveen Tewari Founder &amp; CEO, inMobi India AI Impact Summit — “the third is is a very disproportionate rate of growth of economic prosperity because of all the factors that the level…
S25
Leaders’ Plenary | Global Vision for AI Impact and Governance- Afternoon Session — What AI can do. Having said that. On the investment side, India is just a wonderful area for us. We were one of the earl…
S26
https://dig.watch/event/india-ai-impact-summit-2026/ai-for-bharats-health_-addressing-a-billion-clinical-realities — But I think today it’s affecting our tasks. It’s affecting tasks of efficiency. You know, we’ve already started doing pr…
S27
Digital Health at the crossroads of human rights, AI governance, and e-trade (SouthCentre) — However, while reaping the benefits of digital health technology, it is crucial to address privacy and security concerns…
S28
WS #162 Overregulation: Balance Policy and Innovation in Technology — Tercova emphasizes that patient privacy, data protection, and minimizing bias in algorithms are non-negotiable aspects o…
S29
Rethinking learning: Hope, solutions, and wisdom with AI in the classroom — But this adaptation won’t happen without effort. It requires educators willing to experiment with new approaches even wh…
S30
Comprehensive Report: AI’s Impact on the Future of Work – Davos 2026 Panel Discussion — Ng emphasized that whilst efficiency gains from AI point solutions might yield modest improvements, transformative workf…
S31
AI innovations reshape food assistance in India — The UN World Food Programme’s (WFP) Artificial Intelligence Impact Summit in New Delhi showcased innovations transforming…
S32
AI and the future of digital global supply chains (UNCTAD) — In conclusion, AI has emerged as a powerful tool that can significantly impact trade logistics. It can optimize routes a…
S33
Scaling AI Beyond Pilots: A World Economic Forum Panel Discussion — – Amin Nasser- Julie Sweet – Julie Sweet- Amin Nasser Development | Economic Focus on outcomes and value creation rat…
S34
Panel Discussion AI in Healthcare India AI Impact Summit — If you think about certainly one of the biggest challenges in the U.S., India has this too, Sangeeta mentioned some of t…
S35
Comprehensive Report: China’s AI Plus Economy Initiative – A Strategic Discussion on Artificial Intelligence Development and Implementation — Gradual integration approach focusing on augmenting human capabilities rather than immediate replacement
S36
How nonprofits are using AI-based innovations to scale their impact — Very low disagreement level with high collaborative spirit. The few disagreements were primarily tactical rather than st…
S37
AI-Driven Enforcement_ Better Governance through Effective Compliance & Services — The symposium demonstrated remarkably high consensus among speakers on fundamental AI principles, implementation goals, …
S38
How to make AI governance fit for purpose? — Legal and regulatory | Development: The speed of AI development creates uncertainty and challenges that exceed current c…
S39
What is it about AI that we need to regulate? — Multiple sessions emphasized the importance of avoiding one-size-fits-all approaches. InMain Session 2, Mlindi Mashologu…
S40
AI-assisted multi-disease CT scans launched in Beijing hospital — Beijing United Family Hospital and Alibaba DAMO Academy have launched a joint effort to bring advanced AI screening into c…
S41
Harmonizing High-Tech: The role of AI standards as an implementation tool — Philippe Metzger:Thank you, Bilel. Maybe to be as succinct as possible, just would like to mention four areas, which I t…
S42
WS #288 An AI Policy Research Roadmap for Evidence-Based AI Policy — The level of disagreement is moderate but significant for implementation. While speakers share fundamental goals of resp…
S43
MedTech and AI Innovations in Public Health Systems — Artificial intelligence | Social and economic development: Clinical Decision Support & Care Coordination
S44
Shaping AI to ensure Respect for Human Rights and Democracy | IGF 2023 Day 0 Event #51 — Björn Berge:Thank you very much, Ambassador Schneider, and a very good afternoon to all of you. It’s really great to be …
S45
Keynote-Roy Jakobs — This comment introduces a systems-thinking perspective that acknowledges the complexity of AI implementation beyond just…
S46
Conversation: 02 — This reframes trust from a soft concept to a foundational technical requirement, positioning it as critical infrastructu…
S47
Keynote-Martin Schroeter — “Reliability, governance, and human integration are not features, they are prerequisites”[14]. “The work ahead is hard, …
S48
Unleashing Digital Trade and Investment for Sustainable Development (UN ESCAP) — It is essential to have the proper infrastructure, regulations, and policy dialogue between the private and public secto…
S49
Artificial Intelligence & Emerging Tech — According to the information provided, Latin America is predicted to become an ageing society by 2053, with the number o…
S50
Ethics and AI | Part 6 — Even if the Act itself does not make direct reference to “ethics”, it is closely tied to the broader context of ethical …
S51
The fading of human agency in automated systems — To address concerns about automation, policy and governance discussions often invoke the concept of ‘human-in-the-loop’ …
S52
WS #162 Overregulation: Balance Policy and Innovation in Technology — Tercova emphasizes that patient privacy, data protection, and minimizing bias in algorithms are non-negotiable aspects o…
S53
WS #283 AI Agents: Ensuring Responsible Deployment — User control and human oversight are essential safeguards, particularly for high-impact decisions that are difficult to …
S54
How to make AI governance fit for purpose? — Legal and regulatory | Development: The speed of AI development creates uncertainty and challenges that exceed current c…
S55
From Human Potential to Global Impact_ Qualcomm’s AI for All Workshop — Moderate disagreement level with significant implications for AI deployment strategies. The disagreements reflect differ…
S56
Why science metters in global AI governance — The discussion revealed surprisingly few fundamental disagreements among speakers, with most conflicts arising around im…
S57
The mismatch between public fear of AI and its measured impact — In medicine and science, AI has shown promise in pattern recognition and data analysis. Deployment is cautious, as clinic…
S58
Driving Indias AI Future Growth Innovation and Impact — “And the regulations have to be agile because the technology is moving at such a fast pace that you cannot anchor the re…
S59
Keynote-Roy Jakobs — This comment introduces a systems-thinking perspective that acknowledges the complexity of AI implementation beyond just…
S60
Policymaker’s Guide to International AI Safety Coordination — This comment crystallizes the fundamental tension at the heart of AI governance – the misalignment between market incent…
S61
AI for Bharat’s Health_ Addressing a Billion Clinical Realities — Jigar highlights that trust encompasses accuracy, data privacy, and transparent model governance. Trust, safety and pat…
S62
WS #288 An AI Policy Research Roadmap for Evidence-Based AI Policy — Alex Moltzau: Yes, thank you so much. My name is Alex Maltzau. And I work as a second national expert in the European AI…
S63
People trust doctors more than AI — New research shows that most people remain cautious about using ChatGPT for diagnoses but view AI more favourably when it …
S64
Scaling Trusted AI_ How France and India Are Building Industrial & Innovation Bridges — “Trustability because we need to trace the systems, the models, the data that we use for AI.”[49]. “Verifiability is the…
S65
The Intelligent Coworker: AI’s Evolution in the Workplace — Christoph Schweizer advocated for new measurement approaches, emphasising “adoption and usage,” “employee satisfaction s…
S66
Open Forum #64 Local AI Policy Pathways for Sustainable Digital Economies — Sarah Nicole: Please share your thoughts with us on this issue. Yeah, thank you very much for the invitation to give thi…
S67
Building the Future STPI Global Partnerships & Startup Felicitation 2026 — This challenges the metrics-driven approach to measuring startup ecosystem success, emphasizing qualitative ecosystem de…
S68
Shaping the Future AI Strategies for Jobs and Economic Development — -Infrastructure and Energy Challenges: Significant discussion around the massive infrastructure requirements for AI depl…
S69
Trusted Connections_ Ethical AI in Telecom & 6G Networks — Decision-making should be pushed to network level for security and optimization, with limited cases going to edge or reg…
S70
HETEROGENEOUS COMPUTE FOR DEMOCRATIZING ACCESS TO AI — So I’ll keep it brief. I think what I’m looking forward to with all the conversations here and in other parts of the wor…
S71
Democratizing AI Building Trustworthy Systems for Everyone — I think open source is going to be in my mind a critical aspect of it. You’ll have to see how far open source movement t…
S72
Safeguarding Children with Responsible AI — Cultural, contextual, and inclusion considerations
S73
Press Conference: Closing the AI Access Gap — Data strategies are another critical aspect in the AI era. Countries need robust data strategies that include sharing fr…
S74
AI as critical infrastructure for continuity in public services — Human factors such as fear of replacement and communication style are major barriers to AI adoption. Simple, clear messa…
S75
A Digital Future for All (morning sessions) — Aerts argues that AI and digital tools have the potential to significantly improve healthcare outcomes and reduce health…
S76
Scaling AI Beyond Pilots: A World Economic Forum Panel Discussion — Development | Infrastructure: Examples include tumor board preparation, holistic patient data aggregation, post-discharg…
S77
AI for Bharat’s Health_ Addressing a Billion Clinical Realities — I think dramatically. Adoption of, and it’s not because hospitals or health care providers desire it, I think it’s becom…
S78
https://dig.watch/event/india-ai-impact-summit-2026/ai-for-bharats-health_-addressing-a-billion-clinical-realities — I think dramatically. Adoption of, and it’s not because hospitals or health care providers desire it, I think it’s becom…
S79
AI could save billions but healthcare adoption is slow — AI is being hailed as a transformative force in healthcare, with the potential to reduce costs and improve outcomes dramati…
S80
The mismatch between public fear of AI and its measured impact — In medicine and science, AI has shown promise in pattern recognition and data analysis. Deployment is cautious, as clinic…
S81
Panel Discussion AI in Healthcare India AI Impact Summit — Chris Ciauri provided concrete examples of AI applications already showing results. Banner Health’s use of Claude to sum…
S82
MedTech and AI Innovations in Public Health Systems — The discussion revealed a critical challenge: most AI solutions are looking for problems rather than addressing specific…
S83
Shaping AI to ensure Respect for Human Rights and Democracy | IGF 2023 Day 0 Event #51 — Björn Berge:Thank you very much, Ambassador Schneider, and a very good afternoon to all of you. It’s really great to be …
S84
Keynote-Martin Schroeter — This comment identifies trust as the fundamental prerequisite for AI adoption, synthesizing the technical, operational, …
S85
Keynote-Roy Jakobs — This comment introduces a systems-thinking perspective that acknowledges the complexity of AI implementation beyond just…
S86
Conversation: 02 — This reframes trust from a soft concept to a foundational technical requirement, positioning it as critical infrastructu…
S87
Collaborative AI Network – Strengthening Skills Research and Innovation — This comment shifted the discussion from technical implementation to governance and trust frameworks. It influenced othe…
S88
Scaling AI for Billions_ Building Digital Public Infrastructure — Both government and private sector initiatives are developing these capabilities, with emphasis on making frameworks acc…
S89
Scaling AI Beyond Pilots: A World Economic Forum Panel Discussion — Roy Jakobs envisions a future where AI agents handle complete healthcare tasks, from scheduling and data aggregation to …
S90
Artificial Intelligence & Emerging Tech — According to the information provided, Latin America is predicted to become an ageing society by 2053, with the number o…
S91
Multistakeholder Dialogue on National Digital Health Transformation — Sean Blaschke: Thanks, Leah. I’m going to try to apply the same architecture framework to legislation, policy, complia…
S92
Contents — To take one example where there is growing energy and innovation: digital identity can transform healthcare systems. As …
S93
Assessing the Promise and Efficacy of Digital Health Tool | IGF 2023 WS #83 — Ravindra Gupta:Second, I don’t think that technology at any time failed. Actually, it proved that it was ready. So wheth…
S94
WSIS Action Line C7: E-health – Fostering foundations for digital health transformation in the age of AI — ## Next Steps and Future Initiatives 1. **Electronic Health Records**: Comprehensive patient data management systems A…
S95
Acknowledgements — Militaries first developed UAVs early in the 20 th century. However, various technological obstacles constrained what …
S96
Informal Stakeholder Consultation Session — Sorry, I mean, there were some technical issues before. Well, thank you very much, Ambassador, for this opportunity for …
S97
Accessible e-learning experience for PWDs-Best Practices | IGF 2023 WS #350 — Gonola:Good morning, ladies and gentlemen, and for those online, good morning, good afternoon and good evening. This ses…
S98
Technology in a Turbulent World — He recounted an instance where the board too small and they didn’t have the level of experience needed were not addresse…
S99
Open Forum #73 Indigenous Peoples Languages in a Digital Age — Indigenous language technology barriers are primarily structural, political, and ethical rather than technical – the tec…
S100
Apple’s quiet race to replace Google Search with its own AI — Apple occasionally seems out of step with public sentiment, particularly when it comes to AI. A revealing example, highl…
S101
RESEARCH PAPERS — The ability to seek out and identify relevant information on the internet has been a crucial innovation. It has relied l…
S102
29, filed Jan. 22, 2010, at 9-10. — – Better treatment evaluations. Therapeutic drugs are not tested across all relevant populations. For example, pharmaceu…
S103
Connecting open code with policymakers to development | IGF 2023 WS #500 — Henri Verdier:It totally takes a point. And that’s interesting, because if you do observe the story of governments, they…
S104
AI & Diplomacy: Managing New Frontiers – ADF 2024 — The discussion concluded that although regulatory frameworks recognise the importance of these issues, the gap between i…
S105
How African knowledge and wisdom can inspire the development and governance of AI — There is a palpable desire to bridge the gap between theoretical discussion and on-the-ground realities. This demand for…
S106
Privacy concerns intensify as Big Tech announce new AI-enhanced functionalities — Apple, Microsoft, and Google are spearheading a technological revolution with their vision of AI smartphones and computers…
S107
https://dig.watch/event/india-ai-impact-summit-2026/building-climate-resilient-systems-with-ai — It looks like the slides are not there. There’s a certain, turning on the screen. There it goes. I will say that while w…
S108
Democrats shift stance on GENIUS Act — Senators voted 66-32 to advance the GENIUS Act, a bill aimed at regulating stablecoins. Sixteen Democrats joined Republic…
S109
Main Session | Policy Network on Artificial Intelligence — Brando Benifei: Yes. So obviously there have been around four important resolutions this year regarding AI. One was pr…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Abhay Soi
8 arguments · 149 words per minute · 2428 words · 975 seconds
Argument 1
AI as a CEO KPI driven by consumer behavior, ESG, and efficiency
EXPLANATION
Abhay explains that AI adoption is being driven by changing consumer expectations, the need to improve ESG scores, and the pursuit of operational efficiency. He sees AI as a low‑hanging fruit that can enhance hospital performance without being a full ecosystem overhaul.
EVIDENCE
He describes how patients now search for doctors on various platforms, influencing hospitals to improve digital collateral, and notes that ESG considerations push institutions to adopt AI tools that can answer strategic questions. He also points to specific efficiency gains such as predictive bed availability and reduced clinician time spent on data entry as examples of low-hanging fruits [67-82].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Consumer-driven adoption and efficiency gains are highlighted in [S24], while Deloitte’s survey shows CEOs need to embed AI in performance metrics [S23]; workflow efficiency and KPI alignment are discussed in [S30].
MAJOR DISCUSSION POINT
AI as a CEO KPI driven by consumer behavior, ESG, and efficiency
Argument 2
Creation of a 15‑year patient data lake and real‑time EMR to enable AI
EXPLANATION
Abhay outlines that Max has built a comprehensive data lake aggregating fifteen years of patient records, which is continuously updated in real time. This foundation supplies the high‑quality data needed for AI applications across the hospital system.
EVIDENCE
He states that the organization created a common-size data lake covering all patients over the last fifteen years and that it operates on a real-time basis today, forming the backbone for AI initiatives [14-15].
MAJOR DISCUSSION POINT
Creation of a 15‑year patient data lake and real‑time EMR to enable AI
AGREED WITH
Dr. Rajendra Pratap Gupta, Nikhil Dhongari
Argument 3
AI must operate under strict supervision to ensure patient safety and data privacy
EXPLANATION
Abhay emphasizes that, unlike less critical domains, healthcare AI requires rigorous oversight because errors can directly affect patient outcomes and privacy. Continuous supervision is necessary until AI tools are proven safe and reliable.
EVIDENCE
He notes that healthcare has very little tolerance for deviation, requiring extensive supervision to safeguard patient safety and data privacy, and that AI must be closely monitored for years to come [52-59].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Privacy and security requirements for digital health are emphasized in [S27]; the need to balance regulation with innovation is outlined in [S28]; broader AI governance considerations are covered in [S37].
MAJOR DISCUSSION POINT
AI must operate under strict supervision to ensure patient safety and data privacy
AGREED WITH
Dr. Rajendra Pratap Gupta
DISAGREED WITH
Dr. Rajendra Pratap Gupta
Argument 4
Institutional culture requires circumspection, learning curves, and redesign of work processes for AI integration
EXPLANATION
Abhay argues that successful AI adoption depends on a cautious institutional mindset, extensive learning, and the re‑engineering of existing clinical workflows. He stresses that hospitals must not rush adoption without understanding the impact on processes.
EVIDENCE
He describes the need for circumspection, a steep learning curve, and the necessity to redesign many work processes before AI can be rolled out, citing internal discussions with technology teams and the importance of getting it right the first time [90-98].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The difficulty of workflow redesign and the need for careful change management are discussed in [S30]; a gradual, learning-oriented integration approach is described in [S35]; the “make haste slow” principle reinforces cautious adoption in [S38].
MAJOR DISCUSSION POINT
Institutional culture requires circumspection, learning curves, and redesign of work processes for AI integration
AGREED WITH
Jigar Halani, Tanvi Lall
Argument 5
Demographic dividend makes AI essential for predictive health and scaling care delivery
EXPLANATION
Abhay points out that India’s youthful population will age over the next decade, creating a massive demand for healthcare that cannot be met by existing infrastructure. AI is presented as a necessary tool for predictive health, remote care, and scaling doctor expertise.
EVIDENCE
He explains that the average age will rise to European levels in 15 years, leading to insufficient hospitals and doctors, and argues that AI-enabled predictive health and remote care are essential to meet future needs [128-144].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
India’s demographic dividend and its impact on future health demand are detailed in [S1]; the productivity boost from consumer-intelligence-driven AI is noted in [S24].
MAJOR DISCUSSION POINT
Demographic dividend makes AI essential for predictive health and scaling care delivery
Argument 6
AI enhances operational efficiency by providing predictive analytics for bed availability and safety monitoring.
EXPLANATION
Abhay describes how AI tools are used to forecast vacant beds and support safety measures, helping hospitals manage resources more effectively and improve patient flow.
EVIDENCE
He notes that the organization has started doing predictive analysis of vacant beds and is working on safety measures [22-23].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Predictive analysis of vacant beds is reported in [S26]; reductions in administrative burden that improve efficiency are highlighted in [S34].
MAJOR DISCUSSION POINT
AI‑driven predictive analytics improve hospital operational efficiency
Argument 7
AI reduces clinicians’ administrative burden by automating data capture through digital forms, allowing more time for patient care.
EXPLANATION
He explains that clinical data that previously required manual entry is now collected via app‑based forms, decreasing the time clinicians spend gathering histories and increasing the value they can provide.
EVIDENCE
Abhay states that data collection is now done through forms in their apps, which collates information and lets clinicians spend less time gathering history and more on value-adding tasks [24-27].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The shift from paperwork to digital forms that frees clinician time is described in [S34]; task-level AI improvements that streamline workflows are covered in [S33].
MAJOR DISCUSSION POINT
Automation of data collection frees clinician time for higher‑value care
Argument 8
Early AI adoption focuses on improving specific tasks rather than full institutional integration.
EXPLANATION
He points out that the initial impact of AI is seen at the task level—enhancing efficiency and safety—before it becomes embedded in the broader institutional ecosystem.
EVIDENCE
Abhay remarks that the early days of AI affect tasks of efficiency and safety, and that full ecosystem adoption is still forthcoming [19-21].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Early task-level wins and the need for broader ecosystem integration are discussed in [S33]; workflow redesign challenges that precede full adoption are noted in [S30].
MAJOR DISCUSSION POINT
Task‑level AI improvements precede comprehensive institutional adoption
Vikalp Sahni
1 argument · 138 words per minute · 877 words · 378 seconds
Argument 1
Need to balance rapid AI adoption with policy and regulatory challenges
EXPLANATION
Vikalp raises concerns that while AI adoption is accelerating, it must be aligned with emerging policies, standards, and regulatory frameworks such as NABH and ABDM. He asks how hospitals can navigate these constraints without stalling innovation.
EVIDENCE
He questions whether the wave of AI adoption is being accompanied by policy and regulation, specifically mentioning NABH, ABDM, and the need for faster adoption while respecting regulatory requirements [30-33].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The tension between fast AI rollout and over-regulation is explored in [S28]; broader AI regulatory frameworks are examined in [S39]; technical and policy challenges for large-scale model training are highlighted in [S18].
MAJOR DISCUSSION POINT
Need to balance rapid AI adoption with policy and regulatory challenges
AGREED WITH
Dr. Rajendra Pratap Gupta, Nikhil Dhongari
DISAGREED WITH
Abhay Soi
Deepak Tuli
1 argument · 141 words per minute · 947 words · 402 seconds
Argument 1
AI should be a core KRA for hospital leadership, not just a hype project
EXPLANATION
Deepak asks whether AI adoption has become a priority KPI for CEOs, similar to earlier digitisation goals like online billing and accreditation. He seeks confirmation that AI is now embedded in leadership performance metrics.
EVIDENCE
He inquires if AI adoption is now a key result area for hospital CEOs, comparing it to past priorities such as online billing and JCIA compliance [60-66].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Deloitte’s survey shows CEOs are still unprepared for AI’s impact, underscoring the need for KPI integration [S23]; productivity gains from AI that justify leadership focus are noted in [S24].
MAJOR DISCUSSION POINT
AI should be a core KRA for hospital leadership, not just a hype project
Dr. Rajendra Pratap Gupta
4 arguments · 192 words per minute · 849 words · 264 seconds
Argument 1
ABDM’s digital ID ecosystem provides interoperable backbone for nationwide health data
EXPLANATION
Dr. Gupta explains that the ABDM initiative has generated over 860 million ABHA IDs, creating a unified digital identity that enables interoperable health records across the country. This infrastructure is the foundation for nationwide data exchange.
EVIDENCE
He cites the creation of 860 million ABHA IDs and the establishment of a digital infrastructure that can be leveraged to empower users and eliminate redundant schemes [184-186].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The creation of 860 million ABHA IDs and the interoperable digital health backbone are documented in [S1].
MAJOR DISCUSSION POINT
ABDM’s digital ID ecosystem provides interoperable backbone for nationwide health data
AGREED WITH
Abhay Soi, Nikhil Dhongari
Argument 2
Ethical prescribing practices and regulation are the main barriers to wider AI uptake
EXPLANATION
Dr. Gupta argues that the biggest obstacle to AI adoption is not technology but unethical medical practices, especially irrational prescribing. He calls for stronger regulation and ethical standards to enable broader AI use.
EVIDENCE
He notes that unethical prescribing, such as over-use of antibiotics, and lack of regulation are the primary barriers, emphasizing that addressing medical ethics is essential for mass AI adoption [407-414].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Ethical and regulatory concerns that can hinder AI deployment are discussed in [S28]; broader AI governance and regulatory considerations are covered in [S39].
MAJOR DISCUSSION POINT
Ethical prescribing practices and regulation are the main barriers to wider AI uptake
AGREED WITH
Abhay Soi
DISAGREED WITH
Abhay Soi
Argument 3
Digital health timelines are accelerating; three‑year horizons are now realistic
EXPLANATION
Dr. Gupta observes that policy discussions have shifted from decade‑long horizons to three‑year plans, indicating a rapid acceleration in digital health implementation and expectations.
EVIDENCE
He points out that the national health policy now references three-year timelines rather than a decade, reflecting faster progress in digital health initiatives [312-314].
MAJOR DISCUSSION POINT
Digital health timelines are accelerating; three‑year horizons are now realistic
AGREED WITH
Vikalp Sahni, Nikhil Dhongari
Argument 4
A strong data‑sharing culture is essential for effective AI deployment, but current practices lack sufficient data openness.
EXPLANATION
Gupta emphasizes that without a culture of data sharing, the vast number of ABHA IDs cannot be leveraged for AI, limiting the availability of high‑quality Indian health records needed for model training.
EVIDENCE
He observes that the culture of data is missing, noting that despite 860 million IDs, the records are not yet usable for AI development [398-401].
MAJOR DISCUSSION POINT
Data culture and openness are prerequisites for AI success in health
Nikhil Dhongari
4 arguments, 151 words per minute, 707 words, 279 seconds
Argument 1
Federated architecture enables Indian‑specific AI models and reduces bias
EXPLANATION
Nikhil describes how ABDM’s federated architecture allows AI models to be trained on Indian data, ensuring relevance to local clinical realities and minimizing bias that can arise from foreign datasets.
EVIDENCE
He explains that the federated architecture lets Indian startups develop models on local data, avoiding bias and ensuring contextual relevance for the Indian population [203-210].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
India’s sovereign AI model and federated approach that keep training data local are described in [S25].
MAJOR DISCUSSION POINT
Federated architecture enables Indian‑specific AI models and reduces bias
AGREED WITH
Abhay Soi, Dr. Rajendra Pratap Gupta
Argument 2
Behavioral change, mandatory data capture, and tough policy decisions are needed to feed AI pipelines
EXPLANATION
Nikhil stresses that without a shift in clinician behavior toward digital data capture and decisive policy actions (e.g., moving to online prescriptions), AI pipelines will lack the necessary high‑quality data for training and deployment.
EVIDENCE
He cites examples such as the transition to online prescriptions in the Railways, the need for mandatory language-record capture, and the resistance of some doctors to abandon paper, highlighting the importance of behavioral change and strong policy mandates [416-426].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Mandatory digital data capture and policy levers such as online prescriptions are highlighted in [S18]; the need for strong policy to support AI pipelines is emphasized in [S28]; workflow redesign requirements are discussed in [S30].
MAJOR DISCUSSION POINT
Behavioral change, mandatory data capture, and tough policy decisions are needed to feed AI pipelines
AGREED WITH
Vikalp Sahni, Dr. Rajendra Pratap Gupta
Argument 3
Scaling AI requires robust Indian data, public‑private collaboration, and continuous model refinement
EXPLANATION
Nikhil argues that to scale AI across India, a strong foundation of Indian health data, collaboration between public and private sectors, and ongoing model improvement are essential.
EVIDENCE
He references the federated architecture and the need for Indian-specific models, emphasizing that robust local data and public-private partnerships are critical for scaling AI solutions [203-210].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Scaling AI beyond pilots and the importance of data sharing are covered in [S33]; investment in Indian AI ecosystems and public-private partnerships are noted in [S25]; the need for a data-sharing culture is mentioned in [S1].
MAJOR DISCUSSION POINT
Scaling AI requires robust Indian data, public‑private collaboration, and continuous model refinement
AGREED WITH
Abhay Soi, Vikalp Sahni, Dr. Rajendra Pratap Gupta
Argument 4
Developing Indian‑centric models avoids bias and aligns AI with local clinical realities
EXPLANATION
Nikhil highlights that AI models built on Indian clinical data avoid the bias inherent in models trained on foreign datasets and better reflect the nuances of local patient populations.
EVIDENCE
He notes that Indian models, trained on local longitudinal records and language data, reduce bias and are more suitable for the Indian clinical context [207-210].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The sovereign AI model that trains on Indian data to avoid bias is discussed in [S25]; the relevance of locally-trained models for Indian clinical contexts is reinforced in [S24].
MAJOR DISCUSSION POINT
Developing Indian‑centric models avoids bias and aligns AI with local clinical realities
Jigar Halani
4 arguments, 190 words per minute, 1209 words, 380 seconds
Argument 1
Cloud vs. edge deployment decisions affect cost, privacy, and latency for AI services
EXPLANATION
Jigar explains that the choice between cloud and edge computing depends on the specific use case, with edge being necessary for remote, low‑connectivity scenarios, while cloud offers scalability but raises cost and privacy considerations.
EVIDENCE
He states that edge deployment is required for small, remote use cases, whereas cloud is generally used otherwise, and mentions the cost and data-privacy implications of hosting models in India versus abroad [365-381].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Privacy, data-sovereignty, and cost considerations for cloud vs. edge are examined in [S28]; the “make haste slow” principle warns of rushed deployment choices in [S38]; cost-impact of deployment architectures is noted in [S30].
MAJOR DISCUSSION POINT
Cloud vs. edge deployment decisions affect cost, privacy, and latency for AI services
Argument 2
Trust is built on model accuracy, contextual relevance, and continuous citizen feedback
EXPLANATION
Jigar argues that trust in AI stems from delivering accurate results, being contextually appropriate for Indian users, and incorporating feedback loops from citizens and clinicians to continuously improve models.
EVIDENCE
He discusses trust as accurate outcomes, contextual relevance, and the need for citizen feedback, illustrating with examples of second-opinion workflows and the importance of feedback for model refinement [219-233].
MAJOR DISCUSSION POINT
Trust is built on model accuracy, contextual relevance, and continuous citizen feedback
DISAGREED WITH
Vikalp Sahni
Argument 3
A shift in mindset—from skepticism to acceptance—is critical for AI adoption across staff
EXPLANATION
Jigar notes that moving from doubt to belief among healthcare staff is essential for AI uptake, emphasizing that the change is cultural rather than purely technological.
EVIDENCE
He observes a growing belief among professionals that the time for AI has arrived, describing the transition from skepticism to acceptance as a mindset shift [333-337].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Gradual cultural adoption and the need for mindset change are described in [S35]; workflow redesign that includes staff buy-in is discussed in [S30].
MAJOR DISCUSSION POINT
A shift in mindset—from skepticism to acceptance—is critical for AI adoption across staff
AGREED WITH
Abhay Soi, Tanvi Lall
Argument 4
Voice translation across regional languages is a key lever for nationwide AI accessibility
EXPLANATION
Jigar highlights that converting speech from regional languages into a common language (e.g., Hindi) can dramatically improve access to AI services across diverse linguistic regions in India.
EVIDENCE
He gives the example of translating a Tamil doctor’s speech into Hindi for use in Delhi and Gujarat, illustrating how language conversion can solve longstanding accessibility problems [330-332].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Multilingual AI challenges and the need for language-agnostic models are highlighted in [S18]; consumer-intelligence-driven AI that addresses language diversity is noted in [S24].
MAJOR DISCUSSION POINT
Voice translation across regional languages is a key lever for nationwide AI accessibility
Audience member 2
1 argument, 121 words per minute, 52 words, 25 seconds
Argument 1
Concern that many Indian AI tools rely on global datasets rather than Indian patient data
EXPLANATION
The audience member questions the extent to which Indian AI solutions are trained on domestic health data versus imported global datasets, highlighting a potential gap in relevance and accuracy.
EVIDENCE
He asks how much Indian-based AI relies on Indian data as opposed to global data sets, seeking clarification on data provenance for Indian AI tools [390-392].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The push for sovereign, India-specific AI models that avoid reliance on foreign datasets is discussed in [S25]; the importance of local data relevance is emphasized in [S24].
MAJOR DISCUSSION POINT
Concern that many Indian AI tools rely on global datasets rather than Indian patient data
Padmini Vishwanath
4 arguments, 143 words per minute, 451 words, 188 seconds
Argument 1
AI evaluation should include qualitative dimensions such as empathy, dignity, and care
EXPLANATION
Padmini stresses that beyond quantitative metrics, AI systems should be assessed for their impact on empathy, dignity, and the human aspects of care, especially in sensitive areas like palliative care.
EVIDENCE
She notes a shift toward evaluating AI on qualitative outcomes such as empathy and dignity, citing a palliative-care pilot that examines how AI changes caregiver-patient dynamics [353-356].
MAJOR DISCUSSION POINT
AI evaluation should include qualitative dimensions such as empathy, dignity, and care
DISAGREED WITH
Abhay Soi, Tanvi Lall
Argument 2
Upcoming focus on qualitative outcomes, such as AI‑supported palliative care, will shape future deployments
EXPLANATION
Padmini foresees that future AI projects will increasingly prioritize qualitative impacts, using palliative‑care pilots as a model for integrating empathy and human connection into AI assessments.
EVIDENCE
She references the same palliative-care pilot, emphasizing the importance of qualitative dimensions like caregiver-patient interaction in future AI deployments [353-356].
MAJOR DISCUSSION POINT
Upcoming focus on qualitative outcomes, such as AI‑supported palliative care, will shape future deployments
Argument 3
AI frameworks should start with remote, low‑resource settings to ensure equity and trust
EXPLANATION
Padmini argues that AI readiness should be built first for the most remote and low‑resource health settings, developing frameworks that consider limited infrastructure, which in turn fosters trust and equity when later scaled up.
EVIDENCE
She describes developing readiness frameworks for remote settings, assessing frontline capabilities and device availability, which leads to higher provider trust and equity when scaled [320-324].
MAJOR DISCUSSION POINT
AI frameworks should start with remote, low‑resource settings to ensure equity and trust
Argument 4
Normative frameworks and guidance are required to ensure AI is deployed equitably and ethically across diverse health settings.
EXPLANATION
Padmini stresses the need to create standards and normative guidance that address equity, ethics, and contextual relevance, especially when adapting AI tools from high‑resource to low‑resource environments.
EVIDENCE
She mentions the work of developing norms and normative guidance to ensure AI is equitable and moves in the right direction, particularly for remote settings [316-319].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Ethical AI governance and the need for adaptive regulation are discussed in [S28] and [S39]; the principle of cautious, purpose-driven AI rollout is reinforced in [S38].
MAJOR DISCUSSION POINT
Establishing norms and guidelines is key to equitable AI deployment
Tanvi Lall
4 arguments, 190 words per minute, 810 words, 254 seconds
Argument 1
Education and stakeholder engagement are essential to build trust in AI solutions
EXPLANATION
Tanvi highlights that building trust requires systematic education, awareness‑raising, and continuous engagement with end‑users, not just one‑off demos, to embed AI into everyday workflows.
EVIDENCE
She explains that successful adoption involves educating stakeholders, providing ongoing support, and moving beyond pilot projects that are abandoned after a few months, emphasizing a transformation journey rather than a single demo [286-295].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Effective AI rollout requires staff training and stakeholder buy-in as part of workflow redesign [S30]; building trust through iterative scaling is highlighted in [S33].
MAJOR DISCUSSION POINT
Education and stakeholder engagement are essential to build trust in AI solutions
AGREED WITH
Abhay Soi, Jigar Halani
DISAGREED WITH
Padmini Vishwanath, Abhay Soi
Argument 2
Successful AI adoption demands education, awareness, and transformation beyond a one‑time demo
EXPLANATION
Tanvi reiterates that AI implementation must be treated as a long‑term transformation, requiring continuous education and stakeholder buy‑in rather than isolated pilot demonstrations.
EVIDENCE
She points out that many pilots succeed initially but are later ignored because they are not integrated into workflows, underscoring the need for sustained education and awareness [286-295].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Sustained education and continuous improvement are needed to move beyond pilot projects, as described in scaling AI beyond pilots [S33]; workflow redesign that embeds learning is noted in [S30].
MAJOR DISCUSSION POINT
Successful AI adoption demands education, awareness, and transformation beyond a one‑time demo
Argument 3
Voice‑first, multilingual AI solutions can bridge equity gaps in underserved populations
EXPLANATION
Tanvi argues that AI designed to be voice‑first and support multiple low‑resource languages can address inequities by reaching populations with limited literacy or digital access.
EVIDENCE
She notes that AI can be built for regional, low-resource languages and designed as voice-first, which helps build trust and bridges equity gaps for underserved beneficiaries [278-283].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Multilingual, voice-first AI designs for low-resource settings are discussed in [S18]; consumer-intelligence-driven AI that addresses language diversity supports equity in [S24].
MAJOR DISCUSSION POINT
Voice‑first, multilingual AI solutions can bridge equity gaps in underserved populations
Argument 4
AI‑ready data infrastructure and open data sharing are critical for scaling AI solutions and fostering collaboration among stakeholders.
EXPLANATION
Tanvi highlights that making high‑quality health data available to partners, such as Mosby, enables broader AI development and prevents data silos, which is essential for personalization and large‑scale impact.
EVIDENCE
She points out that institutions are providing data back to the community, citing the example of Mosby making statistical datasets publicly available, which supports AI personalization [351-353].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Open, AI-ready data ecosystems that enable scaling are highlighted in [S33]; the need for a data-sharing culture to support AI is mentioned in [S1].
MAJOR DISCUSSION POINT
Open, AI‑ready data ecosystems enable scalable and collaborative AI innovation
Audience member 1
1 argument, 186 words per minute, 135 words, 43 seconds
Argument 1
Edge deployment is necessary for low‑connectivity AI use cases, while cloud offers scalability but raises cost and data‑privacy concerns; hosting AI services within India can address data‑sovereignty issues.
EXPLANATION
The audience member questions whether AI models for voice and multilingual translation should run on edge devices or in the cloud, highlighting that remote scenarios need edge processing, whereas cloud provides broader capabilities but introduces higher costs and privacy risks. They also suggest that hosting the servers in India would help with data sovereignty.
EVIDENCE
The participant asks if voice-language solutions should be on-edge or cloud, mentions cost and privacy considerations, and wonders about hosting models on Indian servers to keep data local [361-382][365-381].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Privacy and data-sovereignty concerns for cloud deployments are examined in [S28]; the “make haste slow” principle advises careful architectural choices in [S38].
MAJOR DISCUSSION POINT
Deployment architecture (edge vs cloud) affects cost, privacy, and data sovereignty for AI services
Agreements
Agreement Points
AI is essential to meet future healthcare demand and improve operational efficiency
Speakers: Abhay Soi, Vikalp Sahni, Dr. Rajendra Pratap Gupta, Nikhil Dhongari
AI enhances operational efficiency by providing predictive analytics for bed availability and safety monitoring
AI must be a core KRA for hospital leadership, not just a hype project
Digital health timelines are accelerating; three‑year horizons are now realistic
Scaling AI requires robust Indian data, public‑private collaboration, and continuous model refinement
All speakers agree that AI adoption is becoming a strategic priority: it is needed to handle increasing health needs, improves efficiency (e.g., predictive bed management), and must be embedded in leadership goals, especially as demographic pressures rise and timelines shorten [128-144][60-66][312-314][203-210].
POLICY CONTEXT (KNOWLEDGE BASE)
This view aligns with discussions at the World Economic Forum highlighting AI as critical infrastructure for health service continuity and with research emphasizing AI’s potential to reduce health inequalities and improve outcomes [S75][S76].
Robust data infrastructure is the foundation for effective AI in health
Speakers: Abhay Soi, Dr. Rajendra Pratap Gupta, Nikhil Dhongari
Creation of a 15‑year patient data lake and real‑time EMR to enable AI
ABDM’s digital ID ecosystem provides interoperable backbone for nationwide health data
Federated architecture enables Indian‑specific AI models and reduces bias
Abhay describes a 15-year data lake feeding real-time EMR, Dr. Gupta highlights 860 million ABHA IDs forming an interoperable backbone, and Nikhil points to the federated architecture that lets Indian models be trained locally, showing consensus on the need for strong, shared data foundations [14-15][184-186][203-210].
POLICY CONTEXT (KNOWLEDGE BASE)
Emphasized in reports on massive infrastructure needs for AI, calling for robust data strategies, protection measures, and scalable compute resources to support health AI deployments [S68][S73][S52].
AI deployment must be governed by strict supervision, ethical oversight, and patient‑safety safeguards
Speakers: Abhay Soi, Dr. Rajendra Pratap Gupta
AI must operate under strict supervision to ensure patient safety and data privacy
Ethical prescribing practices and regulation are the main barriers to wider AI uptake
Both speakers stress that without rigorous oversight covering safety, privacy, and ethical prescribing, AI cannot be widely adopted, emphasizing the need for supervision and regulation [52-59][407-414].
Trust, cultural acceptance, and stakeholder education are critical for AI adoption
Speakers: Abhay Soi, Jigar Halani, Tanvi Lall
Institutional culture requires circumspection, learning curves, and redesign of work processes for AI integration
A shift in mindset—from skepticism to acceptance—is critical for AI adoption across staff
Education and stakeholder engagement are essential to build trust in AI solutions
Abhay notes the need for a cautious learning curve, Jigar highlights a mindset shift toward acceptance, and Tanvi underscores continuous education and stakeholder engagement as essential to build trust, indicating a shared view on cultural factors [90-98][333-337][286-295].
POLICY CONTEXT (KNOWLEDGE BASE)
Recognized as key barriers in healthcare AI adoption, where trust, clear communication, and cultural acceptance are essential for stakeholder uptake [S57][S74][S59].
Policy and regulatory frameworks must evolve in step with AI adoption to balance speed and safety
Speakers: Vikalp Sahni, Dr. Rajendra Pratap Gupta, Nikhil Dhongari
Need to balance rapid AI adoption with policy and regulatory challenges
Digital health timelines are accelerating; three‑year horizons are now realistic
Behavioral change, mandatory data capture, and tough policy decisions are needed to feed AI pipelines
Vikalp raises the need for policy alignment, Dr. Gupta notes faster policy cycles, and Nikhil stresses decisive policy actions and behavioral change, all agreeing that enabling environments must keep pace with AI rollout [30-33][312-314][416-426].
POLICY CONTEXT (KNOWLEDGE BASE)
Reflects the need for agile yet deliberate regulation, balancing rapid innovation with cautious governance as highlighted in multiple policy debates [S54][S58][S56][S60].
Similar Viewpoints
Both emphasize that a comprehensive, longitudinal data foundation—whether a centralized data lake or a federated architecture—is essential for building relevant AI models for India [14-15][203-210].
Speakers: Abhay Soi, Nikhil Dhongari
Creation of a 15‑year patient data lake and real‑time EMR to enable AI
Federated architecture enables Indian‑specific AI models and reduces bias
Both agree that ethical oversight and strong regulatory mechanisms are prerequisite for safe AI deployment in healthcare [52-59][407-414].
Speakers: Abhay Soi, Dr. Rajendra Pratap Gupta
AI must operate under strict supervision to ensure patient safety and data privacy
Ethical prescribing practices and regulation are the main barriers to wider AI uptake
Both highlight that building trust requires cultural change, education, and ongoing stakeholder engagement rather than one‑off pilots [333-337][286-295].
Speakers: Jigar Halani, Tanvi Lall
A shift in mindset—from skepticism to acceptance—is critical for AI adoption across staff
Education and stakeholder engagement are essential to build trust in AI solutions
Both recognize that policy cycles are speeding up and must be aligned with AI rollout to avoid bottlenecks [30-33][312-314].
Speakers: Vikalp Sahni, Dr. Rajendra Pratap Gupta
Need to balance rapid AI adoption with policy and regulatory challenges
Digital health timelines are accelerating; three‑year horizons are now realistic
Unexpected Consensus
Trust and equity must be built through both technical accuracy and qualitative human‑centred evaluation
Speakers: Jigar Halani, Padmini Vishwanath
Trust is built on model accuracy, contextual relevance, and continuous citizen feedback
AI evaluation should include qualitative dimensions such as empathy, dignity, and care
A technical leader (Jigar) and a WHO researcher (Padmini) converge on the idea that trust is not only about algorithmic performance but also about qualitative human outcomes, an alignment that bridges technical and policy/ethical domains [219-233][353-356].
POLICY CONTEXT (KNOWLEDGE BASE)
Consistent with EU’s human-centric AI principles and calls for verifiable, transparent models that ensure equity and trust through both quantitative performance and qualitative assessment [S50][S61][S64].
Overall Assessment

The panel shows strong convergence on several fronts: the necessity of AI for future health system efficiency, the centrality of robust data infrastructures, the imperative of ethical oversight and supervision, the pivotal role of trust and stakeholder education, and the need for policy frameworks that keep pace with technological change.

There is high consensus across clinical, technical, policy, and research perspectives, indicating a shared understanding that AI can only succeed if data, governance, trust, and regulatory environments are simultaneously advanced.

Differences
Different Viewpoints
Speed of AI adoption versus need for cautious, supervised rollout
Speakers: Vikalp Sahni, Abhay Soi
Need to balance rapid AI adoption with policy and regulatory challenges
AI must operate under strict supervision to ensure patient safety and data privacy
Vikalp asks whether AI adoption can be accelerated and questions if hospitals are “going crazy” on AI, implying a push for faster rollout [30-33]. Abhay counters that AI adoption is still early, marked by many failures and requires extensive supervision and careful implementation before scaling [34-39][52-59].
POLICY CONTEXT (KNOWLEDGE BASE)
Ongoing tension noted in governance discussions, emphasizing “make haste slowly” and the structural mismatch between market speed and policy deliberation [S54][S56][S58][S60].
Primary barrier to AI uptake: ethical prescribing practices versus technological readiness
Speakers: Dr. Rajendra Pratap Gupta, Abhay Soi
Ethical prescribing practices and regulation are the main barriers to wider AI uptake
AI must operate under strict supervision to ensure patient safety and data privacy
Dr. Gupta argues that unethical medical practices, especially irrational prescribing, are the biggest obstacle and that stronger regulation is needed for AI adoption [407-414]. Abhay focuses on technical challenges, failures, and the need for supervision of AI tools, without highlighting prescribing ethics as a primary barrier [52-59].
POLICY CONTEXT (KNOWLEDGE BASE)
Ethical considerations such as bias, privacy, and responsible prescribing are highlighted as non-negotiable constraints that can outweigh technical readiness in health AI deployment [S52][S50].
Evaluation focus: quantitative efficiency gains versus qualitative human‑centred outcomes
Speakers: Padmini Vishwanath, Abhay Soi, Tanvi Lall
AI evaluation should include qualitative dimensions such as empathy, dignity, and care
AI enhances operational efficiency by providing predictive analytics for bed availability and safety monitoring
Education and stakeholder engagement are essential to build trust in AI solutions
Padmini stresses the need to assess AI on empathy, dignity and care, especially in palliative-care pilots [353-356]. Abhay and Tanvi primarily discuss efficiency gains, predictive analytics, and workflow improvements, focusing on quantitative benefits [22-23][84-88][286-295].
POLICY CONTEXT (KNOWLEDGE BASE)
Debates on measurement approaches stress the importance of qualitative metrics like user satisfaction and human-centred impact alongside traditional efficiency metrics [S65][S64][S59].
Sources of trust in AI: doctor‑patient trust versus model accuracy and feedback loops
Speakers: Vikalp Sahni, Jigar Halani
Trust is the most important factor; innovation must be balanced with existing doctor‑patient trust
Trust is built on model accuracy, contextual relevance, and continuous citizen feedback
Vikalp highlights that patients trust doctors more than new technologies and questions how to balance this trust with AI solutions [106-112]. Jigar argues that trust comes from accurate results, contextual relevance, and feedback mechanisms, emphasizing a shift in mindset rather than existing trust structures [219-237][333-337].
POLICY CONTEXT (KNOWLEDGE BASE)
Survey evidence shows patients place higher trust in clinicians than AI, while trust in AI hinges on accuracy, transparency, and feedback mechanisms [S63][S61][S64].
Unexpected Differences
Qualitative versus quantitative metrics for AI success
Speakers: Padmini Vishwanath, Abhay Soi, Tanvi Lall
AI evaluation should include qualitative dimensions such as empathy, dignity, and care
AI enhances operational efficiency by providing predictive analytics for bed availability and safety monitoring
Education and stakeholder engagement are essential to build trust in AI solutions
Most participants focus on efficiency, predictive analytics, and workflow improvements, while Padmini uniquely emphasizes qualitative outcomes like empathy and dignity, revealing an unexpected split in evaluation priorities [353-356][22-23][286-295].
POLICY CONTEXT (KNOWLEDGE BASE)
Calls for broader success criteria that include adoption rates, satisfaction surveys, and ecosystem development beyond pure productivity numbers [S65][S67][S64].
Edge versus cloud deployment for AI services
Speakers: Audience member 1, Jigar Halani
Edge deployment is necessary for low‑connectivity AI use cases, while cloud offers scalability but raises cost and data‑privacy concerns
Cloud vs. edge deployment decisions affect cost, privacy, and latency for AI services
The audience raises a technical deployment question about where to host multilingual voice models [361-382][365-381], while Jigar provides an answer that depends on the use case, highlighting a nuanced disagreement not anticipated in the broader strategic discussion.
POLICY CONTEXT (KNOWLEDGE BASE)
Decision-making between edge and cloud is framed around privacy, latency, and scalability, with recommendations to distribute compute across devices and edge nodes [S69][S70][S55].
Overall Assessment

The discussion reveals several points of contention: the desired speed of AI rollout versus the need for careful, supervised implementation; differing views on whether ethical prescribing or technical readiness is the main barrier; contrasting emphases on quantitative efficiency gains versus qualitative human‑centred outcomes; and varied perspectives on how trust should be built (doctor‑patient trust versus model accuracy and feedback). While participants share common goals of improving healthcare delivery through AI, building robust data infrastructures, and fostering trust, they diverge on the pathways to achieve them.

The level of disagreement is moderate to high. The disagreements are substantive enough to affect policy and implementation strategies (e.g., pacing, regulatory focus, evaluation metrics), but they do not fracture the overall consensus that AI is essential for future healthcare. The implication is that coordinated governance, clear regulatory frameworks, and balanced metrics will be needed to align stakeholders and move forward effectively.

Partial Agreements
Deepak asks if AI is now a KPI for CEOs, and Abhay confirms it is clearly a priority [60-66][67-68].
Speakers: Deepak Tuli, Abhay Soi
AI should be a core KRA for hospital leadership, not just a hype project
Both emphasize the importance of a robust, long‑term data foundation – Abhay describes a 15‑year data lake [14-15], while Nikhil points to ABDM’s federated architecture for local model training [203-210].
Speakers: Abhay Soi, Nikhil Dhongari
Creation of a 15‑year patient data lake and real‑time EMR to enable AI
Federated architecture enables Indian‑specific AI models and reduces bias
Both note that without a culture of data sharing and mandatory digital capture, AI pipelines will lack quality data – Gupta mentions missing data culture despite 860 million IDs [398-401]; Nikhil stresses the need for behavioral change and policy mandates for data capture [416-426].
Speakers: Dr. Rajendra Pratap Gupta, Nikhil Dhongari
A strong data‑sharing culture is essential for effective AI deployment, but current practices lack sufficient data openness
Behavioral change, mandatory data capture, and tough policy decisions are needed to feed AI pipelines
Tanvi argues that continuous education and stakeholder engagement are needed to move beyond one‑off demos [286-295]; Jigar observes a cultural shift where staff are increasingly believing AI is ready, highlighting the importance of mindset change [333-337].
Speakers: Tanvi Lall, Jigar Halani
Education and stakeholder engagement are essential to build trust in AI solutions
A shift in mindset—from skepticism to acceptance—is critical for AI adoption across staff
Takeaways
Key takeaways
AI is moving from a buzzword to a strategic priority and should be embedded as a core KPI for hospital leadership.
Robust digital foundations—such as a 15‑year patient data lake, real‑time EMR, and the ABDM digital ID ecosystem—are essential enablers for AI in healthcare.
Trust, safety, and ethical supervision are non‑negotiable; AI must augment clinicians rather than replace them, especially in high‑risk decisions like emergency triage.
Institutional culture and mindset shifts are required; AI adoption demands redesign of work processes, extensive staff education, and continuous feedback loops.
Future scaling of AI hinges on demographic pressures, predictive health models, and public‑private collaboration to extend care beyond existing infrastructure.
Localization, multilingual voice interfaces, and equity‑first design (starting with low‑resource settings) are critical to ensure AI serves all population segments.
Qualitative outcomes—empathy, dignity, patient‑caregiver interaction—are gaining attention alongside traditional quantitative metrics.
Resolutions and action items
Hospitals should elevate AI adoption to a formal KRA for CEOs and senior leadership.
Accelerate collection of Indian patient data and mandate electronic capture of clinical information to feed AI pipelines.
Implement AI solutions first as assistive safety tools (e.g., predictive ECG alerts) before expanding to efficiency‑driven use cases.
Develop and deploy voice‑first, multilingual AI interfaces to improve accessibility in remote and low‑resource settings.
Encourage public‑private partnerships to share anonymized datasets and co‑develop Indian‑centric AI models.
Institute continuous education programs for clinicians, nurses, and administrators to build AI literacy and trust.
Unresolved issues
Clear regulatory frameworks and standards for AI validation, supervision, and liability remain under‑developed.
How to balance rapid AI rollout with stringent patient‑privacy and data‑security requirements, especially regarding cloud vs. edge deployment.
The extent to which existing AI tools rely on global datasets versus Indian data is unclear; mechanisms to ensure Indian‑centric model training are needed.
Strategies for enforcing ethical prescribing practices and reducing over‑prescription through AI‑enabled monitoring are not yet defined.
Operational pathways for integrating AI alerts into existing clinical workflows without causing alert fatigue were not fully addressed.
Funding models and incentives for sustained AI adoption across both public and private sectors were not concretized.
Suggested compromises
Adopt a phased approach: prioritize safety‑critical AI applications with strong supervision, then expand to efficiency and patient‑experience use cases.
Use AI as an assistive decision‑support tool rather than a replacement for clinicians, maintaining human oversight while leveraging algorithmic insights.
Combine cloud infrastructure for heavy model training with edge processing for latency‑sensitive, low‑bandwidth scenarios, balancing cost, privacy, and performance.
Encourage pilot projects with clear exit criteria and pathways to scale, ensuring that demos transition into embedded workflow solutions.
Implement mandatory data capture policies while providing transitional support for clinicians accustomed to paper‑based processes.
Thought Provoking Comments
When you don’t interface with technology, but the experiences are improved – that is the true test of AI.
Highlights the ideal of seamless AI integration where users benefit without noticing the underlying complexity, shifting focus from flashy tech to real patient outcomes.
Set the tone for the discussion on practical AI adoption, prompting Vikalp to ask about real‑world challenges and leading others to emphasize hidden‑technology benefits rather than visible hype.
Speaker: Abhay Soi
We’ve had a lot of failures… like Edison, we’ll run out of excuses and failures before we finally succeed.
Frames failure as a necessary step toward innovation, encouraging a culture of experimentation rather than fearing setbacks.
Encouraged participants to share their own setbacks (e.g., ICD‑11 tagging) and opened the conversation to talk about the learning curve and the need for resilience in AI projects.
Speaker: Abhay Soi
Trust is the most important thing. Innovation is important, but patients will only trust the doctor they know.
Introduces the human‑centric barrier to AI adoption, shifting the debate from technology capability to patient‑doctor relationship dynamics.
Prompted Abhay to discuss safety‑first AI use cases (ECG example) and led to later remarks about building trust through transparent, context‑specific solutions.
Speaker: Vikalp Sahni
An ECG AI tool could flag a patient for admission even when the cardiologist sees a normal ECG – it can prevent missed heart attacks.
Provides a concrete, high‑stakes clinical scenario where AI augments clinician judgment, illustrating the safety‑first approach.
Shifted the conversation from abstract benefits to a tangible use‑case, reinforcing the earlier point about trust and safety, and influencing later discussion on regulatory oversight.
Speaker: Abhay Soi
The national health policy now explicitly mentions both private and public sectors – we must break the barrier between them to deliver care.
Marks a policy turning point, showing governmental recognition that health delivery is a unified ecosystem, not siloed.
Redirected the dialogue toward systemic integration, prompting participants to consider how AI solutions should serve both sectors and influencing later comments on unified standards.
Speaker: Dr. Rajendra Pratap Gupta
We are moving from purely quantitative AI metrics to discussing qualitative dimensions like empathy, dignity, and care.
Expands the evaluation of AI beyond accuracy and efficiency to human values, urging a more holistic assessment of technology impact.
Broadened the scope of the discussion, leading to reflections on patient experience, trust, and the ethical design of AI systems.
Speaker: Padmini Vishwanath
AI must be personalized, context‑specific, and voice‑first; building transformation means educating users and embedding solutions into workflows, not just one‑off pilots.
Emphasizes that successful AI adoption requires cultural change, user education, and deep integration, not just technical deployment.
Steered the conversation toward implementation challenges, influencing later remarks about data readiness, behavioral change, and the need for sustained engagement.
Speaker: Tanvi Lall
Overall Assessment

The discussion pivoted around three core insights: the invisible yet impactful nature of AI, the centrality of trust and safety, and the necessity of systemic, policy‑driven integration. Abhay’s remarks about seamless experience and learning from failure laid the groundwork for Vikalp’s trust‑centric challenge, which in turn prompted concrete safety examples and policy reflections from Dr. Gupta. Padmini’s shift to qualitative outcomes and Tanvi’s focus on personalization and workflow integration deepened the conversation, moving it from hype to actionable strategy. Collectively, these key comments redirected the panel from abstract enthusiasm to a nuanced, human‑focused roadmap for AI adoption in Indian healthcare.

Follow-up Questions
Is there an AI adoption wave happening in hospitals, and what challenges (regulatory, policy, implementation) are hindering faster adoption?
Understanding barriers to AI adoption is crucial for creating strategies that accelerate integration of AI into healthcare systems.
Speaker: Vikalp Sahni
Is AI now a priority (KRA) for hospital CEOs and operators, similar to earlier digitization priorities like billing and accreditation?
Clarifying AI’s strategic importance will influence leadership focus, resource allocation, and governance.
Speaker: Vikalp Sahni
How ready are health institutions (clinicians, nurses, staff) to keep pace with the rapid evolution of AI technology?
Assessing institutional readiness helps identify training, cultural, and process changes needed for successful AI deployment.
Speaker: Vikalp Sahni
What are the expected changes in Indian hospitals and healthcare over the next three to five years regarding AI adoption and impact?
Projecting near‑term developments guides planning, investment, and policy decisions.
Speaker: Vikalp Sahni
How did the ABDM (Ayushman Bharat Digital Mission) originate, what has worked, what challenges remain, and how will digital documentation affect clinical decision‑making?
Learning from ABDM’s evolution can inform future digital health initiatives and interoperability efforts.
Speaker: Deepak Tuli (to Dr. Rajendra Pratap Gupta)
How can the lessons from ABDM implementation in the public sector be translated to the private sector, and how can AI be embedded deeper into private hospital workflows?
Bridging public‑private gaps is essential for nationwide AI impact and consistent patient experiences.
Speaker: Deepak Tuli (to Nikhil Dhongari)
What approaches can Indian AI model builders use to build trust among physicians and operators so that solutions are adopted in practice?
Trust is a key barrier; identifying mechanisms to earn it will improve uptake of AI tools.
Speaker: Deepak Tuli (to Jigar Halani)
Should voice‑based AI services (e.g., multilingual translation) be deployed on the cloud, on‑device (edge), or via a hybrid architecture, and what are the implications for cost, latency, and data privacy?
Infrastructure decisions affect scalability, accessibility in low‑resource settings, and compliance with data sovereignty rules.
Speaker: Audience member 1 (prompted by Deepak)
For a multilingual voice‑translation AI solution covering 22 Indian languages, where is the optimal hosting location (edge device, mobile, hybrid, or local Indian cloud) and what synchronization strategy should be used?
Choosing the right deployment model is critical for performance, user experience, and regulatory compliance.
Speaker: Audience member 1
To what extent do Indian AI healthcare tools rely on Indian patient data versus global datasets, and how can the reliance on locally sourced data be increased?
Local data improves model relevance and reduces bias; understanding current reliance informs data‑collection strategies.
Speaker: Audience member 2
Are we lagging behind as a country in digital health, and what major outcomes should we expect in the next year if current trajectories continue?
Evaluating national progress helps set realistic goals and identify policy or investment gaps.
Speaker: Deepak Tuli
How many AI models are currently being trained on Indian health data, and what behavioral changes among clinicians are needed to generate sufficient high‑quality data for model training?
Quantifying model development and addressing clinician adoption are essential for building effective, unbiased AI systems.
Speaker: Nikhil Dhongari
How can AI tools be designed to reflect the diversity of contexts, capabilities, and care models across different countries and regions, especially for low‑resource settings?
Ensuring AI equity requires adaptable designs that consider varied infrastructure, language, and cultural factors.
Speaker: Padmini Vishwanath
What research is needed to develop affordable, accurate ICD‑11 tagging solutions for Indian health records?
Current ICD‑11 tools are expensive or ineffective; affordable solutions would enable better coding, analytics, and reimbursement.
Speaker: Abhay Soi
What governance, supervision, and safety frameworks are required to ensure AI‑driven clinical decision support maintains patient safety and data privacy?
Healthcare AI must operate within strict safety and privacy standards to protect patients and gain regulatory approval.
Speaker: Abhay Soi
How does the introduction of AI affect patient trust in doctors and institutions, and what strategies can maintain or enhance that trust?
Trust is foundational in healthcare; understanding AI’s impact on trust informs communication and implementation strategies.
Speaker: Vikalp Sahni (also discussed by Abhay Soi)
What are effective methods to create AI‑ready data ecosystems, including data sharing, anonymization, and feedback loops, to support model development and continuous improvement?
High‑quality, accessible data is the backbone of AI; establishing robust pipelines is essential for scalability.
Speaker: Tanvi Lall (also Jigar Halani)
How can AI be leveraged for predictive health, home‑care, and scaling doctor expertise to meet future demographic demands in India?
Predictive and remote care can address the looming shortage of healthcare infrastructure and workforce.
Speaker: Abhay Soi
What qualitative dimensions (empathy, dignity, caregiver‑patient interaction) should be measured when evaluating AI interventions in health, especially in sensitive areas like palliative care?
Beyond accuracy, AI’s effect on human aspects of care is critical for ethical and patient‑centered implementation.
Speaker: Padmini Vishwanath
How should policy differentiate or unify approaches for public versus private healthcare sectors regarding AI integration and digital health standards?
Policy that bridges public‑private divides can ensure consistent standards and equitable access to AI benefits.
Speaker: Dr. Rajendra Pratap Gupta
What are the cost, connectivity, and privacy considerations for deploying voice AI solutions in remote, low‑resource environments, and is edge‑only deployment feasible?
Understanding practical constraints informs technology choices that can reach underserved populations.
Speaker: Jigar Halani (also audience)
How can unethical prescribing practices be detected and regulated using AI, and what governance mechanisms are needed to enforce ethical behavior?
AI can flag prescribing anomalies, but effective regulation is required to improve clinical practice.
Speaker: Dr. Rajendra Pratap Gupta
What strategies can encourage clinicians to shift from paper‑based to digital documentation, and what policy or enforcement actions are needed to ensure data capture for AI training?
Digital data capture is essential for AI; behavioral and policy interventions are needed to overcome resistance.
Speaker: Nikhil Dhongari

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

AI Without the Cost: Rethinking Intelligence for a Constrained World

Session at a glance – Summary, keypoints, and speakers overview

Summary

The panel opened by highlighting that the rapid AI adoption has led to a scramble for GPU-based infrastructure, often without considering whether applications are built on optimized architectures [3-7]. Bernie argued that reducing computational complexity would allow AI workloads to run on CPUs, edge devices, or even mobile hardware, a step he says is being overlooked in current software development practices [12-17]. He noted that decades of optimization expertise from Oracle and its partner STEM Practice Company provide mathematical methods to lower algorithmic complexity and infrastructure cost [30-31].


Anshumali introduced dynamic sparsity, a technique that selects only the necessary parameters per input rather than performing full matrix multiplication, thereby cutting compute while preserving scaling laws [102-107]. He warned that model parameter growth outpaces GPU memory and compute advances, creating a “memory wall” that will make future large models slower unless new algorithms are adopted [95-98]. To break this plateau, he described a new attention formulation that, when run on CPUs, outperforms GPU-based flash attention for very large context windows, because the algorithm reduces the quadratic cost of attention [152-158].


Kenny reported that their AI-MSET technology can lower compute costs by three orders of magnitude and provide early-warning prognostics for CPUs and GPUs, preventing costly downtime in data centers [178-184][188-194]. He demonstrated a specific use case where MSET reduced anomaly-detection expenses by a factor of 2,500, illustrating the potential for massive energy and cost savings [199-203]. Kevin emphasized that conventional probabilistic LLMs suffer from hallucinations, so deterministic AI architectures that guarantee repeatable outputs are needed for safety-critical domains [390-403]. Ayush explained that generic ChatGPT-style solutions lack enterprise data context and cannot meet the reliability required for decision-making, prompting the need for domain-specific models and governance frameworks [362-369][371-378].


The panel agreed that robust governance must address data sovereignty, GDPR/DPDP Act compliance, and false-alarm mitigation, especially when scaling sensor-driven AI systems [413-418][429-435]. Bernie linked these technical challenges to environmental sustainability, arguing that avoiding massive GPU clusters reduces power, water, and heat footprints, and that more efficient algorithms constitute a responsible AI path [75-78][305-314]. He also mentioned the company’s upcoming quantum-enablement center as a longer-term avenue for energy-efficient computation, while acknowledging that quantum hardware remains a distant solution [635-637]. The discussion concluded that advancing AI responsibly will require integrating mathematical optimization, new algorithmic designs, deterministic models, and sustainable hardware to meet both enterprise needs and societal constraints [12-17][152-158][390-403][75-78].


Keypoints


Major discussion points


Infrastructure cost and the need for optimization – The panel opened by stressing that AI development is dominated by expensive GPU-based clusters and that many projects skip the usual software-level optimization steps, leading to wasteful power consumption and environmental impact.  ([3-8], [12-17], [20-22], [75-78])


Algorithmic and architectural innovations to reduce compute – Several speakers highlighted research on dynamic sparsity, mixture-of-experts, and especially new attention mathematics that break the quadratic scaling of long-context windows, allowing CPUs to outperform GPUs for very large contexts.  ([100-108], [112-119], [124-133], [152-158], [162-169], [170-176])


Energy-saving prognostics (MSET/AI-MSET) and massive compute reductions – Kenny described the AI-MSET suite that predicts hardware failures weeks in advance and cuts inference cost by orders of magnitude (up to 2,500× in a case study), while also eliminating false-alarm cascades in sensor-rich systems.  ([178-184], [188-194], [199-203], [429-441])


Governance, reliability and deterministic AI – Kevin and Ayush explained why probabilistic LLMs are unsuitable for regulated domains, proposing deterministic AI architectures, strict auditability, and enterprise-level data-governance (GDPR, DPDP Act) as essential safeguards.  ([390-401], [402-404], [362-381], [413-421])


Societal and educational implications – The conversation turned to how AI tools affect students and professionals, the risk of over-reliance on “hallucinating” models, and the need for curricula and policy that balance convenience with critical thinking and reliability.  ([623-627], [625-632], [629-632], [560-564])


Overall purpose / goal


The panel was convened to educate the audience about the hidden costs of the current GPU-centric AI boom, showcase mature mathematical and systems-level techniques that can dramatically lower those costs, and discuss how such efficiencies intersect with sustainability, governance, and real-world deployment challenges. The speakers repeatedly urged participants to adopt these alternative methods before the industry’s resource consumption becomes unsustainable.


Overall tone and its evolution


Opening (0:00-10:48) – Technical, urgent, and a bit critical: “we are just buying more GPUs… we are burning the planet.”


Middle (10:48-45:12) – Shifts to optimistic and solution-focused, with detailed explanations of novel algorithms, energy-saving case studies, and concrete governance frameworks.


Later (45:12-84:48) – Becomes collaborative and conversational, featuring jokes, personal anecdotes, and a promotional “let’s work together” vibe while still emphasizing responsibility.


Closing (84:48-end) – Returns to a call-to-action tone, urging the audience to adopt sustainable practices, consider policy implications, and visit the company’s booth.


Overall, the discussion moved from a problem-identification stance to a hopeful, solution-driven dialogue, punctuated by moments of humor and promotional outreach.


Speakers

Ayush Gupta – Representative from Genloop; focuses on agentic data analysis platforms, enterprise AI integration, and cost-effective inference solutions.


Kenny Gross – Senior Distinguished Scientist at Oracle; machine-learning technologist credited with roughly 365 patents [S2][S3].


Kevin Zane – Speaker on AI sustainability; discusses energy-efficient AI and environmental impact.


Participant – Unnamed audience members who asked questions during the panel.


Bernie Alen – Founder and leader of STEM Practice Company; former head of advanced technologies market development at Oracle, now runs an Oracle-partner consultancy [S8].


Anshumali Shrivastava – Professor at Rice University; member of the Super Intelligence team for MEDA, expert in dynamic sparsity, context windows, and efficient attention mechanisms [S9].


Abhideep Rastogi – Representative from Tata Group (U.S.-based operations); works on AI-driven workflow automation and enterprise transformation. [S10]


Additional speakers:


Avi – Mentioned as a participant to take a question; no further details provided.


Full session report – Comprehensive analysis and detailed insights


Opening remarks & problem framing (Bernie Alen)


Bernie Alen opened by warning that the rapid, almost indiscriminate acquisition of GPU-based infrastructure is inflating AI-related costs and bypassing optimisation steps that are standard in large-scale software development. He argued that most AI projects are built on “extensive infrastructure” without asking whether the applications are “optimised” and that the race to hoard GPUs is driven by fear of being left behind [3-8][12-17][20-22]. He linked this wasteful practice to a broader environmental threat, noting that the high-heat, high-failure-rate GPU clusters demand excessive power, water and cooling, thereby “hurting the planet” [75-78][310-312].


Alen positioned his company, the STEM Practice Company, as an Oracle partner that inherits decades of optimisation expertise from Oracle’s work with the world’s largest customers. He explained that Oracle has accumulated “a collection of intellectual property… to reduce complexity of algorithms, reduce computation and therefore create better infrastructure architectures” [23-31][30]. This heritage underpins the panel’s focus on mature mathematical methods that can lower AI infrastructure costs.


Technical solutions


Dynamic sparsity & new attention math (Anshumali Shrivastava)


Anshumali Shrivastava presented data showing that the exponential growth of large-language-model (LLM) parameters far outpaces the logarithmic growth of GPU memory and compute capacity, creating a “memory wall” that will make future models slower and less accessible [95-98]. He traced the evolution from full-matrix computation to static sparsity, then to “dynamic sparsity” – a technique that retains the full parameter set but selects, for each input, only the subset of computations that are actually needed, thereby respecting scaling laws while reducing work [100-108]. Shrivastava noted that mixture-of-experts is a “band-aid” that still relies on GPUs built for dense matrix multiplication [112-119]. He identified the next competitive frontier as the context window, arguing that larger windows are essential for complex, common-sense reasoning but have plateaued around one million tokens [124-133]. To break this plateau he introduced a new attention formulation that reduces the quadratic cost of attention; on CPUs this new math outperforms GPU-based flash-attention for very large contexts [152-158][162-169][170-176].
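To make the dynamic-sparsity idea concrete, the sketch below computes only a per-input subset of a layer's output units instead of the full matrix multiply. All shapes, the cheap proxy scoring rule, and the value of k are invented for this illustration; real systems use far more elaborate selection mechanisms such as locality-sensitive hashing.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy feed-forward layer: 512 inputs -> 4096 hidden units.
W = rng.standard_normal((4096, 512))
x = rng.standard_normal(512)

def dense_forward(W, x):
    """Full matrix multiply: every hidden unit is computed."""
    return np.maximum(W @ x, 0.0)          # ReLU

def dynamically_sparse_forward(W, x, k=128):
    """Compute only the k units predicted to matter for THIS input.
    A cheap partial-dot proxy picks the candidates here; the point is
    only that WHICH units are computed depends on the input."""
    scores = W[:, :32] @ x[:32]            # cheap proxy score per unit
    active = np.argpartition(-np.abs(scores), k)[:k]
    out = np.zeros(W.shape[0])
    out[active] = np.maximum(W[active] @ x, 0.0)
    return out

dense = dense_forward(W, x)
sparse = dynamically_sparse_forward(W, x)
# The entries that were computed agree exactly with the dense result;
# the savings come from never touching the other 97% of W's rows.
computed = np.flatnonzero(sparse)
print(np.allclose(sparse[computed], dense[computed]))   # True
```

The full parameter matrix `W` is retained, matching the description above: no weights are pruned away, only the per-input work is reduced.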


AI MSET for sensor analytics (Kenny Gross)


Kenny Gross described the AI-MSET (Multivariate State Estimation Technique) suite, which predicts hardware failures weeks in advance and therefore avoids costly downtime in data-centre environments. He reported three-order-of-magnitude energy savings and the ability to detect failure mechanisms “in days and often weeks in advance of failure” [178-184][188-194]. A concrete use-case demonstrated a 2,500-fold reduction in compute cost for anomaly detection, illustrating the massive efficiency gains possible without large GPU clusters [199-203]. Gross also highlighted that MSET’s multivariate approach dramatically lowers false-alarm rates, a critical advantage when thousands of sensors generate noisy data [429-441]. He added that the work has been documented in a few dozen publications and presented at four NVIDIA GTC conferences.
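The session did not cover MSET's actual mathematics, which is proprietary, but the general family it belongs to can be sketched: estimate each sensor's value from a memory of healthy multivariate states, then alarm when the scaled residual between observed and estimated values grows too large. Every number, the similarity weighting, and the threshold below are illustrative assumptions, not the real algorithm.

```python
import numpy as np

rng = np.random.default_rng(1)

# "Memory matrix" of healthy sensor states (rows = observations,
# columns = four correlated sensors with different scales).
healthy = rng.standard_normal((200, 4)) * [1.0, 0.5, 2.0, 1.5] + [50, 12, 300, 75]

def estimate(memory, obs):
    """Similarity-weighted estimate of the observation from healthy memory:
    a crude stand-in for nonlinear multivariate state estimation."""
    d = np.linalg.norm(memory - obs, axis=1)
    w = 1.0 / (d + 1e-9)
    w /= w.sum()
    return w @ memory

def residual_alarm(memory, obs, threshold=3.0):
    """Flag an anomaly when any sensor's residual (observed - estimated),
    scaled by that sensor's healthy spread, exceeds the threshold."""
    est = estimate(memory, obs)
    scale = memory.std(axis=0)
    return bool(np.any(np.abs(obs - est) / scale > threshold))

normal_obs = healthy[0] + 0.01
faulty_obs = healthy[0].copy()
faulty_obs[2] += 25.0                       # one sensor drifts far out of range
print(residual_alarm(healthy, normal_obs))  # False
print(residual_alarm(healthy, faulty_obs))  # True
```

Because the estimate uses all sensors jointly, a single drifting signal stands out against the others, which is the intuition behind the low false-alarm rates claimed for multivariate approaches.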


Deterministic AI & auditability (Kevin Zane)


Kevin Zane argued that probabilistic LLMs are unsuitable for safety-critical domains because they produce non-deterministic outputs and hallucinations. He advocated “deterministic AI” architectures that guarantee the same output for the same input, thereby enabling auditability and eliminating hallucinations [390-403][404-405]. Zane stressed that false alarms in human-in-the-loop systems can cause cognitive overload and catastrophic accidents, underscoring the need for provably low false-alarm probabilities [429-435][440-441].
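As a toy illustration of why determinism enables auditability (the scorer table is invented; a real deterministic pipeline would also pin seeds, decoding strategy, and model version), greedy argmax decoding is a pure function of its input, so its outputs can be hashed and re-verified byte-for-byte:

```python
import hashlib

# Hypothetical next-token score table standing in for a model's logits.
SCORES = {
    "the": {"pump": 2.1, "valve": 1.7},
    "pump": {"failed": 3.0, "ran": 1.2},
    "failed": {"<eos>": 4.0},
}

def greedy_decode(prompt, max_steps=10):
    """Deterministic decoding: always pick the argmax token, never sample.
    The same prompt therefore always yields the same output."""
    tokens = [prompt]
    while tokens[-1] in SCORES and len(tokens) < max_steps:
        nxt = max(SCORES[tokens[-1]], key=SCORES[tokens[-1]].get)
        if nxt == "<eos>":
            break
        tokens.append(nxt)
    return " ".join(tokens)

def audit_record(prompt):
    """Hash input and output together so any run can be re-verified."""
    out = greedy_decode(prompt)
    digest = hashlib.sha256((prompt + "|" + out).encode()).hexdigest()
    return out, digest

out1, h1 = audit_record("the")
out2, h2 = audit_record("the")
print(out1)          # the pump failed
print(h1 == h2)      # True: repeatable, hence auditable
```

A sampled (temperature > 0) decoder would break this property: two runs could produce different outputs and different digests, which is exactly the audit gap the panel highlighted for probabilistic LLMs.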


Enterprise-focused LLM deployment (Ayush Gupta)


Ayush Gupta explained why generic ChatGPT-style models cannot meet enterprise requirements. He pointed out that such models lack access to proprietary data and cannot capture the nuanced “context” of a specific business, making them unreliable for decision-making [362-369][371-378]. Gupta quantified the cost driver as GPU-heavy inference and argued that “cheaper inference” is achievable by hosting domain-specific models in-house, which can deliver high-quality insights at a fraction of a dollar per conversation [276-284][285-286].
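A back-of-envelope sketch of the “fraction of a dollar per conversation” claim follows; every price and throughput figure below is a placeholder assumption for illustration, not a number quoted in the session:

```python
# Hypothetical workload: 20 turns, ~200 input + 400 output tokens per turn.
tokens_per_conversation = 20 * (200 + 400)

# Illustrative hosted-API price (USD per 1k tokens) vs. self-hosting a
# small domain-specific model on a rented GPU.
api_price_per_1k_tokens = 0.01
self_hosted_gpu_per_hour = 2.0
tokens_per_second = 800          # assumed throughput of the tuned model

api_cost = tokens_per_conversation / 1000 * api_price_per_1k_tokens
hours = tokens_per_conversation / tokens_per_second / 3600
hosted_cost = hours * self_hosted_gpu_per_hour

print(f"Hosted API:  ${api_cost:.4f} per conversation")
print(f"Self-hosted: ${hosted_cost:.4f} per conversation")
```

Under these assumed numbers both options land well under a dollar per conversation, with the in-house model an order of magnitude cheaper; the real comparison of course depends on actual model size, utilisation, and pricing.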


Adoption roadmap & governance (Abhideep Rastogi)


Abhideep Rastogi outlined a structured, multi-stage roadmap for AI adoption. The process begins with defining the business aim, proceeds through data quality assessment, architecture selection (CPU vs GPU, on-prem vs hyperscaler), pilot execution, governance implementation, and finally production-scale platformisation [324-345][346-349]. This framework embeds compliance checks for GDPR, India’s DPDP Act and forthcoming AI Act regulations, ensuring that AI projects are legally and ethically sound [413-421].


Integration & ecosystem questions


Plug-and-play / ecosystem integration


When audience members asked whether MSET and the other techniques could be integrated with existing LLM-based pipelines, Retrieval-Augmented Generation (RAG) services, and Model Context Protocol (MCP) offerings, the panel responded that these methods sit at a foundational layer and can be combined with current services without major re-architecting.


Policy-maker misunderstanding & EU charger analogy


Abhideep highlighted a common policy-maker misunderstanding by citing the EU-mandated USB-C charger requirement, explaining that regulation often follows industry pressure rather than technical feasibility.


AGI vs. quantum computing


A participant asked whether artificial general intelligence (AGI) will only be possible with quantum computers. Bernie answered that the company’s upcoming quantum-enablement centre aims to demonstrate that quantum processors could eventually deliver the required computation at a fraction of the energy of GPU-based simulation, while stressing that quantum hardware remains a longer-term prospect [635-638].


Broader implications


Sustainability


By moving workloads from GPU clusters to CPUs, edge devices or even mobile phones, organisations can dramatically cut power, water and cooling demands [75-78][310-312]. Kenny’s MSET example reinforced this point, showing that algorithmic efficiency can replace “expensive infrastructure” while also improving reliability [178-184][199-203].


Mathematics research relevance


A student asked whether AI advances render mathematics research obsolete. Anshumali responded that a strong mathematics background remains essential for understanding LLM capabilities and for formal reasoning about AI systems.


Legal-domain hallucinations


A legal-profession participant inquired about the reliability of AI for citations and case law. Kevin and Ayush explained that while 100% accuracy is unrealistic, deterministic and auditable pipelines can flag low-confidence outputs for expert human review, mitigating the risk of hallucinated legal references [390-403][404-405].


Education & policy on AI use


The panel discussed the need for curriculum changes that emphasise problem-solving and creativity rather than rote reliance on AI tools. Concerns about academic cheating were raised, and the speakers agreed that AI should augment learning, not replace it, requiring updated pedagogical policies [625-632][606-618].


Closing & next steps


In closing, the speakers reiterated that AI development must be framed within a “constrained world” where cost, energy and governance cannot be ignored. They called on the audience to adopt the presented optimisation techniques, engage with the STEM Practice Company for pilot projects, and consider the broader policy landscape when deploying AI [456-460][462-470]. The panel expressed differing perspectives on how quickly deterministic, hallucination-free AI can be realised and on the relative emphasis between near-term software-centric optimisations versus longer-term quantum approaches [390-404][152-158][635-638].


Session transcript – Complete transcript of the session
Bernie Alen

Can you hear me better? Is this better? Okay. So, infrastructure cost, it’s a very important topic because everybody who is trying to create something in AI, we all know that we are running into having to use extensive infrastructure, right? And mainly it is a GPU-based infrastructure architecture. And last two, three years, I think we are not stopping to ask the questions that we would normally ask. Are we creating these applications in optimized infrastructure? We are just running around getting as many GPUs as possible because we’re all afraid that the other guy would get it and then we’ll be left out, right? So, I think it’s a very important topic. So, there has been an extremely rapid adoption of AI.

and everybody wants to have an AI answer for everything. And so we are not asking the questions that we would normally ask in any project of this scale, right? So we’re going to take a look at what are the optimization methods and good mathematics that’s existed for a long time that we should bring in to optimizing and reducing the computation needs that these models create, that these AI applications create. And if you reduce the complexity and if you reduce the computation, then you don’t need to run a lot of these things on expensive, high heat generating, high failure rate, limited-supply clustered GPUs. You can run this on CPUs. You can run this on clustered CPUs.

You can run it on edge computers. You can even run it on mobile phones and laptops. There is a software optimization step that everybody is skipping, that we would normally not skip in software development. For any large -scale software development, heavy amount of infrastructure optimization goes on. But we are not doing that in deploying these AI models. So we first want to make sure that there is enough understanding of the mechanisms and the methods that are available. And a lot of this is derived from mathematics that has existed forever. So we’re going to talk about that. I’ve got a great panel over here. By the way, just to introduce my company, we are the STEM practice company.

We are an Oracle Corporation partner company. I don't think I need to introduce the Oracle Corporation; most people know it. If you think about Oracle, they have had to create solutions, software, and products for very large customers; they serve the largest customers on the planet. So they have always had to worry about optimization, performance improvement, and all of that, because without it the infrastructure cost would just be too high. So over decades there has been a collection of intellectual property, a collection of ideas and methods to reduce the complexity of algorithms, reduce computation, and therefore create better infrastructure architectures, right? The STEM Practice Company is an independent company.

We run as an Oracle partner company, but the origins of the STEM Practice Company are within the Oracle Corporation. I led advanced technologies market development for Oracle, and then we separated and launched as an independent company two years ago. Now we operate as an Oracle partner company. Let me introduce the team here. This is a slide that my lawyer says I should show; since I paid the lawyer a lot of money to make up this one slide, I'm going to show it, right? Nobody knows where we are going with AI, to be honest. Nobody knows where we are going with quantum. We're all doing our best to predict what may come, but with any prediction, use your own logic.

That's what my lawyer wants me to say. So I've said it. Okay, let's go to the next one. So this is my panel, and you may not be able to see their names on the screen. Let me start with the gentleman from the Tata Group. We are a U.S.-based company; we launched two years ago, and we just started working on our India operations and India opportunities, and we had the great fortune to start with a Tata company. And I think they are quite happy with what we have shown, because some people say: hey, if you're not using GPUs, not using expensive infrastructure, is there a compromise? Am I introducing more latency? Am I creating less confident output?

None of that. In fact, all of that gets better. And we were able to demonstrate, with the opportunity we got working with Tata, that we are getting 100% accuracy and we have not used any GPUs at all in the infrastructure that we have proposed. Right? Okay. So that is Mr. Abhideep at the back, with Tata. Say hi, wave. Okay. And the gentleman next to him is part of the STEM Practice Company. He is from Oracle; I did steal him from Oracle, because it was essential to at least steal some people before Oracle gets pissed off at me. So Kenny Gross is a senior, distinguished scientist from Oracle.

He has a patent for every day of the year; his patent count is approaching 365. So that's Kenny Gross, a master machine learning technologist. And next to him is a professor from Rice University, Anshu, who also serves on the superintelligence team at Meta. And Anshu is very passionate, as a professor; professors are usually passionate. My father is a professor, right? That's why he lives here and I live in the U.S., that far away from him, okay? Just because that's the way I can deal with this passion. But Anshu is very passionate about the fact that all of these methods, these methods to do things better, have existed, so let's make sure we are bringing them in and creating awareness of them. He's going to talk about how he sees the challenges that are coming and how we already have methods to address them. By the way, this panel is going to be very interesting, so all of you can start texting two or three of your friends to start showing up here, so we can spread the word more. And next to him is somebody you all may already know, from one of the top successful companies working on the foundations of AI that are shaping up in India: it's Ayush from Genloop, and he will talk about this in very much an Indian context, because he has a front-row seat to everything that is going on. And then the last person on the panel is the one I'm most proud of, because he's my nephew, and he is in IIT.

I went to BITS Pilani, so that part I'm not proud of, that he goes to IIT and doesn't go to BITS, right? By choice, right? So, he is at IIT Madras, and he is working closely with us and learning deeply how to build some of these very complex AI methods up front, right? Okay, so we are a small enough group here, so always feel free to interrupt, raise your hand, come up, ask questions. The goal is to educate, because I think we are going very fast, we are spending a lot, and we are creating problems: we need more power generation, and we need it rapidly, and because of that, we are causing harm to the planet.

So this is good: all this mathematics has existed for a long time, but it's never been productized because there's never been a market. But now there's a phenomenal market for it, right? Because mathematicians are poor people, right? We have a paper and a pencil most of the time. But now we can productize it and bring these solutions to the market before we end up burning the planet. Right? Okay. So let's go on. Here is what people are going to talk about: Professor Anshumali is going to talk about the problems that are coming up and how we have solutions, demonstrated solutions, benchmarked solutions, to address them.

Dr. Kenny Gross is going to talk about doing a large amount of real-time, stream-based AI without using any neural networks at all (not just without GPUs, but without neural networks), and getting a very high level of accuracy and a very low rate of false warnings at a tiny fraction of the cost that everybody else is spending. Okay? And then we'll have questions for the panel, and questions from the audience. But we have no problem making this collaborative, so if a question can't wait till the end, just raise your hand, ask it, and we'll talk about it. Okay. Now I'm going to turn it over to Professor Anshu to come up here and talk about how we are already ready to address

Anshumali Shrivastava

the challenges that are coming up. Right? Thank you very much. Can you guys hear this? Okay. So I'm pretty sure you have all heard that we need AI without the cost; the cost is too much, right? There are never enough GPUs. How many of you have heard about solutions, and how many of you have thought, yeah, this is an idea that will definitely work, or at least there is some merit to these ideas? I think we are going to go into that. The first part, that we need AI without the cost, is kind of obvious, so I'm not going to rant about it, though I will talk about something that motivates why the problem you are about to see is just going to get worse. I don't know if you can see the plots here, but on the x-axis is the year, and on the y-axis is the parameter count of the LLMs. Now, you see two interpolated straight lines. The green one is the amount of memory available in the GPUs, the H100s, A100s, and so on, and the red one is the memory, the model parameter count, on the demand side: models like GPT-3, GShard, Switch Transformer, Megatron, etc. What do we see here? That the rate of growth of hardware (and by the way, this is on a logarithmic scale, so it's exponential) is nowhere close to the rate of growth of demand. The other plot, which you cannot see, is similar, but it's in compute: the teraflops or petaflops that GPUs can offer versus what we need to reach a certain latency. This was a famous paper from Berkeley called "AI and Memory Wall," and what you should expect is that if you are hoping your latency will become better with the next LLM, that's not happening unless there is some breakthrough. Okay, so models will get bigger, but the hardware will not be able to cope, not even with the GPU growth, which means models will feel slower; the better models will feel slower, inaccessible, unattainable, right?

I mean, there are many models that I'm pretty sure you cannot even run on whatever infrastructure you have, and this is going to get worse. That's what this plot says. So clearly, there is a need for what we are talking about here. So, a little bit on past work. One idea that is very popular, which I've been working on since 2016 and which is now catching on as mainstream, was this: why do we do full computation? Let's do sparse computation, and not static sparsity, but dynamic sparsity. What is dynamic sparsity? Well, I need all the parameters, so I'm not throwing anything away; I'm not going against scaling laws.

But I will pick which ones I need based on my input, dynamically. And that is called dynamic sparsity, right? So I've shown you two cartoons here. The traditional model is that you do all the computation, which is what GPUs were built for, right? And the argument now is, well, you don't do all the computation; you only do what is needed. But GPUs are not quite built for that. There is a sweet spot in between, though: you can do block sparsity and make it work, which is what mixture of experts is, right? So mixture of experts is now the de facto way of training large language models. So one idea is obviously there.
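The dynamic-sparsity idea Anshu describes can be sketched in a few lines. This is only a toy illustration under assumptions of my own (a hypothetical router over a small pool of random "experts"), not his actual method: a gating network scores the experts, and only the top-k of them run per input, so only k/n of the dense FLOPs are spent.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_experts, k = 16, 8, 2          # hidden size, expert count, experts used per input

W_gate = rng.normal(size=(d, n_experts))                        # router weights
experts = [rng.normal(size=(d, d)) for _ in range(n_experts)]   # one matrix per expert

def moe_layer(x):
    """Route input x to its top-k experts instead of running all n of them."""
    logits = x @ W_gate
    topk = np.argsort(logits)[-k:]                # dynamic: chosen per input, not fixed
    weights = np.exp(logits[topk])
    weights /= weights.sum()                      # softmax over the selected experts only
    # Only k of the n_experts matrices are ever multiplied -> k/n of the dense FLOPs.
    return sum(w * (x @ experts[i]) for i, w in zip(topk, weights))

x = rng.normal(size=d)
y = moe_layer(x)
print(y.shape)   # (16,)
```

All parameters still exist (no static pruning), but each input touches only a fraction of them, which is the "I need all the parameters, but I pick which ones per input" point.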

But remember, the fundamental kernel of GPUs was always built for full matrix multiplication, and mixture of experts was kind of a bandage that seemed to work. Obviously, we need a lot more, as we have seen. So let's take a pause here. We have all seen the evolution of models, right? Getting a foundation model, large-parameter models with large capability. Where is this all going? What is the next race? I want to argue here that the next race is the context window. Why? What is a context window? Is everybody familiar with what the context window of an LLM is? It's kind of a working memory, right? So let's say I want to solve a simple problem like 2 plus 2; that only requires a very small context. But say I want to solve an Olympiad problem: you are asking me to prove a theorem, and I generate 40 intermediate theorems. I need all of those theorems in my context to get to the next one; if I miss any of them, if one goes out of context, I cannot prove things. So a bigger context window means I can process more information, correlate across it, and make decisions. Complex workflows will start to happen when the context window grows, and that is what we have seen with the GPTs, right?

GPT-3 came with a small context window, and the context window has been growing. Now we know Claude Code kind of works because it has something like a 200K context window, right? And even then, I don't know how many of you have experienced that you have to compact the context because you run out of context window, right? What this plot shows, on the x-axis (I don't know if you can see it), is the year, and on the y-axis is the context window. What do we see? Almost a flat plateau after a while. And by the way, the 10-million-token context window is experimental. The closest you can actually achieve and play with is 1 million.

But it has plateaued. People are not talking about 100-million-token context windows and beyond. And it is very clear to people that more complicated tasks mean a bigger context window. We believe this is the next race; at least, I am very bullish that this is the next race. You want to do complex automation, very complex automation, right? We talk about building agentic workflows and all that, but I believe we are underestimating how much complex automation we want to do, and we are underestimating how complex common sense is. Common-sense workflows require a lot of reasoning, and that will not happen unless we have large context windows.

But large context windows are plateauing, and we are talking about some of the frontier models. So let me tell you what the current problem is. The mindset is: okay, the kernel remains the same, which is full matrix multiplication; let's apply bandages like mixture of experts and whatever else, stretch as much as we can, and see where we go. That's strategy number one, probably the one strategy we know of. It seemed to work, but it has plateaued; we've seen in the previous plot that it has plateaued. What I am bullish on is that we have to rethink things to break that plateau. Okay. And again, I'm not going to go very technical. This is an upcoming paper at ICLR. But I want to argue there is a new math, a new way of doing attention.

Again, I'm not going to start uttering words like sharpened softmax and exponentiated whatnot; you can read the paper. It's coming at ICLR this year and will be presented in Brazil this summer. But what we have shown is that if you change the math of attention, then there is something that gives you the same capability at a different cost. So it's changing the math, rethinking the math, like dynamic sparsity; it's some sort of sketched way of estimating things. What is interesting is that we have experimented with this. So if you see this plot, on the x-axis is the context window and on the y-axis is the latency: the time to first token, or tokens per second. The two red curves are the best attention mechanisms, FlashAttention-2 and FlashAttention-3, on the best possible hardware, the GH200, and the green one is the new math on a CPU. Now, what is interesting is that if the context window is below 131,000 tokens, GPUs are obviously faster, which makes sense. But as I go beyond that, the CPUs dominate.

And actually, it's not the CPU; it's the algorithm. The reason is that context windows scale quadratically in attention. You can throw as much hardware as you want, but you cannot beat quadratic complexity, right? You are throwing a linear number of GPUs at something that grows quadratically: the cost goes 10, then 10 squared; 100, then 100 squared; and you are just doubling your hardware. That's not going to work. That is what this plot shows; it says something fundamental. So what we are trying to argue, and again I'm not going to bore you with the math, is this: remember the title of the talk, the how. The how part is the rethink.
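The quadratic argument can be made concrete with back-of-envelope arithmetic. The formula below is a standard rough estimate of attention score FLOPs; the head size is an illustrative assumption, not a number from the talk:

```python
# Illustrative only: attention cost grows ~quadratically with context length L,
# so linearly scaling up hardware falls further and further behind.
def attention_flops(context_len, d_head=128):
    # pairwise query-key scores (L x L x d_head), times 2 for the weighted value sum
    return 2 * context_len**2 * d_head

for L in [1_000, 10_000, 100_000, 1_000_000]:
    ratio = attention_flops(L) / attention_flops(1_000)
    print(f"L={L:>9,}  cost relative to 1K context: {ratio:>12,.0f}x")
```

Going from a 1K to a 1M context multiplies the attention work by a million, which is why "just add GPUs" cannot keep up and why a change in the math of attention itself is the lever Anshu is pointing at.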

We have to rethink how attention is done. Because in the current race, if you have 10,000 GPUs and the other person has 1,000, you are 10x ahead of them, but that race is plateauing because of the quadratic complexity. So yes, you will always be ahead because you have more GPUs, but not very far ahead. If we change the math, though, then we can actually break that plateau, and I believe we can unlock capabilities of the next level. We will see the automation that we hope is possible. Again, parameter counts and benchmark hacking: we have seen enough of that; we now want to see complex tasks getting done. And as an academic, one of the things you get to do is ask hard questions and think about them for a very long time.

So for me, the next race is: can I break the barrier of how complex a task we can solve with LLMs using this context window? I believe that if we can make progress there, that's very tangible, real progress. So can we get to 100 million tokens of context faster than others? I think we can. And with that, I will stop.

Kenny Gross

So the energy savings come from the three-orders-of-magnitude lower compute cost. We've done four presentations at NVIDIA GTC conferences demonstrating, with real data, the reduction in compute cost. The other aspect I wanted to mention in terms of data centers is prognostics for avoiding downtime in servers and chips, CPUs and GPUs. We developed, and published long ago in a few dozen publications, the new AI, MSET. MSET is capable of detecting all the mechanisms that cause CPUs and GPUs in data centers to fail, days and often weeks in advance of failure. This avoids downtime. Now, in data centers five years ago, downtime wasn't a big deal, because if you're just doing web-serving applications, or even database applications, there's a lot of horizontal redundancy.

With the new AI workloads, though, when a company is running a five-day training run on their LLM, system board failures are very costly. One spinoff of MSET for data center applications is called electronic prognostics, where we're able to detect all the mechanisms that lead to failures of chips and system boards in the data centers. And the final point I wanted to make, with that bottom bullet there, is about data. What we always tell other industries (and MSET has been used in locomotives, wind farms, all aspects of utilities, and all defense domains: land, air, sea, and space) is this: whatever system you're using now, if you have data, historian data, we welcome doing a blind bake-off with your own data.

Whatever technique you're using now, a third-party commercial technique or a homegrown technique, we'll be happy to demonstrate with your own data in a bake-off where the winning criteria are the lowest compute cost, the earliest detection of incipient anomalies in the assets, and the lowest false-alarm and missed-alarm probabilities. With conventional approaches, it's the false alarms that cause a lot of losses, from unnecessarily shutting down revenue-generating assets that are not actually broken. And missed alarms can be catastrophic; in life-critical industries, they can be extra catastrophic. So that's an overview of our AI, MSET. I'll turn it over now.
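The two bake-off error rates Kenny names can be stated precisely: the false-alarm probability is the fraction of healthy periods that trigger an alarm, and the missed-alarm probability is the fraction of degraded periods that trigger none. A tiny worked example with made-up labels:

```python
# Hypothetical example of the two bake-off error rates; the labels are made up.
truth = [0, 0, 0, 0, 1, 1, 1, 0, 1, 0]   # 1 = asset actually degrading
alarm = [0, 1, 0, 0, 1, 1, 0, 0, 1, 0]   # 1 = the technique raised an alarm

false_alarms = sum(a and not t for a, t in zip(alarm, truth))   # alarm while healthy
missed_alarms = sum(t and not a for a, t in zip(alarm, truth))  # silent while degraded
healthy = truth.count(0)
degraded = truth.count(1)

print(f"false-alarm probability:  {false_alarms / healthy:.2f}")    # 1 of 6 healthy periods
print(f"missed-alarm probability: {missed_alarms / degraded:.2f}")  # 1 of 4 degraded periods
```

The two rates trade off against each other through the alarm threshold, which is why a fair bake-off scores both rather than accuracy alone.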

Bernie Alen

Okay. Thank you, Kenny. So in one of the use cases where we used the MSET method for process anomaly detection, the cost of running the use case was 1/2,500 of the usual cost. It's not just a 10x reduction or a 20x reduction; you're talking about a reduction of 2,500 times, right? That's the power of these kinds of protocols. So certainly, before you start implementing whatever AI method you're using, whatever solution you're going after, educate yourself on these kinds of methods that exist, and feel free to reach out to us. Not everything needs to go through a massive GPU cluster to be solved. Okay. So we're going to go to the panel now and ask the panel some questions.

I'm going to start with all the panelists here, and we can first talk about this: things have never been this crazy, right? I mean, in the last two, two and a half years, the world has kind of gone mad in some ways, because everybody is chasing this, and everybody feels a great sense of urgency to chase this. So how do you all see this? I think there are a lot of challenges in AI. Maybe we'll start with you, Abhi, and then we'll come down closer to me.

Abhideep Rastogi

Sure. So what I've observed in the recent past, taking the last two to three years as an example, is that we started with the GenAI chatbot. That was a very big thing at that point. Now I can see the trend that everything is moving from the GenAI chatbot toward, I would say, workflow automation, where agentic AI and agents are running at the executive level as well as in enterprise tools, already executing entire workflows that were supposed to be handled by a person in a particular role, something like that. So it's this automation that I see in our current organization, and even when I talk to other clients, they are also looking for these kinds of things.

That's my understanding on this.

Bernie Alen

Kenny? Kenny, do you want to comment on the same thing? What challenges are you seeing with how we are doing things now and how fast we are going? What is your prediction for what's coming our way?

Kenny Gross

One of the early challenges with MSET pattern recognition was getting the sensor signals out of the asset to a central location. That challenge has been solved now for most industries, and certainly for the data center industry. The challenge in the early days, when the two biggest locomotive manufacturers in the United States licensed MSET, was that they had to put a computer on the train to monitor the signals, because there were no good techniques for offloading the signals from a locomotive. Now there are good wireless networks for bringing the sensor signals out. And back to data centers: at Sun Microsystems we developed computer system telemetry that picks up all the signals from all the sensors and processes inside servers.

Voltages, temperatures, currents, fan speeds; in many cases there are vibration sensors in the servers too. Thousands of variables. And we've made a very lightweight harness that doesn't interfere with the customer's compute capacity at all; it runs on the system processor and brings the telemetry out. So that challenge has been solved. And now, with the latest GPU servers, there is a commercial system, Prometheus, and on December 15th NVIDIA released freeware telemetry for all their servers and clusters. So that challenge has been solved, and we at STEM can show you how to stream the signals from any asset, airplane engines, autonomous vehicles, anything, into a compute box that is lightweight, runs on CPUs, not GPUs, and gives real-time prognostics with early warning of incipient problems, not a high/low threshold.

That's what they use now, and by the time something hits a high threshold, something is already severely wrong, or the system crashed before it ever got to the threshold. We are able to detect the onset of anomalies below the noise floor; they're buried in chaotic noise, and MSET is able to detect their onset. So that would be the challenge: if somebody doesn't have sensors in their assets, they're going to have to wait until next year's model and put sensors in. But most assets now have lots of sensors; they just don't have a good technique to consume that data and give prognostics without having to send somebody off to get a master's degree. It works out of the box.

We hook the sensor signals up to MSET and get early-warning annunciation of anomalies. And the energy savings are very significant, because the control algorithms now have highly reliable signals going into them. MSET is the only technique that can disambiguate between sensor problems and problems in the assets, so the control algorithms are using fully validated signals, and operation is much more efficient. And if anything starts to go wrong in the assets, you get an early warning of it.
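MSET itself is proprietary, so the following is only a generic similarity-based sketch of the idea Kenny describes, under assumptions of my own: estimate what each multi-sensor reading should be from a memory of healthy training vectors, and alarm on the residual between observed and estimated, rather than on a raw high/low threshold. All signals and constants here are made up.

```python
import numpy as np

rng = np.random.default_rng(1)

# Memory of healthy training vectors: two correlated "sensor" channels.
t = np.linspace(0, 20, 400)
healthy = np.column_stack([np.sin(t), 0.5 * np.sin(t) + 0.1]) + rng.normal(0, 0.02, (400, 2))

def estimate(obs, memory, k=10):
    """Expected reading: a distance-weighted blend of the k nearest healthy vectors."""
    d = np.linalg.norm(memory - obs, axis=1)
    idx = np.argsort(d)[:k]
    w = 1.0 / (d[idx] + 1e-9)
    return (memory[idx] * w[:, None]).sum(axis=0) / w.sum()

def residual(obs, memory):
    """Alarm statistic: distance between what we saw and what we expected to see."""
    return np.linalg.norm(obs - estimate(obs, memory))

normal_obs = np.array([np.sin(1.0), 0.5 * np.sin(1.0) + 0.1])
# Each channel alone stays inside its healthy min/max, but the correlation is broken:
drifted_obs = np.array([np.sin(1.0), -0.3])

print(residual(normal_obs, healthy))   # small: consistent with healthy behavior
print(residual(drifted_obs, healthy))  # much larger: flags what a per-channel threshold misses
```

The drifted reading would pass any per-channel high/low check, which is the "detect anomalies below the noise floor, before a threshold trips" point; the residual flags it because the joint pattern across sensors no longer matches anything in the healthy memory.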

Bernie Alen

Thank you, Kenny. Anshu, what’s your take?

Anshumali Shrivastava

So, I mean, as I already said, I'm very bullish on long context. Let me give you an example. By this time, we all know chess is easy, math is easy, programming is easy, right? I think common sense is very hard. Common sense is very hard in a short context, even for humans, and easy in a long context. If I keep talking with you over a period of time, no matter how hard I think, I'll figure out that you're bored. It will take me some time, but I'll figure it out. So over a long interaction, you need long context to figure that out. And machines are gaining context right now, but they are gaining it quadratically, which is what I talked about.

So I believe the biggest complaint in enterprises right now is that agents do not have common sense. They hallucinate. They are not 99% agents; they are like 50-60% agents. To go from 50 or 60 to 99, you need that context: you are working with a human, and over a period of time you figure out, damn, this person needs this. And that will happen when we have really long context. So I will just double down on what I said: I think the next thing is efficiency and long context.

Bernie Alen

Very good. And what do you think? You have a front-row seat to everything that's going on around here, so you have not only the large context, you have the relevant context too. So tell me.

Ayush Gupta

First of all, thanks for the question, and a very good evening to all who have joined. So we are in the space of unifying the entire data universe of an enterprise and providing an agentic data analysis platform. What that means is that a normal business user, who so far was used to just static dashboards, can come onto the system, have conversations, get proactive insights, and do better decision-making, faster. So the most exciting part in that context for us is how proactive decision-making and the right quality of insights can help improve an enterprise's top line, bottom line, efficiencies, and so on. For instance, we have increasingly seen that the need for the big data warehouses and ETL pipelines that so far had to be maintained will go down in the future, because until now everything had to come into a single source-of-truth table, from where human analysts could query and get insights or power these Power BI dashboards.

But now, with agentic analysis, when agents can connect to different data sources and different modalities, not just tables or PDFs but also images, presentations, documents, etc., you might not need to create multiple replicas, copies, and versions of the data set: the bronze tables, silver tables, gold tables, and so on. You might just connect to those native systems of record directly and get the insights required. We have seen that happening with a lot of our enterprise customers: they are able to see value when agentic analysis gives their business users very good insights. So that is the most exciting part for me: how can data analysis give ROI to an enterprise? And the challenge there is exactly quality and reliability. How do you make sure those insights are of quality? Not just "hey, the sales are down," but why are the sales down, and what are the next steps you can take to fix them? If you have not been able to achieve your incentives or your targets in your store, what is going wrong, and what are the other stores doing that you could learn from and do better? And the other part is the reliability of insights.

It's not just getting it right 1 out of 10 times; it's getting it right 10 out of 10 times, even on questions that are less known, or unseen, and still unlocking value. And lastly, I touched on the ROI point, and that is where there is synergy with what STEM is doing. In the US it's still fine to charge roughly a dollar for one insight; if I do the rough math, that still comes out to a decent enough ROI when you are paying $125,000 to a data analyst for the same insights, in case you have to hire one. But in India the cost has to come down even further; it has to be probably 1 rupee per conversation to unlock the same quality of insights.

And the major cost driver is the GPU: how do you get cheaper inference? That is where I'm excited about what you guys are doing at STEM. We are hosting our own models many times over; we are also one of the companies training SLMs to power this use case. So the exciting question for us is: can we have an alternate architecture that scales and gives us a very cheap cost of inference, so that we can offer the same technology at much greater scale?
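Ayush's ROI point can be put in back-of-envelope form. The analyst salary and the roughly-$1-per-insight figure are from the talk; the insight volume and the exchange rate below are assumptions of my own, purely for illustration:

```python
# Back-of-envelope version of the ROI argument; several numbers are assumed, see comments.
analyst_salary_usd = 125_000          # yearly cost of a US data analyst (from the talk)
insights_per_year = 250 * 40          # assumption: 40 insights per working day
cost_per_insight_us = 1.00            # ~$1 per insight is acceptable in the US (from the talk)

us_ai_cost = insights_per_year * cost_per_insight_us
print(f"analyst: ${analyst_salary_usd:,}  vs  AI at $1/insight: ${us_ai_cost:,.0f}")

# India target from the talk: ~1 rupee per conversation.
usd_to_inr = 85                       # assumed exchange rate
target_inr = 1
required_reduction = cost_per_insight_us * usd_to_inr / target_inr
print(f"cost reduction needed vs US pricing: ~{required_reduction:.0f}x")
```

Under these assumptions the US pricing already beats hiring, but hitting a 1-rupee price point still demands nearly a hundredfold cheaper inference, which is why the architecture of the inference stack, not just the model, becomes the lever.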

Bernie Alen

Very good. And before I go to you, I want to say that this is why I think a lot of these solutions can be perfected in India: because India is going to throw the toughest problems at us, and we've got to solve them at massive scale. India has more people than anywhere else; everybody knows that. But India also has more mobile devices than it has people. And you talk about sensors: tens to hundreds of thousands of sensors, all coming in from a very large population, and you are telling me I need to give it to you for a rupee, right? So India is going to throw the toughest problems at us.

I saw this phrase somewhere: built in India, but for the world. I think if we solve these problems here, then we have wonderful solutions for everywhere else, right? Okay. So what excites you? You just got into IIT Madras and you are doing well; thank you for that, even though you didn't go to BITS like I asked you to. So go ahead and talk about what is exciting to you and where you see the challenges.

Kevin Zane

I think I'll start with the challenges on this one. The challenge I'm going to talk about is the sustainability of AI, because that's something that's grown increasingly relevant of late, and, as Anshu here said, we're rapidly approaching a hard limit on how scalable GPU-based infrastructure is, with a very large impact on the environment through the water and power required to fuel these GPU server stacks. So I'm excited mostly about what STEM is doing: STEM's ability to use better algorithms to increase AI's efficiency and speed without taking up massive amounts of power and water and damaging the planet in the process.

Bernie Alen

Very good. We are taking that water and that power and that planet from you, the younger generation. That's the key point, right? I mean, the rest of us have already been around, but that's the thing, and it's very important. By using this expensive infrastructure, we create other high costs: I need more power, I will generate more heat, and therefore I need cooling, and the cooling needs more power, and we need more power plants, and everything will break down because none of these are very reliable systems. We need to be very careful about what we are doing to the planet by moving this fast and believing that this is the only method out there.

So it’s a very high responsibility and a high burden on everyone to understand these other methods that exist. These are good mathematics so that the software can reduce the hardware requirements. That’s the sustainable method that’s out there. That’s the responsible method that’s out there. Okay. Let’s go to the next question. So it’s about process. So maybe we’ll start. I’m going to start with you there, Abby, from Tata, right? So once we know what to do, how do you take an organization through that change of going from manual processes to automation and automation of decision making, right? Which is what autonomous nature comes in and artificial intelligence comes in. And we got to address what it means for people who were so scared about job loss and everything else, right?

So, talk about the process.

Abhideep Rastogi

So in our organization, what we do specifically is follow multiple stages. If any use case comes to us, anyone asking to perform certain tasks through AI, that's a very broad term, right? So we start with stage zero: what is your aim in using AI? Is it cost reduction? Is it revenue? Or is it something you want for customer experience? Once we finalize that, then we come to stage one, where you map AI to the opportunity you have been handling: okay, I'm interested in revenue generation, so it will attach to the finance department, and how will a finance application be useful for that?

That's where the stages come into the picture. Once you finalize stage one, the next stage is: what about your data? Data is the critical part of the journey and the transformation. Where is your data? Does it have quality? Does data lineage exist? What are the sources of the data? Is it legacy data? Is it already clean, or does it need to be transformed into clean data? So that is the big picture where the data part comes in. Once you have the data and all your alignment is done, the next stage is your architecture strategy, and that's a big umbrella. First, under architecture, you have to finalize your deployment strategy: are you looking at GPUs or CPUs? Then, what type of deployment are you planning? Is it on premises, or on a hyperscaler? Once you finalize the deployment, then you come to the model: are you looking at an SLM or an LLM, what else needs to be done, and where are you going to host the model? Once your architecture is finalized, then your compute also comes into the picture: what compute strategy are you looking at? Are you going to run on virtual CPUs, or is it something you can run on your local system as well? So it depends, use case to use case, right?

Once you have done that, what we prefer is a pilot execution, where we get to know what the accuracy is, what the ROI can be estimated at, and how this particular use case is going to achieve its target. Once this is done, governance comes into the picture as the next stage, where you will have guardrails and policies: whether there are GDPR compliances, or something like HIPAA where healthcare is concerned, right? Once your governance is finalized, you go on to platformization: from a POC to a productionized, enterprise-level deployment, where you will have everything sorted.

So you have all the details of what you are going to do, and you will be ready to go live with the AI transformation. These are the stages we usually follow. But the next stage, which we follow internally, is for your employees: how are they going to learn what we did? That is just as important, because in the future new use cases will keep coming up, and with that background you have better alignment. So this is how we usually approach transformation in our organization.
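
The stage-gated process described above might be captured as data plus a gate check. A minimal sketch: the stage labels paraphrase the talk, and the code itself is purely illustrative, not any real framework.

```python
# Illustrative sketch of the staged AI-adoption process described above.
# Stage labels paraphrase the talk; the gate logic is hypothetical.

STAGES = [
    ("0: aim",          "cost reduction, revenue, or customer experience?"),
    ("1: opportunity",  "map the aim to a department and application"),
    ("2: data",         "quality, lineage, sources, cleaning needs"),
    ("3: architecture", "GPU/CPU, on-prem vs hyperscaler, SLM vs LLM, hosting"),
    ("4: pilot",        "measure accuracy and estimate ROI"),
    ("5: governance",   "guardrails, GDPR/HIPAA compliance"),
    ("6: platform",     "POC to production to enterprise deployment"),
]

def next_stage(completed):
    """Return the first stage whose gate has not been passed yet."""
    for name, _question in STAGES:
        if name not in completed:
            return name
    return None  # all gates passed: ready to go live

print(next_stage({"0: aim", "1: opportunity"}))  # -> 2: data
```

The point of the ordering is the one made in the talk: you cannot meaningfully pick an architecture before the data questions are answered, and you cannot productionize before governance is in place.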

Bernie Alen

Very good. I'm going to take that and segue into what I wanted to ask you, Ayush. He talked about governance, right? And I don't think we have completely cracked the code; we don't have a code on that, right? Because one of the questions we wanted to ask you, given what you're doing, is this: at one point, AI was synonymous with ChatGPT. Outside of our technical circles, if you talk to a doctor or a lawyer, they say, oh, I'm using AI. What are you using? I'm using ChatGPT, right? And especially those two professions I mentioned are very concerned about governance, right? So if you could talk more about that aspect: when you take the models to the end user, why is it not all just ChatGPT?

And is it governable? If you have these big, large open-source models, and whatever you're building on top of them, at the end of the day, is the intelligence that's been created governable?

Ayush Gupta

So ChatGPT has definitely been very instrumental in democratizing AI and has become a symbol of what AI means in the new world, so I'll give them credit for that. But definitely, for an enterprise, ChatGPT does not solve the majority of the problems. It can be good for lighter tasks like email writing or some personal planning, etc. But in an enterprise, it is about taking real decisions, or even doing actions like: I'm in the customer success team and I want to create a presentation for my customer around their usage in the last month, the issues that they had, and how much time we took to solve them.

This is something that cannot be done on ChatGPT, for two reasons. One, it does not know your enterprise data. You cannot connect all your know-how to systems like OpenAI's, because OpenAI and Anthropic are all tracking what kind of activity is happening on top of their APIs and then planning what the next expansion would be as an application. We've seen that with the Cursor coding use case transitioning into Codex from OpenAI and Claude Code from Anthropic. So you never connect your enterprise data, because of the compliance, privacy, and, tying back to it, the data governance aspects. Second is the context.

Now, what separates a steel company in Texas, in the US, from a steel company in another US region, or any one company from another even in the same vertical, is the context: the culture of doing business, the KPIs, how the processes are set up, what actions they take when actually doing an RCA, the decision-making activities. That is basically the core of the business. That core of the business is not known to systems like ChatGPT or Claude, for two reasons again. One, they don't know that process; the data is not exposed to them, so they don't know how to do it.

Second, they are very general, stateless APIs that will never be able to understand those nuances without learning them. So those are the things that become the reality of enterprises, and that is why the ChatGPTs are not solving the real enterprise problem: the context and the understanding of the business itself.

Bernie Alen

Very good. So, leading from that, Kevin, what I would ask you is this: what he said was that a large enterprise's context needs to be understood by open-source models, and there's a responsible way to do that, right? You cannot just release all enterprise information to the public. But he also said that we need things like root cause analysis to be done, which leads to deterministic AI, right? So talk about deterministic AI, and also about the sovereignty aspect he mentioned: that we may be using public-domain models where it makes sense, but we need to do it in such a way that the data is completely sovereign.

Go ahead. Talk about it.

Kevin Zane

See, deterministic AI is a solution to a very specific problem with most modern large language models, which is that they're quintessentially probabilistic. You can give ChatGPT the same prompt twice and you will get a different result. ChatGPT also has the capability to just make things up; it is not bound to fact, and it is not bound to a stringent set of rules. And that's great when you want to generate a picture of a cat on the Eiffel Tower or write a Shakespearean ballad. But if you need to apply it in production, then hallucinations and false data are not something you can afford in situations like cybersecurity or the medical field, and that's the very specific problem that we use deterministic AI to solve, right?

At its core, it's an architectural response to this problem. We don't eliminate machine learning entirely; we bind it within a set system and a set of rules, right? The objective isn't open-ended generation but controlled and auditable execution. So generally, I would say there are a few very core principles to this approach. Your system has to be predictable, as in your responses must give the same output for the same input, right? Because that directly leads to auditability, while still maintaining intelligence, which is a very difficult thing to do.
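
Those principles (same output for same input, execution bound by a rule set, an audit trail) might be sketched like this. Everything here is invented for illustration: the action names, the stand-in model, and the log format are not any real product's API.

```python
import hashlib
import json

# Sketch of the "deterministic AI" idea: bind a probabilistic model inside
# a fixed rule set so the pipeline is repeatable and auditable.
# All names here are hypothetical, not a real system.

ALLOWED_ACTIONS = {"open_ticket", "escalate", "ignore"}

def fake_model(prompt):
    """Stand-in for an ML model. A real one would be pinned (fixed weights,
    temperature 0, fixed seed) so identical prompts give identical output."""
    return "escalate" if "critical" in prompt else "open_ticket"

audit_log = []

def deterministic_decide(prompt):
    raw = fake_model(prompt)
    # Rule binding: anything outside the allowed action set is rejected,
    # so the model cannot "make stuff up" in production.
    action = raw if raw in ALLOWED_ACTIONS else "escalate"
    # Auditability: log a content hash of the input and the decision taken.
    audit_log.append({
        "input_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "action": action,
    })
    return action

# Same input -> same output, and every decision leaves an audit record.
assert deterministic_decide("critical outage") == deterministic_decide("critical outage")
print(json.dumps(audit_log[-1], indent=2))
```

The design choice is the one Kevin names: the ML component is kept, but its output only ever reaches production through a deterministic, logged validation layer.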

Bernie Alen

We are all playing around with creating intelligence, but truly, it's been done once before. Whatever faith you believe in, it's been done once before. And what were we all told? You have your free will to go, you know. So once you've created intelligence, putting it in a box is a very, very difficult thing to do, right? But then, if you cannot put it in a box, how can you have a governance function? At some point it's going to say something that's going to embarrass your customer. How can you have a governance function? Some thoughts?

Abhideep Rastogi

So there are a couple of rules we need to apply; that's what I can think of at this point. There are rules like GDPR, and the DPDP Act coming into the picture for India specifically, and we follow those: if a rule is applicable, we apply it; if not, then we may have to think about it from the policy side of the company, whether the rule will be applicable or not. There are a couple of scenarios where, to be very frank, PII data doesn't matter much in India yet. But if you're in the US or somewhere else, it does matter. So we have to take care of those scenarios when we are implementing.

So at our organization, we have to make sure that we are following all the applicable policies and that all the guardrails are in place. That's my understanding.

Bernie Alen

Kenny, any thoughts on governance? I know that you deal with sensor data, which comes from measured things, not made-up stuff for the most part, unless the sensor itself is showing some biology, right? When the sensor misbehaves, what do we call it? We call it sensor biology. You see how we blame the human race for that. But anyway, from that point of view of governance, you live in a less complex space, I think, than people who are generating user content, right? But what are your thoughts on governance?

Kenny Gross

One of the biggest challenges for governance is in applications where there is human-in-the-loop supervisory control of complex processes and systems. This has turned into the biggest challenge this year for defense AI; it's called situational awareness. With situational awareness, you can have a highly trained human operating a ship or an airplane, and there can be false alarms in the process, and that's a problem. We talked about hallucinations with ChatGPT and so forth; in physical systems, it's false alarms on sensors. And I keep going back to the false alarm rate, because as the number of sensors grows, say from six sensors to 600 sensors and up to 50,000 sensors, the probability of false alarms multiplies with it.

And so a pilot of an airplane has been highly trained for every situation; when they test pilots for their license in the big simulators, they throw in a second problem while the pilot is dealing with the first. The challenge from false alarms is that you can have the most highly trained human, and if red lights are going off in different places from false alarms, the human gets to the point of cognitive overload and makes mistakes, and this is long before any hallucinations out of AI. Just one example, and I'm not talking out of school or giving away secret information: the US Navy in the last five years has had three spectacular accidents in broad daylight, with the latest instrumentation on the ships, where they ran into a big oil barge or a fishing vessel. Hundred-million-dollar accidents, and some of them resulted in loss of lives. Well, the human bridge watchers, as they're called, are in a sophisticated control room; if you imagine the cockpit of a 777 and multiply that by a hundred, you have highly trained humans watching all these signals. And if too many things are happening, and too many false alarms are happening, the human gets mixed up and gets to cognitive overload.

We've published half a dozen papers in the international cognitive science conferences around the world and demonstrated how MSET is able to eliminate that problem for monitoring complex processes where a human has to make decisions. And the one technical point I'll make, and this is in a lot of our journal articles: MSET has the lowest mathematically possible false alarm and missed alarm probability for…
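
The point about false alarms multiplying with sensor count can be made precise with a textbook calculation (not from the talk): if each of n independent sensors has per-observation false-alarm probability p, the chance of at least one false alarm somewhere is 1 - (1 - p)^n.

```python
# How per-sensor false alarms compound across a fleet of sensors:
# with n independent sensors, each with false-alarm probability p per
# observation, P(at least one false alarm) = 1 - (1 - p)**n.

def any_false_alarm(p, n):
    return 1 - (1 - p) ** n

for n in (6, 600, 50_000):
    print(f"{n:>6} sensors: P(any false alarm) = {any_false_alarm(1e-4, n):.4f}")
```

Even with a per-sensor rate of 1 in 10,000, at 50,000 sensors a false alarm somewhere is nearly certain on every observation, which is why lowering individual thresholds for earlier warning becomes untenable at scale.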

Bernie Alen

So, Anshu, we've collected all the requirements in this conversation, right? Now we're going to give them all to you to actually solve, right? Because we've said there's a step-by-step process for doing all of this: there's a sensor explosion, there are sovereignty and RCA-type requirements, and there's a user content and data explosion, all of this stuff. Finally, we map it all to: where is the compute to do all this? Because a lot of these algorithms are complex algorithms, right? And current methods are not taking us there, right?

Anshumali Shrivastava

So I'll just add one thing here. Look, if you look at the progression of AI, everything still rests on one of the most powerful methods in humankind: trial and error. Right? How do I know that prompt engineering works? Anyone who has worked on prompt engineering keeps trying, and at some point suddenly sees a prompt solve 80% of the problem. That's a good prompt, and then you hill-climb from there. The whole of AI right now is that we are dealing with a new entity, a new species; we are trying to co-live with them and we don't understand them. It's not very different from my brain: sometimes it works, maybe on Tuesday it doesn't because of whatever my schedule is, but I have learned over a period of time to live with it. I think we are asking some very important questions about governance, guardrails, all of that, and I think we will solve a lot of them with trial and error. But the most important thing is that trial and error should be regretless. If doing a million tries means I'm burning hundreds of millions of dollars, I would be careful. So I will still say the biggest hurdle to advancement is the ease with which I can trial-and-error and experiment, and that ease is directly proportional to how much energy we are burning and how much money we are paying. Imagine if compute were free. Imagine I give you the best model and as many queries as you want. Now imagine the hardest problems you are facing: governance, accuracy. I am pretty sure that if you sit down and hill-climb, make ten agents and let them talk with each other like Claude bots, figure out some strategies, go out to dinner, maybe sleep overnight while these agents keep talking, all of them the most expensive model running at the highest possible latency, I think you will make remarkable progress. But you won't be allowed to do that, and that is why I come back again, and this is why this panel is very important to me: everything at the end of the day boils down to efficiency. It's like raising the tide, because it raises all the boats. All the interesting problems will be solved if you are allowed enough trial and error. That's what my belief is.
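
That hill-climbing loop can be sketched abstractly. Everything below is a toy stand-in: the scorer, the mutations, and the seed are invented, and in practice each trial is an expensive model call, which is exactly the cost barrier described above.

```python
import random

# Toy sketch of trial-and-error prompt hill-climbing: keep whichever
# prompt variant scores best. In reality, score() would be an expensive
# evaluation against a model, so the number of affordable trials is
# bounded by compute cost.

random.seed(0)

def score(prompt):
    """Toy objective: prefer prompts that ask for structure and sourcing."""
    return sum(kw in prompt for kw in ("step by step", "cite", "json"))

def mutate(prompt):
    extras = ["Answer step by step.", "Cite your sources.", "Reply in json."]
    return prompt + " " + random.choice(extras)

def hill_climb(prompt, trials):
    best, best_score = prompt, score(prompt)
    for _ in range(trials):
        candidate = mutate(best)
        if score(candidate) > best_score:
            best, best_score = candidate, score(candidate)
    return best

print(hill_climb("Summarize the incident.", trials=20))
```

The loop never gets worse than its starting point, which is the "regretless" property: the only real cost of a failed trial is the compute it burned.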

Bernie Alen

And that's the thing: the title of the panel is Constrain the World, right? We can't all just mint money. I've tried that; it doesn't work. You know? It's a constrained world, right? So how do we solve this problem? This is the largest conference probably ever. This is not a conference; this is the AI Olympics. Okay? Largest conference ever; people are talking about something like 700,000 people. This is the kind of scale we need to solve for. Think about it in the AI space: every day it's going to be this busy, this heavy, this crowded, with this much data, et cetera. We can't keep throwing expensive infrastructure at the problem all the time.

We've got to get better. We've got to understand that all of these other methods exist, implement them, and have sustainable AI. Right? So, questions from the panel, from everybody? There's enough time for all of you to ask at least one question. How about that? There's a lot of time, so ask a question. Or if you just have an opinion or an input, that is fine. Go for it. Who wants to go first?

Participant

There is this trend of "AI will solve everything" coming into the picture. And you talked about hallucination, and I see a lot of engineering, whether it is automotive, ships, aircraft, naval, where the solution is not always probabilistic; it is also binary. Sensors give zero or one, so you need to decide. So, applying this in the real engineering world, where we have to be deterministic to be safe: you said MSET could solve all these problems, but if you could demystify MSET for me, that would be great.

Kenny Gross

Oh yes. The best way to demystify MSET and the way it works: the conventional approach for monitoring signals from an asset, say an automobile or a locomotive, is to put high and low limits on each variable. If the engine gets too hot, a red light comes on the dashboard. If the fan gets a bad bearing in it and doesn't go fast enough, that will cause a problem. The coolant can get too hot, pressures get too high, RPMs get too low. This has been the conventional approach for decades: high and low limits on thresholds. The problem that will never go away with putting high and low limits on individual signals, which is called univariate monitoring, is that when you're monitoring noisy physics processes, if you want an earlier warning about a small developing problem, you reduce the thresholds.

But then spurious data values will trip the thresholds, and you're shutting down a locomotive in the middle of Kansas with a bunch of cattle on the back, and they send the repair people: oh, there wasn't anything wrong with it, it was a false alarm. It's very expensive to shut down a manufacturing plant's assets and then hear, oh, sorry, it was just a false alarm. And people who take their car in on a Saturday because of the red light: oh, there wasn't really anything wrong, that's the good news, you should be happy, it was just a false alarm. So to avoid the false alarms, they raise the thresholds. But now the system can be severely degraded before you get any alarm, and it's in no way predictive.

So let me say it this way: high and low thresholds are reactive. MSET works fundamentally differently. It learns the patterns of correlation between and among all the signals: some signals go up and down in unison, some go up when others go down. It learns those patterns, and it detects an anomaly in the pattern days, and often weeks, before you'll ever get near a threshold. So that's the fundamental difference.
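
That contrast between univariate thresholds and pattern-based monitoring can be shown in miniature. This is a toy two-signal example, not MSET itself: the "learned" relation and all the numbers are invented for illustration.

```python
# Toy contrast between univariate thresholds and correlation-based
# monitoring. Two signals normally move together: in this invented
# example, coolant_temp ~= 0.01 * fan_rpm + 20. A developing fault
# breaks that pattern while both signals stay inside their individual
# high/low limits.

TEMP_HIGH = 95.0  # univariate alarm threshold on coolant temperature

def predict_temp(rpm):
    """Learned normal-behavior model (hard-coded here for the sketch)."""
    return 0.01 * rpm + 20.0

def univariate_alarm(temp):
    return temp > TEMP_HIGH

def multivariate_alarm(rpm, temp, tol=3.0):
    """Alarm when the signal pair breaks its learned correlation pattern."""
    return abs(temp - predict_temp(rpm)) > tol

# Developing fault: temperature creeps up relative to fan speed.
rpm, temp = 5000.0, 78.0              # normal for this rpm would be ~70.0
print(univariate_alarm(temp))          # False: still far below the 95-degree limit
print(multivariate_alarm(rpm, temp))   # True: the pattern is already broken
```

Real MSET learns a nonlinear similarity model over dozens of correlated signals; the point of the sketch is only that the pairwise pattern breaks long before any single-signal limit is reached.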

Bernie Alen

Do you play music? What do you play? Can you hear me? Okay. Do you play music? Huh? And you play chords? So it's simple, right? When you're playing chords, and let's say you're like me and you do a bad job of it, even an untrained musician can tell that I'm doing a bad job. Why? Those notes need to go together. Independent notes? Maybe not; you cannot figure it out. But if you're playing chords, anybody can say that guy sucks, right? So, same way: looking at the variations of a single variable can only take you so far. But the multivariate part of MSET, where it looks at a joint set of sensors at once, can figure out things that are starting to go bad.

Misery loves company, have you heard that? Similarly, anomalies don't like to be alone; they're always hiding amongst other anomalies, right?

Participant

Okay, yeah. So I came in by accident, but I was really interested to hear what's being discussed, especially MSET and the power of reducing compute and translating it onto the CPU. It really was music to my ears. I have an extended question: what happens with the current ecosystem, where plug-and-play and interoperability exist across all of data engineering, RAG, MCPs and so on? Is it possible to plug and play this? As I understand it, MSET sits at a foundational and fundamental layer, so how does it merge with the current set of LLMs and services?

Bernie Alen

So we have to look at the problems we are trying to solve, okay, and how we build the correct architecture for them. The quick answer to that question is: absolutely. Through sensor augmentation, we are instrumenting big fields with sensors, and we'll certainly bring that data in and run it through techniques like the multivariate technique for anomaly detection, predictive maintenance, and so on. But after that, if you want a control system that goes out and deploys the decision-making, there will be other MCP-based solutions that we develop. So it does all integrate. That's the reason we have to look closely at the problem and make sure that not every problem starts with downloading a large language model.

Participant

I just have a follow-up. Are there any open ecosystems where we can go and see this and plug it into our current infrastructure of GenAI services?

Bernie Alen

Yes, because remember, the STEM practice company is an Oracle Corporation partner, so there is a lot we do in the open source community and with open integration. We can certainly spend time with you to walk you through all of that. Any other questions? Hello everyone. So, what is the most critical risk that policymakers, businesses and users are currently misunderstanding about AI? Avi, you want to take that, and then Anshu, you can go.

Abhideep Rastogi

So it depends on the use case, first of all, and at this point, country to country. If you think about it, the EU AI Act is one of the first such acts released anywhere in the world. Similarly, from a data point of view, the DPDP Act is coming into India, so we have been following up on all the policies being implemented, and we are also thinking ahead of time: an AI act will soon be coming into the picture in India as well as in other countries, so what do we need to do to make sure it is being followed properly? That's what we follow as a process.

Anshumali Shrivastava

I mean, if I understand correctly, the question is: what is the misunderstanding that policymakers have about AI? Sorry?

Participant

No sir, it's not a misunderstanding by the policymakers. Actually, the DPDP Act is from 2023, but even after a long time it has not been enforced or implemented in the current situation. Some IT laws, I know, exist in our country, but they are not sufficient for the cyber-related crimes of the present and future, and certainly not for the AI loopholes, because we cannot guarantee the accuracy of laws with the IT laws alone, especially the old IT laws. The DPDP Act, okay, it's enacted, but what is the enforcement date, and when will it come?

Abhideep Rastogi

So the process, basically, in my understanding, is that it needs to be enforced by the government, and the government, industry, and all the private entities need to come together to make sure it is enforced. I'll give you a very simple example: if you think about iPhone chargers, they had a separate cable, the Lightning cable as we call it, but EU policymakers mandated that it be a USB-C charger. So these mandates come from the top and need to be followed through that process. But it does matter when an organization starts implementing them first: when the government releases something, it can definitely be followed up.

Participant

Hello, good afternoon to all of you. Sir, I am a master's student in mathematics and I want to do research in mathematics. As I've seen, with the advancements in AI, math is also integrated into machine learning; I worked on a project like a cancer detection technique where 70% of it used AI, like neural networks and things like that. So is research in mathematics still relevant and worth it, and in which direction is it going?

Anshumali Shrivastava

By the way, I am a math major. So I think understanding math, even though AI can solve math, is very important, to some extent, for understanding AI. The closest we have come to understanding AI is with formal reasoning, so math is always a good background. We are doing research on a fundamental understanding of the capabilities of LLMs, and reasoning about LLMs with your formal background is a very good research direction.

Participant

Greetings to all the panelists here. I apologize, I wasn't here before, so I couldn't hear the conversation, but as I can see from the questions here, I have one question as it relates to hallucinations. As a person with a legal background, I often find that the citations are wrong and the case law is often wrong. So how far can we rely on AI currently? I know that hallucination will evolve and the problem will be resolved eventually, but at the current timeline, how much can we rely on AI systems? And is it possible that in the future AI will not hallucinate, that there will ever be a hallucination-free AI? I hope I can make my question understood.

Bernie Alen

Okay, yeah, we'll go there; I just wanted Kevin to get that slide up. This is a healthcare use case that we had the fortune of working on with Tata, and we released it a few months ago, and here we have 100% accuracy. It's not the future; it's now. We just do it differently, right? Non-hallucinating methods are completely possible. With that, I'm going to let Ayush and Anshu address the topic as subject experts, but it's not the future. Demand it first, because you are in a profession where, if you come up with some nonsense, the judge is going to throw you out of the room, right? So you don't have that luxury. And for a doctor, it can end up much worse.

Right? So demand that first, but the solutions are here today. Right.

Ayush Gupta

So thanks for the question, first of all. You know, to err is human; to err more is AI. So errors can always be there. Now, what are the scopes of error, and how do we reduce them? One, you should have a proper understanding of things: the system should know about your context and all that. Then the thinking process should be auditable: what sub-steps were taken? So as a responsible user, I can always see the reasons it got to that answer; maybe it made a mistake in one of its thinking steps. Then accuracy: it is very difficult for a probabilistic system to be 100% accurate, but it can still be 100% reliable. Maybe it is 95% accurate, but for the 5% of the time it is wrong, we are able to tell 100% of the time that this is probably wrong, that you need to double-check or have an expert involved in auditing this answer. So 100% reliability is definitely achievable; we just need the right processes, thinking, and validations in place to make sure we can really trust the answer, because it is really critical to take actions on.
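
That distinction between accuracy and reliability is essentially selective prediction: answer only when confident, otherwise flag for expert review. A minimal sketch, with the confidence floor and messages invented for illustration:

```python
# Sketch of the "95% accurate but 100% reliable" idea: a selective
# predictor that answers only when confident and otherwise defers to a
# human expert. The threshold and confidence values are illustrative.

CONFIDENCE_FLOOR = 0.9

def answer_or_defer(answer, confidence):
    if confidence >= CONFIDENCE_FLOOR:
        return answer
    return f"NEEDS REVIEW: '{answer}' (confidence {confidence:.2f})"

print(answer_or_defer("Section 12 applies.", 0.97))
print(answer_or_defer("Cite Smith v. Jones.", 0.55))
```

The system's coverage drops (some queries go to a human), but every answer that is released unreviewed meets the confidence bar, which is the reliability guarantee being described.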

Bernie Alen

So, Anshu, if you can address some of the fundamentals about why these hallucinations happen and why domain -specific training avoids that.

Anshumali Shrivastava

So, let's think about hallucination. Prior systems were non-hallucinating systems; they were like search. By the way, humans hallucinate: if I ask two people to describe the exact same incident, sitting in different rooms, they will give different explanations. Right? So the human mind is fundamentally a hallucinating mind. In fact, LLMs became LLMs because we focused on prompt completion, and prompt completion comes from psychology, where our mind has a tendency to fill in. That is how you get prompt completion and go beyond search. So search is non-hallucinating, and LLMs have to be hallucinating, because they have to be intelligent and smart. So again, this goes back to what Bernie was saying: biology.

Right? So they are like humans. And that also leads to the answer: how do we increase our reliability on humans? Well, you train them, right? And you rely not just on one, but on a committee of experts, and then you hold debates and discussions. You have multiple LLMs that debate with each other, right? These are the standard ways. In fact, you can also show it mathematically (we have a student here who does mathematics): if I have a way to reduce the probability of error by some delta, then I can run that process in a cycle, keep reducing the probability of hallucination, and reach nearly hallucination-free output.

But again, coming back to it, you have to run a lot of LLMs, and that's a lot of cost. The barrier, again, is the cost. Sorry.
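
The cycle argument above can be made concrete with a simple illustration (the numbers are invented, not from the talk): if each additional review pass catches a fraction delta of the remaining errors, k passes leave an error probability of p0(1 - delta)^k, which falls below any target for large enough k, at the price of k extra model calls.

```python
import math

# If each review cycle (another LLM pass, a committee debate, etc.)
# independently catches a fraction `delta` of remaining errors, then k
# cycles leave an error probability p0 * (1 - delta)**k, which can be
# driven below any target -- at the cost of k more model calls.

def cycles_needed(p0, delta, target):
    """Smallest k such that p0 * (1 - delta)**k <= target."""
    return math.ceil(math.log(target / p0) / math.log(1 - delta))

k = cycles_needed(p0=0.05, delta=0.5, target=1e-6)
print(k, 0.05 * 0.5 ** k)  # 16 cycles suffice in this example
```

This is exactly the trade-off named above: near-zero hallucination is mathematically reachable, but the number of cycles, and so the compute bill, grows with how low you want the residual error to be.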

Bernie Alen

Wonderful. Any other question? That side of the room. Okay. Any side of the room?

Participant

Hi everyone, I'm working in an IT company, and I should be loud. Hello everyone, am I audible now, clear and loud? That's the only tone I have; I don't know how to be louder. Okay, is it fine now? I can't be any louder. Okay, so my question is: yes, AI, like any technology, helps the majority as we grow, right? AI is solving problems and in the future is going to solve a lot of problems, giving us industrial solutions, speeding up the software solutions we are currently working on, and helping us in a variety of areas. My question is about the students who are in school: we have ChatGPT, Gemini, all the AI tools there.

And so it's very easy for students in school: they can do their assignments in a minute or a few seconds. So how is it helping the students? I don't know whether it's the correct question or not, but are there any steps taken by the government, or by the great leaders of our country and around the world? Are there any obligations we apply to students not to use such tools? Because it's free, of course, and available over the Internet; they can do their assignments in a minute.

So, I don't know. For me, it's basically about the college students and the industry; for employees, it's helping us. But how is it helping the students? Because there have been no academic changes; I don't know whether the schools are making any curriculum changes in their syllabus. So, yeah, any thoughts on that?

Bernie Alen

So, did you all get the question? Because it's a profound and important and deep question. Are we screwing up the children, is what she's asking, right? By allowing them to quickly come up with anything. So, I'd love to hear your take. Why don't you go first?

Ayush Gupta

…tasks from AI; otherwise, you know, it's the same kind of journey we've had with calculators. Everyone knew how to multiply and divide many-digit numbers until they started using the calculator, and now even for simple additions you go to the calculator. So one, it's personally on us how much we start delegating to AI and lose touch of it. Then second, all these educators, the pedagogies that form around the use of AI for education, the careers that start forming in it: they will themselves metamorphose into what AI means in the education space.

Anshumali Shrivastava

I mean, this is a question that every university is asking, and as you said, it's a profound question, and the partial answer has already been given by Ayush. There are certain skill sets, right? If I want you to know addition and subtraction, you should not use calculators. But once you have a basic feeling for it, the problem is not about using the calculator; the problem is what you can do with that calculator. Problem solving never goes away, you see what I am saying? Imagine AI makes everybody 10x better; then 10x better is the average, and we will now aspire for something more. Whatever is average is what AI can do, and going beyond that will require ingenuity and creativity. So I agree that the education system needs to transform, and we are also learning as we go how to transform it. But the goal will always be: can we solve problems that we cannot solve otherwise? And that will require us to always think out of the box. It's still an early stage, but a lot of people are thinking about it and talking about it, and as I said, it will start getting there.

Bernie Alen

That's a very profound question. I have an 8-year-old, so I worry about that every day, but I hope I'm doing the right thing by letting him play with whatever AI he wants to play with. But we are 10 minutes over time, I'm told, so I need to apologize to the next session. But go ahead, you had a question; we'll make that the last question.

Participant

Good evening to everyone. My question is related to AGI, because we are using AI right now, so thinking of the next step: is there any relation between AGI and quantum computers? Is it that AGI will only be possible after quantum computers, or with the current processors?

Bernie Alen

That's a wonderful question, but it's a two-hour topic, and I don't know why you waited until the last minute to ask it. As a STEM practice company, we are launching our first quantum enablement center at two sites in Chattanooga, Tennessee, and we hope we can start launching quantum computers in India as well, because we are thinking through specific problems we can solve with quantum computers today and getting them over there. But that is such a big topic. Thinking about reducing compute needs and thinking about reducing cost is the way we are going, and quantum computers use only a fraction of the energy of a comparable GPU-based machine that tries to simulate quantum processing, right?

So yes, there is a real energy advantage to going that route, but it is a very deep and very profound topic. Thanks for bringing it up, because we had not brought quantum up ourselves. With that, we have to close, because apparently we are stealing time from the next panel now, which is a difficult thing to do. I see a hand; I'll just talk to you one-on-one outside. Thank you everybody for coming, and I thank the panel. By the way, we have a stall, a booth, whatever we call it: hall 6, stall 100. Easy to remember, 6-100. Please come there, get our material, and get connected so we can keep the conversation going. Thank you very much.

Related Resources: Knowledge base sources related to the discussion topics (33)
Factual Notes: Claims verified against the Diplo knowledge base (3)
Confirmed (high)

“GPU‑based infrastructure creates expensive, high‑heat, high‑failure‑rate systems with limited supply”

The knowledge base explicitly describes GPU-based infrastructure as expensive, generating high heat, having a high failure rate and limited supply [S12].

Additional Context (high)

“The wasteful practice of hoarding GPUs is linked to a broader environmental threat, as high‑heat, high‑failure‑rate GPU clusters demand excessive power, water and cooling, hurting the planet”

Additional sources highlight the environmental impact of large AI compute: extensive electricity use and cooling requirements [S14] and the heavy water and energy consumption of data centres [S95].

Confirmed (medium)

“Both speakers critique current GPU‑centric approaches, with Bernie advocating moving away from this model”

The knowledge base notes that both speakers criticize GPU-centric AI development and that Bernie Alen advocates moving away from it [S1].

External Sources (104)
S1
AI Without the Cost Rethinking Intelligence for a Constrained World — Ayush Gupta, Abhideep Rastogi, Bernie Alen
S2
AI Without the Cost Rethinking Intelligence for a Constrained World — He has a patent for every day of the year. So his patent count is approaching 365. So that’s Kenny Gross, a master machi…
S3
https://dig.watch/event/india-ai-impact-summit-2026/ai-without-the-cost-rethinking-intelligence-for-a-constrained-world — He has a patent for every day of the year. So his patent count is approaching 365. So that’s Kenny Gross, a master machi…
S5
WS #280 the DNS Trust Horizon Safeguarding Digital Identity — – **Participant** – (Role/title not specified – appears to be Dr. Esther Yarmitsky based on context)
S6
Leaders TalkX: Moral pixels: painting an ethical landscape in the information society — – **Participant**: Role/Title: Not specified, Area of expertise: Not specified
S7
Leaders TalkX: ICT application to unlock the full potential of digital – Part II — – **Participant**: Role/Title not specified, Area of expertise not specified
S8
AI Without the Cost Rethinking Intelligence for a Constrained World — -Bernie Alen: Led advanced technologies market development for Oracle; Founder/Leader of STEM Practice Company (Oracle C…
S9
S11
https://dig.watch/event/india-ai-impact-summit-2026/partnering-on-american-ai-exports-powering-the-future-india-ai-impact-summit-2026 — I totally agree. I think one thing that, you know, when it comes to the stack, is there are multiple parts of it. There …
S12
WS #208 Democratising Access to AI with Open Source LLMs — Despite the optimism, significant challenges were acknowledged. Daniele Turra pointed out that substantial computing res…
S13
Powering AI _ Global Leaders Session _ AI Impact Summit India Part 2 — And as Nathan mentioned, that some of our states can have a power usage efficiency or effectiveness, which can be extrao…
S14
Green AI and the battle between progress and sustainability — AI is increasingly recognised for its transformative potential and growing environmental footprint across industries. Th…
S15
Quantum computing — Quantum computing promises to significantly enhance the capabilities and accuracy of AI, unlocking the next generation of …
S16
Record investment in quantum computing driven by AI growth — Funding for quantum computing has reached unprecedented levels, with startups in the sector securing around $1.5 billion i…
S17
Quantum’s Black Swan — This breakthrough has been achieved by companies like Alphabet, Amazon, NVIDIA, and AMD. A future hybrid computing infra…
S18
The Foundation of AI Democratizing Compute Data Infrastructure — Okay, so first of all, I think the computing requirements for training modern AI systems is temporary. It’s temporary be…
S19
AI for Safer Workplaces &amp; Smarter Industries Transforming Risk into Real-Time Intelligence — Piyush Nangru articulated the transformation in educational terms, stating that “coding is no longer a skill. It’s table…
S20
AI-Powered Chips and Skills Shaping Indias Next-Gen Workforce — The focus should be on developing broad talent with critical thinking and problem-solving skills rather than narrow tech…
S21
AI Transformation in Practice_ Insights from India’s Consulting Leaders — Future skills requirements emphasise working with technology rather than coding, with increasing importance placed on ps…
S22
The fading of human agency in automated systems — In practice, however, being “in the loop” frequently means supervising outputs under conditions that make meaningful jud…
S24
Navigating the Double-Edged Sword: ICT’s and AI’s Impact on Energy Consumption, GHG Emissions, and Environmental Sustainability — Antonia Gawel:I mean, I think very much a focus on decarbonization of the power sector is a critical input and a signifi…
S25
Discussion Report: Sovereign AI in Defence and National Security — Faisal argues that regulatory differences, such as the EU’s GDPR privacy regulations, can inadvertently strengthen defen…
S26
AI governance struggles to match rapid adoption — Accelerating AI adoption is exposing clear weaknesses in corporate AI governance. Research shows that while most organisat…
S27
Planetary Limits of AI: Governance for Just Digitalisation? | IGF 2023 Open Forum #37 — The digital transformation increasingly contributes to greenhouse gas (GHG) emissions. For example, generative artificia…
S28
WS #288 An AI Policy Research Roadmap for Evidence-Based AI Policy — A single governance structure would probably get captured, which is the problem of regulatory capture and market concent…
S29
Comprehensive Discussion Report: AI’s Existential Challenge to Human Identity and Society — Sociocultural | Human rights Tracey expresses concern that over-reliance on AI for decision-making and problem-solving …
S30
IGF 2024 Global Youth Summit — Umut Pajaro Velasquez: Okay. everyone on good day or good evening wherever you are. When it comes to decisions on how …
S31
AI for Social Empowerment_ Driving Change and Inclusion — The educational implications are immediate and severe. Teachers and students are increasingly relying on AI to perform c…
S32
How to make AI governance fit for purpose? — Shan emphasized international collaboration through the ITU and global standards development, expressing concern about p…
S33
Planetary Limits of AI: Governance for Just Digitalisation? | IGF 2023 Open Forum #37 — Digital inclusion and transformation are crucial for global development. However, environmental concerns must be conside…
S34
Smaller Footprint Bigger Impact Building Sustainable AI for the Future — AI’s energy demands. Threaten to outpace green energy progress. Model providers face a stark reality. AI’s energy needs …
S35
Green AI and the battle between progress and sustainability — AI is increasingly recognised for its transformative potential and growing environmental footprint across industries. Th…
S36
Global Internet Governance Academic Network Annual Symposium | Part 3 | IGF 2023 Day 0 Event #112 — Adio Adet Dinika:All right. Wonderful. Thanks for that. So, quickly moving on to the Crimean postcolonial critique, basi…
S37
NRIs MAIN SESSION: DATA GOVERNANCE — Data sovereignty is under respective laws and protection agreements Foundational infrastructures such as broadband, dig…
S38
Data governance — The fact that free flow of data across national and corporate borders facilitates economic development and contributes t…
S39
AI Without the Cost Rethinking Intelligence for a Constrained World — LLMs has to be hallucinating because it has to be intelligent and smart Real-world demonstration showed 100% accuracy w…
S40
When language models fabricate truth: AI hallucinations and the limits of trust — AI has come far from rule-based systems and chatbots with preset answers. Large language models (LLMs), powered by vast a…
S41
How nonprofits are using AI-based innovations to scale their impact — “it saves them really a lot of time because before such a conversation, they would have to look at the last four reports…
S42
Mauritius Artificial Intelligence Strategy — For example, in heavy machineries, the shafts and ball bearings of heavy rollers are constantly under pressure and to pr…
S43
https://dig.watch/event/india-ai-impact-summit-2026/ai-without-the-cost-rethinking-intelligence-for-a-constrained-world — So the energy savings come from the three orders of magnitude lower compute costs. We’ve done four presentations with NV…
S44
AI in schools: The reality is messier than the solutions — But adaptation didn’t happen automatically or without effort. It required educators to rethink curricula, develop new pe…
S45
Driving Enterprise Impact Through Scalable AI Adoption — This framework provided a strategic roadmap for human adaptation, suggesting that individuals and institutions should fo…
S46
Generative AI: Steam Engine of the Fourth Industrial Revolution? — It is evident that there is an urgent need for partnerships with governments to modify basic education in order to meet …
S47
NextGen AI Skills Safety and Social Value – technical mastery aligned with ethical standards — This comment identifies a systemic problem where educational institutions cannot keep pace with technological change due…
S48
Microsoft expands software security lifecycle for AI-driven platforms — AI is widening the cyber risk landscape and forcing security teams to rethink established safeguards. Microsoft has upda…
S49
AI-Driven Enforcement_ Better Governance through Effective Compliance &amp; Services — So there is a lot of apprehension in people’s mind that it is a probabilistic, non-deterministic model, there may…
S50
Part 8: ‘Maths doesn’t hallucinate: Harnessing AI for governance and diplomacy’ — Take the following video as an example: The word ‘hallucination’ traditionally refers to a sensory human experience tha…
S51
Open Forum #64 Local AI Policy Pathways for Sustainable Digital Economies — Economic | Development Four-channel framework showing automation vs. complementation paths, with emphasis on right-hand…
S52
Policy Meets Tech: Quantum computing — The ‘Policy Meets Tech’ series is organised by Diplo, with the support of the US Permanent Mission to the UN in Geneva, …
S53
Quantum Technologies: Navigating the Path from Promise to Practice — First commercial quantum applications will focus on simple materials/molecules, finance optimization, and specific algor…
S54
AI as critical infrastructure for continuity in public services — Resilience, data control, and secure compute are core prerequisites for trustworthy AI. Systems must stay operational an…
S55
Democratizing AI Building Trustworthy Systems for Everyone — “of course see there would be a number of challenges but i think as i mentioned that one doesn’t need to really control …
S56
Pre 12: Resilience of IoT Ecosystems: Preparing for the Future — As AI becomes integrated into IoT systems, proper governance frameworks are essential to ensure ethical and trustworthy …
S57
WS #208 Democratising Access to AI with Open Source LLMs — The speaker mentions the need for GPU infrastructure and the high costs associated with it.
S58
AI Without the Cost Rethinking Intelligence for a Constrained World — This comment reframes the entire AI infrastructure discussion by suggesting the industry has abandoned fundamental engin…
S59
AI Automation in Telecom_ Ensuring Accountability and Public Trust India AI Impact Summit 2026 — Currently, it is still a significant amount that is being incurred upon. So the cost optimization, a larger chunk of cos…
S60
https://dig.watch/event/india-ai-impact-summit-2026/ai-without-the-cost-rethinking-intelligence-for-a-constrained-world — And actually, it’s not the CPU. It’s the algorithm. And the reason is context windows scale quadratically in attention. …
S61
Book launch: What changes and remains the same in 20 years in the life of Kurbalija’s book on internet governance? — 3. **Processing Architecture Shift**: The transition from CPU-based to GPU-based computing, fundamentally altering how c…
S62
Open Forum #27 Make Your AI Greener a Workshop on Sustainable AI Solutions — Infrastructure | Development | Economic Ioanna Ntinou described a practical technique where a large, accurate model tea…
S63
Navigating the Double-Edged Sword: ICT’s and AI’s Impact on Energy Consumption, GHG Emissions, and Environmental Sustainability — Antonia Gawel:I mean, I think very much a focus on decarbonization of the power sector is a critical input and a signifi…
S64
Data governance — Data management improvements are not without dangers for data governance. While AI algorithms can identify and mitigate b…
S65
AI governance struggles to match rapid adoption — Accelerating AI adoption is exposing clear weaknesses in corporate AI governance. Research shows that while most organisat…
S66
AI Meets Cybersecurity Trust Governance &amp; Global Security — Not because that software is insecure, but because the security of software is often about how software is designed, how…
S67
Planetary Limits of AI: Governance for Just Digitalisation? | IGF 2023 Open Forum #37 — The digital transformation increasingly contributes to greenhouse gas (GHG) emissions. For example, generative artificia…
S68
AI for Social Empowerment_ Driving Change and Inclusion — The educational implications are immediate and severe. Teachers and students are increasingly relying on AI to perform c…
S69
Comprehensive Discussion Report: AI’s Existential Challenge to Human Identity and Society — Sociocultural | Human rights Tracey expresses concern that over-reliance on AI for decision-making and problem-solving …
S70
Generative AI and Synthetic Realities: Design and Governance | IGF 2023 Networking Session #153 — However, it is important to note that there is a potential risk associated with the use of such systems, as they may pro…
S71
AI for Safer Workplaces &amp; Smarter Industries Transforming Risk into Real-Time Intelligence — The panel reached consensus on the need for fundamental educational reform to prepare students for an AI-integrated futu…
S72
Driving Enterprise Impact Through Scalable AI Adoption — Educational institutions need to adapt curricula to emphasize critical thinking, question-asking, and evaluation skills …
S73
Rolling out EVs: A Marathon or a Sprint? — Aasheim’s argument is supported by the statement that the planet is metaphorically “on fire,” highlighting the severity …
S74
https://dig.watch/event/india-ai-impact-summit-2026/indias-ai-future-sovereign-infrastructure-and-innovation-at-scale — Absolutely, Ankit, just trying to, this is something which I know two years back when we said that I’m putting 8000 GPUs…
S75
https://dig.watch/event/india-ai-impact-summit-2026/building-climate-resilient-systems-with-ai — It looks like the slides are not there. There’s a certain, turning on the screen. There it goes. I will say that while w…
S76
Green AI and the battle between progress and sustainability — AI is increasingly recognised for its transformative potential and growing environmental footprint across industries. Th…
S77
Open Forum #64 Local AI Policy Pathways for Sustainable Digital Economies — Wai Sit Si Thou: Just to double-check whether you can see my screen and hear me well. Yes. Yes. Okay, perfect. So my sha…
S78
AI, Data Governance, and Innovation for Development — The overall tone was optimistic and solution-oriented, with speakers focusing on practical ways to overcome obstacles th…
S79
WS #302 Upgrading Digital Governance at the Local Level — The discussion maintained a consistently professional and collaborative tone throughout. It began with formal introducti…
S80
WS #236 Ensuring Human Rights and Inclusion: An Algorithmic Strategy — The tone of the discussion was largely serious and concerned, given the gravity of the issues being discussed. However, …
S81
IGF Intersessional Work Session: DC — This comment reframes the entire discussion by suggesting that the solution isn’t to create new governance structures bu…
S82
Closing Session  — The tone throughout the discussion was consistently formal, collaborative, and optimistic. It maintained a celebratory y…
S83
Open Forum #9 Digital Technology Empowers Green and Low-carbon Development — Du Guimei: distinguished guests and friends from around the world. I’m the principal from Tsinghua University Primary Sc…
S84
Dynamic Coalition Collaborative Session — Jonathan Cave: Thank you very much, Favre, can I be heard? It asks if I want to unmute, okay, that’s fine. Okay, yes, on…
S85
Upskilling for the AI era: Education’s next revolution — The tone is consistently optimistic, motivational, and action-oriented throughout. The speaker maintains an enthusiastic…
S86
Host Country Open Stage — The tone throughout the discussion was consistently optimistic and solution-oriented. All presenters maintained a profes…
S87
(Interactive Dialogue 4) Summit of the Future – General Assembly, 79th session — The overall tone was one of urgency and determination. Many speakers emphasized that “the future starts now” and stresse…
S88
Ensuring Safe AI_ Monitoring Agents to Bridge the Global Assurance Gap — The tone was collaborative and solution-oriented throughout, with participants acknowledging both the urgency and comple…
S89
Closing session — The speakers advocate for proactive action to actively impact people’s lives and empower individuals. They reflect on th…
S90
The Innovation Beneath AI: The US-India Partnership powering the AI Era — This comment introduces a contrarian perspective amid the general enthusiasm for massive AI infrastructure investments. …
S91
AI investment shows strong momentum beyond bubble fears — AI investment is not showing signs of a speculative bubble, according to the Alibaba Group chairman. Instead, he argued at t…
S92
AI Infrastructure and Future Development: A Panel Discussion — This comment cuts through the technical complexity to identify the core economic driver – insatiable demand for intellig…
S93
Intel falls behind in AI race, Nvidia and AMD surge — Once a leader in the computer chip industry, Intel has faced significant challenges adapting to the AI era. Seven years a…
S94
Efforts to improve energy efficiency in high-performance computing for a Sustainable Future — The demand for high-performance computing (HPC) has surged due to technological advancements like machine learning, geno…
S96
https://dig.watch/event/india-ai-impact-summit-2026/indias-roadmap-to-an-agi-enabled-future — You know, these are problems that affect hundreds of millions of people. Then we can build RL environments for them that…
S97
https://dig.watch/event/india-ai-impact-summit-2026/ai-meets-cybersecurity-trust-governance-global-security — which means that if things go wrong, they won’t necessarily go wrong because someone forgot to fix a bug. They’ll go wro…
S98
WSIS Action Line C2 Information and communication infrastructure — AI technologies can help lower the costs of operating networks while improving their efficiency in both urban and rural …
S99
Large Language Models on the Web: Anticipating the challenge | IGF 2023 WS #217 — In conclusion, the analysis thoroughly examines various dimensions surrounding LLMs and their implications. It explores …
S100
The Expanding Universe of Generative Models — Aidan Gomez argues that enhancing the efficiency of large language models relies on increasing compute power. He asserts…
S101
Artificial Intelligence &amp; Emerging Tech — Regarding Large Language Models (LLMs), they are viewed with some degree of scepticism due to their complex mathematical…
S102
Folding Science / DAVOS 2025 — Hassabis notes that relatively simple algorithmic concepts like backpropagation and reinforcement learning have scaled i…
S103
https://dig.watch/event/india-ai-impact-summit-2026/invest-india-fireside-chat — A lot of this role is elite Harvard and MIT guys, and I want them to build what you say. So let me challenge you a littl…
S104
https://dig.watch/event/india-ai-impact-summit-2026/artificial-general-intelligence-and-the-future-of-responsible-governance — That is how we feel it. It’s an evolving area. Let’s see how it happens. That the human portion needs to get more educa…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
B
Bernie Alen
3 arguments · 152 words per minute · 4146 words · 1634 seconds
Argument 1
Overreliance on GPU clusters and neglect of software optimization (Bernie Alen)
EXPLANATION
Bernie argues that AI projects are focused on acquiring as many GPUs as possible while ignoring traditional software‑level optimization techniques that could dramatically reduce infrastructure costs. He stresses that applying well‑known mathematical and software‑optimization methods would allow AI workloads to run on cheaper CPUs, edge devices, or even mobile hardware.
EVIDENCE
He notes that developers are “just running around getting as many GPUs as possible” and that “there is a software optimization step that everybody is skipping”; he points out that reducing algorithmic complexity would enable running models on CPUs, clustered CPUs, edge computers, and mobile phones [6-8][17-20][12-16].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
External sources highlight the high cost, heat generation, and failure rates of GPU-centric AI infrastructure, and discuss the need for alternative, less resource-intensive approaches [S1], [S13], [S14].
MAJOR DISCUSSION POINT
Need for software optimization to cut GPU dependence
AGREED WITH
Anshumali Shrivastava, Kenny Gross
Argument 2
Current AI infrastructure’s high power, heat, and cooling demands pose a planetary risk (Bernie Alen)
EXPLANATION
Bernie warns that the massive power consumption, heat generation, and cooling requirements of large GPU clusters create a serious environmental burden and threaten planetary health. He calls for responsible methods that reduce hardware needs to avoid further damage.
EVIDENCE
He states that “we are creating problems, like we need more power generation, and we are causing harm to the planet” and later emphasizes that AI infrastructure “needs a lot of power, heat, and cooling” which could break down power plants [75-78][310-312].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The environmental impact of AI compute, including large electricity consumption and cooling requirements, is documented in discussions of power usage efficiency and Green AI [S13], [S14], and the critique of GPU-based systems [S1].
MAJOR DISCUSSION POINT
Environmental impact of AI compute
AGREED WITH
Kevin Zane, Ayush Gupta, Abhideep Rastogi
Argument 3
Quantum computing offers a potential low‑energy pathway for next‑generation AI, complementing current efforts (Bernie Alen)
EXPLANATION
Bernie mentions that his company is launching a quantum enablement center and believes quantum computers can solve AI problems with far less energy than GPU‑based simulations, presenting a sustainable future route for AI.
EVIDENCE
He says “we are launching our first quantum enablement center… quantum computers use only a fraction of the energy compared to a similar GPU-simulated machine” [635-638].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Quantum computing is presented as a low-energy alternative for AI, with claims of energy efficiency and growing investment in the field [S15], [S16], and visions of hybrid CPU-GPU-QPU infrastructures [S17].
MAJOR DISCUSSION POINT
Quantum computing as a sustainable AI alternative
A
Anshumali Shrivastava
4 arguments · 183 words per minute · 3031 words · 991 seconds
Argument 1
Dynamic sparsity and new attention mathematics enable CPU‑based inference, cutting hardware needs (Anshumali Shrivastava)
EXPLANATION
Anshumali describes dynamic sparsity, where only the parameters needed for a specific input are computed, and a new attention formulation that reduces the quadratic cost of attention. These techniques allow CPUs to outperform GPUs for very large context windows, eliminating the need for expensive GPU clusters.
EVIDENCE
He defines dynamic sparsity as “pick which ones I need based on my input” and explains that the new attention math on a CPU outperforms GPU-based flash attention when the context exceeds 131 000 tokens, showing a shift from GPU to CPU dominance [102-108][154-158].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The importance of algorithmic advances over hardware, such as dynamic sparsity and attention reformulations that reduce quadratic complexity, is emphasized in analyses of attention scaling and CPU-GPU trade-offs [S3], and the AI memory wall discussion [S1].
MAJOR DISCUSSION POINT
Algorithmic tricks to reduce compute and enable CPU inference
AGREED WITH
Bernie Alen, Kenny Gross
Argument 2
Model parameter growth far outpaces GPU memory and compute, and quadratic attention creates a hard scaling ceiling (Anshumali Shrivastava)
EXPLANATION
He presents evidence that the number of parameters in large language models is increasing faster than GPU memory capacity and compute capability, and that attention’s quadratic complexity prevents latency improvements even with more GPUs, creating a hard ceiling for scaling.
EVIDENCE
He references plots showing parameter count outpacing GPU memory and compute, cites the “AI memory wall” paper, and explains that attention scales quadratically, so adding linear numbers of GPUs cannot keep up [95-98][156-159].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Evidence of parameter growth outpacing GPU memory and the quadratic attention bottleneck is provided in the AI memory wall overview and attention complexity critiques [S1], [S3].
MAJOR DISCUSSION POINT
Hardware scaling limits due to model growth and attention complexity
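The quadratic ceiling is visible from the attention computation itself: the score matrix has one entry per pair of tokens, so doubling the context quadruples the work no matter how many devices share it. A minimal back-of-the-envelope sketch (the dimensions are arbitrary and constant factors are ignored):

```python
def attention_flops(n_tokens, d_model):
    """Rough FLOP count for one self-attention pass: building the
    n x n score matrix (QK^T) costs ~n^2 * d, and applying the
    scores to V costs another ~n^2 * d. The n^2 term is the point."""
    return 2 * n_tokens**2 * d_model

d = 128
for n in (100_000, 200_000, 400_000):
    print(f"{n:>7} tokens -> {attention_flops(n, d):.3e} FLOPs")

# Doubling the context quadruples the attention work, so a linear
# increase in the number of GPUs cannot hold latency constant.
assert attention_flops(200_000, d) == 4 * attention_flops(100_000, d)
```

This is why reformulating the attention mathematics, rather than adding hardware, is presented as the way past the ceiling.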
Argument 3
Extending context windows is essential for complex, common‑sense reasoning and agentic workflows (Anshumali Shrivastava)
EXPLANATION
Anshumali argues that larger context windows allow models to retain and reason over many intermediate steps, which is crucial for tasks requiring common sense and complex automation. He notes that current context sizes have plateaued, limiting future capabilities.
EVIDENCE
He explains the concept of a context window, gives the example of solving an Olympiad problem versus a simple addition, shows that current windows plateau around 1 million tokens and that larger windows are needed for sophisticated workflows [124-138].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The need for larger context windows to enable advanced reasoning is discussed alongside the limitations of current attention mechanisms and the benefits of new attention math for long contexts [S1], [S3].
MAJOR DISCUSSION POINT
Need for larger context windows to enable advanced reasoning
Argument 4
Emphasis should shift to problem‑solving and creativity beyond what AI can automate (Anshumali Shrivastava)
EXPLANATION
He suggests that while AI can make users ten times more productive, true progress will come from focusing on problem‑solving and creativity that remain beyond AI’s current capabilities. This shift is necessary to maintain human value in an AI‑augmented world.
EVIDENCE
He states “AI makes everybody 10x better… problem solving never goes away” and emphasizes the need for ingenuity and creativity beyond what AI can automate [629-634].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Calls for focusing on applied intelligence, creativity, and problem-solving rather than pure AI automation align with perspectives on applied intelligence and skill development [S19], [S20], [S21].
MAJOR DISCUSSION POINT
Human creativity versus AI automation
K
Kenny Gross
4 arguments · 130 words per minute · 1616 words · 744 seconds
Argument 1
MSET achieves up to 2,500× reduction in compute cost for anomaly detection (Kenny Gross)
EXPLANATION
Kenny’s MSET technology can slash the compute required for anomaly‑detection use cases by roughly 2,500 times, delivering dramatic cost savings while maintaining high accuracy.
EVIDENCE
Bernie cites the reduction: “cost of running the use case was 1 over 2,500” and Kenny earlier describes MSET’s ability to detect failures early, implying massive compute savings [199-203][181-184].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The MSET technique is described as delivering real-time anomaly detection without GPUs, achieving massive compute cost reductions [S1].
MAJOR DISCUSSION POINT
Massive compute‑cost reduction via MSET
AGREED WITH
Bernie Alen
Argument 2
MSET provides massive energy savings and early failure prediction, reducing data‑center downtime (Kenny Gross)
EXPLANATION
Kenny explains that MSET lowers compute by three orders of magnitude, predicts hardware failures weeks in advance, and thus avoids costly downtime in data centers, leading to significant energy and operational savings.
EVIDENCE
He notes “energy savings come from the three orders of magnitude lower compute costs” and that MSET can detect failure mechanisms “days and often weeks in advance” preventing downtime [178-184][185-188].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
MSET’s low compute footprint and ability to predict failures weeks in advance are highlighted as sources of energy savings and reduced downtime [S1].
MAJOR DISCUSSION POINT
Energy efficiency and reliability through early prognostics
AGREED WITH
Bernie Alen
Argument 3
Governance challenges arise from false alarms in human‑in‑the‑loop systems, leading to cognitive overload (Kenny Gross)
EXPLANATION
Kenny points out that traditional high‑low threshold monitoring generates many false alarms, which overload trained operators and can cause mistakes. MSET’s multivariate approach dramatically reduces false alarms, easing human‑in‑the‑loop governance.
EVIDENCE
He describes how false alarms cause cognitive overload for pilots and operators, and how MSET learns correlations among signals to detect anomalies early, avoiding spurious alerts [429-440][504-508].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The problem of false alarms causing cognitive overload and the benefit of multivariate approaches to reduce them are discussed in analyses of human-in-the-loop automation and false-alarm mitigation [S22], [S1].
MAJOR DISCUSSION POINT
Reducing false alarms to improve human‑in‑the‑loop governance
Argument 4
MSET’s anomaly‑detection pipeline delivers 2,500× cost reduction and predictive maintenance capabilities (Kenny Gross)
EXPLANATION
The pipeline combines MSET’s early‑anomaly detection with predictive maintenance, delivering both huge cost reductions and the ability to service assets before failures occur.
EVIDENCE
He mentions the 2,500× cost reduction claim and explains that MSET can detect anomalies “days and often weeks before” reaching thresholds, enabling predictive maintenance [199-203][244-254].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The combined pipeline of anomaly detection and predictive maintenance delivering 2,500× cost reduction is documented in the MSET presentation [S1].
MAJOR DISCUSSION POINT
Predictive maintenance enabled by ultra‑low‑cost anomaly detection
Ayush Gupta
5 arguments · 166 words per minute · 1364 words · 490 seconds
Argument 1
Alternative architectures can deliver cheap inference without GPUs (Ayush Gupta)
EXPLANATION
Ayush states that by hosting their own models and training small language models (SLMs), they can run inference at a fraction of the GPU cost, making AI services affordable even at massive scale.
EVIDENCE
He points out that “the major cost driver is the GPU” and that they host their own models and train SLMs to achieve cheap inference, aiming for “1 rupee per conversation” [282-284][285-286].
MAJOR DISCUSSION POINT
Low‑cost inference through alternative model architectures
Argument 2
Enterprise data sovereignty and privacy demand strict compliance and contextual grounding (Ayush Gupta)
EXPLANATION
Ayush emphasizes that enterprise AI solutions must keep proprietary data in‑house, comply with privacy regulations, and be contextually aware of business processes, unlike generic public models such as ChatGPT.
EVIDENCE
He explains that ChatGPT lacks enterprise data, cannot be connected due to compliance and privacy concerns, and stresses the need for contextual grounding within the organization [362-382].
MAJOR DISCUSSION POINT
Data sovereignty and regulatory compliance for enterprise AI
AGREED WITH
Bernie Alen, Kevin Zane, Abhideep Rastogi
Argument 3
An agentic data‑analysis platform unifies enterprise data sources, reducing the need for multiple data warehouses (Ayush Gupta)
EXPLANATION
Ayush describes a platform that lets business users converse with a system that directly accesses diverse data sources (tables, PDFs, images) without building separate data‑warehouse layers, thereby simplifying architecture and cutting costs.
EVIDENCE
He outlines that the platform enables “static dashboards” to be replaced by conversational analytics, removing the need for bronze/silver/gold tables and allowing direct connections to native systems of record [270-274].
MAJOR DISCUSSION POINT
Unified, agentic data analytics reducing data‑warehouse complexity
Argument 4
Educational curricula must evolve to incorporate AI tools while preserving fundamental skills (Ayush Gupta)
EXPLANATION
Ayush draws a parallel with calculators, arguing that curricula should integrate AI responsibly, ensuring students retain core competencies while learning to leverage AI as a tool.
EVIDENCE
He states “to err is human, to err more is AI” and discusses the balance between using AI and maintaining basic skills, likening the situation to the adoption of calculators [625-632].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Calls for integrating AI tools while preserving core competencies echo broader discussions on applied intelligence, critical thinking, and skill shifts in education and the workforce [S19], [S20], [S21].
MAJOR DISCUSSION POINT
Balancing AI integration with foundational skill preservation in education
AGREED WITH
Participant, Bernie Alen
Argument 5
Analogous to calculators, AI can erode basic competencies if over‑relied upon (Ayush Gupta)
EXPLANATION
He warns that excessive reliance on AI may cause students to lose basic abilities, just as calculators reduced manual arithmetic practice, highlighting the need for mindful adoption.
EVIDENCE
He repeats the calculator analogy, emphasizing that over-reliance can diminish fundamental skills [625-632].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The analogy of AI to calculators and concerns about skill erosion are reflected in the same educational and workforce skill literature [S19], [S20], [S21].
MAJOR DISCUSSION POINT
Risks of over‑reliance on AI in learning
AGREED WITH
Participant, Bernie Alen
Kevin Zane
3 arguments · 143 words per minute · 385 words · 160 seconds
Argument 1
Continued scaling without algorithmic breakthroughs threatens sustainability (Kevin Zane)
EXPLANATION
Kevin argues that simply adding more GPUs will not keep pace with the exponential growth of model sizes, leading to unsustainable energy, water, and power consumption.
EVIDENCE
He notes that “we are rapidly approaching a hard limit on how scalable GPU-based infrastructure is” and that this has a large impact on the environment, water, and power [303-306].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The unsustainable trajectory of GPU scaling, its environmental impact, and the need for greener AI are highlighted in Green AI analyses and power-usage discussions [S13], [S14], [S1].
MAJOR DISCUSSION POINT
Sustainability limits of hardware‑centric scaling
AGREED WITH
Bernie Alen, Kenny Gross
Argument 2
Efficient algorithms lower power and water consumption, addressing AI’s environmental footprint (Kevin Zane)
EXPLANATION
He highlights that algorithmic improvements can reduce the power and water needed for AI workloads, mitigating the environmental footprint of AI deployments.
EVIDENCE
He links better algorithms to lower power and water usage, stating that this addresses AI’s environmental impact [306-311].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Algorithmic efficiency as a lever to reduce electricity and water usage in AI workloads is noted in Green AI and sustainability literature [S14].
MAJOR DISCUSSION POINT
Algorithmic efficiency as a path to greener AI
Argument 3
Deterministic AI is required for production use to eliminate hallucinations and ensure auditability (Kevin Zane)
EXPLANATION
Kevin defines deterministic AI as a system that produces the same output for the same input, eliminating hallucinations and enabling auditability, which is essential for high‑risk domains like cybersecurity and healthcare.
EVIDENCE
He describes deterministic AI, its need to avoid hallucinations, and its role in providing consistent, auditable results [390-404].
MAJOR DISCUSSION POINT
Need for deterministic, auditable AI in production
AGREED WITH
Bernie Alen, Ayush Gupta, Abhideep Rastogi
DISAGREED WITH
Anshumali Shrivastava, Bernie Alen
Participant
2 arguments · 138 words per minute · 925 words · 402 seconds
Argument 1
Hallucination risks in legal and medical domains necessitate reliability safeguards (Participant)
EXPLANATION
The participant raises concerns that AI hallucinations could cause serious errors in legal and medical contexts, demanding mechanisms that ensure reliability and correctness.
EVIDENCE
The participant asks about hallucination risks in legal and medical domains, and Bernie responds by citing a 100 % accuracy use case and stating that non-hallucinating methods are already available [554-556][562-566].
MAJOR DISCUSSION POINT
Ensuring reliable, non‑hallucinatory AI for high‑stakes domains
Argument 2
Concerns about AI enabling academic cheating highlight the need for policy and pedagogical responses (Participant)
EXPLANATION
The participant worries that students can use AI tools to complete assignments instantly, prompting a need for educational policies and pedagogical adjustments to address academic integrity.
EVIDENCE
She asks whether there are policies to limit AI use in schools, and Bernie acknowledges the concern, noting the profound nature of the question and the need for policy discussion [606-618][619-624].
MAJOR DISCUSSION POINT
Academic integrity and policy response to AI‑assisted cheating
Abhideep Rastogi
3 arguments · 153 words per minute · 1187 words · 462 seconds
Argument 1
Adherence to GDPR, DPDPI, and emerging AI regulations must be baked into AI projects (Abhideep Rastogi)
EXPLANATION
Abhideep stresses that AI implementations must comply with data‑privacy regulations such as GDPR and India’s DPDPI, as well as upcoming AI‑specific legislation, embedding these requirements throughout the project lifecycle.
EVIDENCE
He mentions following GDPR, DPDPI, and preparing for the AI Act, ensuring guardrails and policies are in place [413-421].
MAJOR DISCUSSION POINT
Regulatory compliance embedded in AI development
AGREED WITH
Bernie Alen, Kevin Zane, Ayush Gupta
Argument 2
The industry is moving from chatbot‑centric solutions to workflow‑automation agents (Abhideep Rastogi)
EXPLANATION
Abhideep observes a shift from early generative‑chatbot applications toward AI agents that automate end‑to‑end workflows across enterprises.
EVIDENCE
He describes the transition from “Gen-H chatbot” to “workflow automation where agent tech AI and agents are running on an executive level” [219-224].
MAJOR DISCUSSION POINT
Evolution from chatbots to workflow automation agents
Argument 3
A structured multi‑stage roadmap (aim, data, architecture, pilot, governance, production) guides AI adoption (Abhideep Rastogi)
EXPLANATION
Abhideep outlines a six‑stage process for AI projects, starting with defining the aim, assessing data quality, choosing architecture, running pilots, establishing governance, and finally scaling to production.
EVIDENCE
He enumerates stages zero through governance and production, detailing considerations for aim, data quality, architecture, deployment, pilot, and governance [324-345].
MAJOR DISCUSSION POINT
Step‑by‑step framework for enterprise AI implementation
Agreements
Agreement Points
Algorithmic and software optimisation can dramatically reduce AI infrastructure costs and enable CPU/edge inference
Speakers: Bernie Alen, Anshumali Shrivastava, Kenny Gross
Overreliance on GPU clusters and neglect of software optimization (Bernie Alen)
Dynamic sparsity and new attention mathematics enable CPU‑based inference, cutting hardware needs (Anshumali Shrivastava)
MSET achieves up to 2,500× reduction in compute cost for anomaly detection (Kenny Gross)
All three speakers argue that the current focus on acquiring ever more GPUs overlooks mature software-level optimisation techniques. Bernie stresses that applying known mathematical methods would let models run on CPUs, edge devices or mobiles [6-8][12-16][17-20]. Anshumali describes dynamic sparsity and a new attention formulation that let CPUs outperform GPUs for very large context windows [102-108][154-158]. Kenny points to his MSET technology delivering a 2,500-fold compute reduction, showing that algorithmic tricks can replace massive GPU clusters [178-184][199-203].
POLICY CONTEXT (KNOWLEDGE BASE)
This aligns with Green AI initiatives calling for algorithmic efficiency to curb energy use, as highlighted in IGF discussions on sustainable AI and the ‘Smaller Footprint Bigger Impact’ report that stresses software optimisation for lower compute footprints [S34][S35][S43].
Current GPU‑centric AI compute has unsustainable environmental impacts and must be mitigated
Speakers: Bernie Alen, Kevin Zane, Kenny Gross
Current AI infrastructure’s high power, heat, and cooling demands pose a planetary risk (Bernie Alen)
Continued scaling without algorithmic breakthroughs threatens sustainability (Kevin Zane)
MSET provides massive energy savings and early failure prediction, reducing data‑center downtime (Kenny Gross)
The panel repeatedly highlights the ecological burden of large GPU farms. Bernie warns that AI’s power, heat and cooling needs are harming the planet [75-78][310-312]. Kevin notes that simply adding more GPUs will hit hard limits and waste water, power and energy [303-306]. Kenny adds that his low-compute MSET approach saves three orders of magnitude of energy while preventing costly downtime [178-184].
POLICY CONTEXT (KNOWLEDGE BASE)
The environmental concerns echo the planetary limits of AI framing and Green AI literature, which identify GPU-heavy training as a major carbon source and call for mitigation policies [S33][S34][S35][S51].
Strong governance, auditability and data‑sovereignty are essential for trustworthy AI deployment
Speakers: Bernie Alen, Kevin Zane, Ayush Gupta, Abhideep Rastogi
Current AI infrastructure’s high power, heat, and cooling demands pose a planetary risk (Bernie Alen)
Deterministic AI is required for production use to eliminate hallucinations and ensure auditability (Kevin Zane)
Enterprise data sovereignty and privacy demand strict compliance and contextual grounding (Ayush Gupta)
Adherence to GDPR, DPDPI, and emerging AI regulations must be baked into AI projects (Abhideep Rastogi)
All four speakers stress that AI systems must be governed, auditable and respect data-privacy laws. Bernie asks how governance can be applied when AI outputs can be unpredictable [422-428]. Kevin defines deterministic AI as a way to guarantee repeatable outputs and audit trails [390-404]. Ayush argues that enterprise AI must keep data in-house and comply with regulations such as GDPR and India’s DPDPI [362-382]. Abhideep outlines a multi-stage process that embeds GDPR/DPDPI compliance and future AI-Act guardrails [413-421].
POLICY CONTEXT (KNOWLEDGE BASE)
Data-sovereignty and auditability are central to emerging data-governance frameworks and AI trust guidelines, as reflected in the Data Governance sessions and the AI-as-critical-infrastructure recommendations for secure compute and sovereignty [S37][S38][S48][S54][S55].
MSET technology delivers massive compute‑cost reductions and enables predictive maintenance
Speakers: Kenny Gross, Bernie Alen
MSET achieves up to 2,500× reduction in compute cost for anomaly detection (Kenny Gross)
MSET provides massive energy savings and early failure prediction, reducing data‑center downtime (Kenny Gross)
Both Kenny and Bernie highlight the same quantitative benefit of the MSET approach. Kenny describes three-order-of-magnitude compute savings and early failure detection [178-184]. Bernie repeats the 2,500-fold cost reduction figure when discussing a use-case [199-203].
POLICY CONTEXT (KNOWLEDGE BASE)
Predictive maintenance use-cases are documented in the Mauritius AI Strategy and industry presentations showing the Multivariate State Estimation Technique (MSET) achieving order-of-magnitude compute savings, supporting the claim of cost reduction [S42][S43][S56].
AI tools risk eroding basic skills in education and require curricular adaptation
Speakers: Ayush Gupta, Participant, Bernie Alen
Educational curricula must evolve to incorporate AI tools while preserving fundamental skills (Ayush Gupta)
Analogous to calculators, AI can erode basic competencies if over‑relied upon (Ayush Gupta)
Concerns about AI‑induced cheating and the need for policy/educational responses (Participant)
The panel agrees that AI, like calculators before it, can diminish foundational competencies if used without safeguards. Ayush draws the calculator analogy and calls for curricula that integrate AI responsibly while keeping core skills intact [625-632]. A participant raises worries about academic cheating and asks whether policies exist to curb misuse [606-618]; Bernie acknowledges the seriousness of the question and its implications for education [619-624].
POLICY CONTEXT (KNOWLEDGE BASE)
Multiple studies and policy briefs note the need for curriculum redesign to address skill erosion and to embed critical thinking alongside AI tools in schools [S44][S45][S46][S47].
Similar Viewpoints
Both stress that unchecked AI scaling leads to unreliable, unsafe outcomes and that technical safeguards (determinism, auditability) are needed to make AI trustworthy and environmentally responsible [75-78][390-404].
Speakers: Bernie Alen, Kevin Zane
Current AI infrastructure’s high power, heat, and cooling demands pose a planetary risk (Bernie Alen)
Deterministic AI is required for production use to eliminate hallucinations and ensure auditability (Kevin Zane)
Both present algorithmic innovations that replace brute‑force GPU computation with far more efficient methods, demonstrating that software advances can yield orders‑of‑magnitude savings [102-108][154-158][178-184].
Speakers: Anshumali Shrivastava, Kenny Gross
Dynamic sparsity and new attention mathematics enable CPU‑based inference, cutting hardware needs (Anshumali Shrivastava)
MSET achieves up to 2,500× reduction in compute cost for anomaly detection (Kenny Gross)
Both highlight the need for policy and curricular changes to mitigate the risk that AI tools undermine learning and academic integrity [625-632][606-618].
Speakers: Ayush Gupta, Participant
Educational curricula must evolve to incorporate AI tools while preserving fundamental skills (Ayush Gupta)
Concerns about AI‑induced cheating and the need for policy/educational responses (Participant)
Unexpected Consensus
Deterministic, non‑hallucinating AI methods are already available and can achieve 100 % accuracy in specific use‑cases
Speakers: Bernie Alen, Ayush Gupta, Kenny Gross
Current AI infrastructure’s high power, heat, and cooling demands pose a planetary risk (Bernie Alen)
Deterministic AI is required for production use to eliminate hallucinations and ensure auditability (Kevin Zane)
MSET achieves up to 2,500× reduction in compute cost for anomaly detection (Kenny Gross)
While many expect hallucinations to be an inherent limitation of LLMs, Bernie cites a 100 % accurate healthcare deployment and Ayush notes that with proper processes AI can be made fully reliable, a view echoed by Kenny’s claim of deterministic-style anomaly detection. This convergence on the existence of practical, hallucination-free solutions was not anticipated given the broader narrative of AI unreliability [562-566][625-632][199-203].
POLICY CONTEXT (KNOWLEDGE BASE)
Claims of 100 % accuracy without GPUs are reported in ‘AI Without the Cost’ but are contested by literature on LLM hallucinations, highlighting ongoing debate over deterministic AI feasibility [S39][S40][S50].
Overall Assessment

The discussion reveals strong convergence on three pillars: (1) the necessity of algorithmic/software optimisation to curb GPU‑centric compute costs; (2) the urgent environmental sustainability challenge posed by current AI hardware scaling; (3) the requirement for robust governance, auditability and data‑sovereignty frameworks. Additional consensus appears around the transformative potential of low‑compute technologies such as MSET and the educational implications of AI adoption.

High consensus on efficiency, sustainability and governance, indicating that participants broadly agree these are the critical levers for responsible AI development. The agreement provides a solid foundation for policy recommendations that prioritise algorithmic innovation, green AI practices, and strong regulatory safeguards.

Differences
Different Viewpoints
Feasibility of non‑hallucinating / deterministic AI
Speakers: Kevin Zane, Anshumali Shrivastava, Bernie Alen
Deterministic AI is required for production use to eliminate hallucinations and ensure auditability (Kevin Zane)
Hallucination is inherent to LLMs; cannot be fully eliminated, only mitigated (Anshumali Shrivastava)
Non‑hallucinating methods with 100 % accuracy already exist (Bernie Alen)
Kevin argues that deterministic AI, which always returns the same output for a given input, is essential to remove hallucinations and enable auditability [390-404]. Anshumali counters that hallucination is a fundamental property of large language models and can only be reduced, not eliminated, through techniques such as multi-LLM debate and better training [578-586]. Bernie asserts that non-hallucinating, 100 % accurate solutions are already being deployed without GPUs [562-566]. The three speakers therefore disagree on whether hallucinations can be fully removed and on the practicality of achieving that goal.
POLICY CONTEXT (KNOWLEDGE BASE)
Scholarly commentary points to persistent hallucination in large language models and questions the practicality of fully deterministic AI, as discussed in AI hallucination surveys and governance debates [S40][S49][S50].
Preferred technological pathway for sustainable AI (quantum computing vs algorithmic / software optimisation)
Speakers: Bernie Alen, Anshumali Shrivastava, Kenny Gross, Kevin Zane
Quantum computers can solve AI problems with far less energy (Bernie Alen)
Dynamic sparsity and new attention math enable CPU‑based inference, reducing need for quantum (Anshumali Shrivastava)
MSET achieves massive compute reduction without quantum hardware (Kenny Gross)
Algorithmic efficiency is the key to sustainability, not new hardware (Kevin Zane)
Bernie promotes the launch of a quantum enablement centre and claims quantum computers will consume a fraction of the energy of GPU-based simulations [635-638]. Anshumali proposes algorithmic advances (dynamic sparsity and a new attention formulation) that let CPUs outperform GPUs for very large context windows, offering a software-centric route to sustainability [102-108][154-158]. Kenny describes MSET, a multivariate anomaly-detection technique that cuts compute costs by up to 2,500× without any quantum hardware [178-184][199-203]. Kevin stresses that algorithmic breakthroughs, not additional hardware, are required to keep AI scaling sustainable [303-306]. The panel therefore diverges on whether future sustainability should rely on emerging quantum hardware or on software/algorithmic innovations.
POLICY CONTEXT (KNOWLEDGE BASE)
Policy forums contrast quantum-computing roadmaps with near-term algorithmic efficiency strategies, reflecting divergent views on the primary sustainability lever for AI [S52][S53][S34][S35].
Governance focus – data sovereignty vs deterministic auditability
Speakers: Ayush Gupta, Kevin Zane
Enterprise data must stay in‑house and comply with privacy regulations (Ayush Gupta)
Deterministic AI provides auditability and safety, emphasis on predictable outputs (Kevin Zane)
Ayush stresses that enterprise AI solutions must keep proprietary data on-premise and adhere to regulations such as GDPR and India’s DPDPI, arguing that public models like ChatGPT cannot be used for sensitive business contexts [362-382]. Kevin, by contrast, focuses on deterministic AI as the primary governance mechanism, arguing that predictable outputs enable auditability and reduce risk, without addressing data localisation or sovereignty concerns [390-404]. The two speakers therefore prioritize different aspects of AI governance: data privacy versus output determinism.
POLICY CONTEXT (KNOWLEDGE BASE)
Data-sovereignty priorities and deterministic auditability requirements are both featured in data-governance and AI-trust frameworks, leading to tension between jurisdictional control and technical verifiability [S37][S38][S48][S49].
Unexpected Differences
Claim of 100 % accuracy without GPUs versus acknowledgment that hallucination is inherent
Speakers: Bernie Alen, Anshumali Shrivastava, Kevin Zane
Non‑hallucinating methods with 100 % accuracy already exist (Bernie Alen)
Hallucination is inherent to LLMs; cannot be fully eliminated, only mitigated (Anshumali Shrivastava)
Deterministic AI is required for production use to eliminate hallucinations (Kevin Zane)
Bernie’s strong statement that his team achieved 100 % accuracy and that non-hallucinating methods are already available [562-566] was unexpected given the other speakers’ view that hallucination is a fundamental characteristic of LLMs and that deterministic AI is still a work-in-progress. This stark contrast reveals a surprising divergence in perceived maturity of reliable AI solutions.
POLICY CONTEXT (KNOWLEDGE BASE)
The contradictory positions are exemplified by the ‘AI Without the Cost’ claim of perfect accuracy and the broader consensus on inherent hallucination risks in generative models, as documented in AI hallucination research [S39][S40][S41].
Overall Assessment

The panel converged on the urgency of reducing AI compute costs and environmental impact, but diverged on how to achieve reliable, non‑hallucinating outcomes and on the preferred technological roadmap: quantum hardware versus algorithmic/software innovations. Governance priorities also differed, with some emphasizing data sovereignty and others deterministic auditability.

Moderate: while there is broad consensus on the problem (excessive GPU‑centric compute, sustainability, need for governance), the differing technical visions and contrasting views on hallucination mitigation create notable tension that could hinder coordinated policy or industry action.

Partial Agreements
All four speakers agree that AI’s current compute intensity is unsustainable and must be dramatically reduced to curb environmental impact and cost. However, they propose different technical routes: Bernie calls for traditional software optimisation and moving workloads to CPUs/edge devices [6-8][12-16][17-20]; Anshumali suggests algorithmic tricks such as dynamic sparsity and novel attention formulations to make CPUs dominant for large contexts [102-108][154-158]; Kenny presents MSET as a multivariate anomaly‑detection pipeline that slashes compute by 2,500× [178-184][199-203]; Kevin argues that broader algorithmic efficiency is the key lever for sustainability [303-306].
Speakers: Bernie Alen, Anshumali Shrivastava, Kenny Gross, Kevin Zane
Overreliance on GPUs and need for software optimisation (Bernie Alen)
Dynamic sparsity and new attention reduce compute (Anshumali Shrivastava)
MSET reduces compute 2,500× (Kenny Gross)
Algorithmic efficiency lowers power and water use (Kevin Zane)
Takeaways
Key takeaways
AI infrastructure is over‑reliant on large GPU clusters; software‑level optimizations (dynamic sparsity, new attention math) can shift inference to CPUs and edge devices, dramatically cutting cost.
Dynamic sparsity and novel attention algorithms reduce quadratic scaling, enabling long context windows on CPUs and breaking the current GPU‑memory/compute bottleneck.
MSET (Multivariate State Estimation Technique) can achieve up to 2,500× compute‑cost reduction for anomaly detection and provides early‑failure prognostics, improving data‑center uptime and sustainability.
Model parameter growth far outpaces GPU memory and compute growth; without algorithmic breakthroughs, scaling will hit hard limits and threaten environmental sustainability.
Long context windows are identified as the next critical race for complex, common‑sense reasoning and agentic workflows; current attention mechanisms plateau around 1‑10 M tokens.
Deterministic AI (predictable, auditable outputs) is required for high‑risk domains (legal, medical, defense) to eliminate hallucinations and false alarms.
Enterprise AI adoption should follow a structured multi‑stage roadmap: define aim, assess data quality, choose architecture (CPU/GPU, on‑prem/hyperscaler), pilot, enforce governance/compliance (GDPR, DPDPI, AI Acts), then scale to production.
Governance challenges include data sovereignty, privacy regulations, and managing false alarms in human‑in‑the‑loop systems; compliance must be baked into AI projects from the start.
Educational systems must adapt to AI tools, preserving fundamental skills while leveraging AI for enhanced problem‑solving and creativity, similar to the calculator transition.
Quantum computing is viewed as a potential low‑energy future path for AI, but remains a long‑term research area; current focus is on algorithmic efficiency.
Resolutions and action items
STEM Practice Company will offer guidance and a “blind bake‑off” using client data to demonstrate MSET’s cost and accuracy benefits.
Panelists (especially Anshumali and Kenny) will share upcoming research papers on dynamic sparsity and new attention mechanisms with interested participants.
Abhideep outlined a concrete multi‑stage AI adoption process that organizations can follow; participants were encouraged to adopt this roadmap.
Ayush and the team will work on integrating agentic data‑analysis platforms with enterprise data sources, aiming to reduce reliance on multiple data warehouses.
Bernie invited attendees to visit the STEM Practice booth (Hall 6, Stall 100) for further discussions and material.
Commitment to explore deterministic AI frameworks and to embed compliance checks (GDPR, DPDPI, upcoming AI Acts) into AI pipelines.
Unresolved issues
How to achieve truly large (e.g., 100 M token) context windows at acceptable latency without prohibitive cost.
Concrete standards or frameworks for deterministic AI that balance zero‑hallucination guarantees with computational expense.
Exact timelines and enforcement mechanisms for emerging regulations such as India’s DPDPI and global AI Acts.
Seamless plug‑and‑play integration of MSET‑based anomaly detection with existing LLM‑centric RAG pipelines.
Policy and pedagogical approaches for preventing academic misuse of AI tools while still leveraging their benefits.
The relationship between AGI development and quantum computing; whether quantum hardware is a prerequisite for AGI.
Scalable, low‑cost trial‑and‑error experimentation environments for AI research without massive energy consumption.
Suggested compromises
Use mixture‑of‑experts (dynamic sparsity) as an interim band‑aid while longer‑term attention reforms are developed.
Deploy GPUs for short‑context, high‑throughput tasks and switch to CPU‑based algorithms for very long‑context workloads.
Accept a small, bounded hallucination rate in exchange for lower cost, while providing reliability flags to trigger human review.
Combine open‑source models with enterprise‑specific fine‑tuning to retain data sovereignty while avoiding full‑scale GPU inference.
Adopt deterministic AI wrappers that enforce rule‑based constraints on probabilistic LLM outputs, reducing but not eliminating hallucinations.
Leverage AI as an assistive tool in education (e.g., calculators) rather than a replacement, focusing curricula on higher‑order problem solving.
Thought Provoking Comments
Infrastructure cost is a very important topic because everyone is racing to get as many GPUs as possible, ignoring software optimization that could let us run AI on CPUs, edge devices, or even mobile phones.
Sets the overarching problem of unsustainable GPU‑centric AI development and introduces the idea that decades‑old mathematical optimization techniques can dramatically reduce hardware needs.
Framed the entire discussion around cost, sustainability, and the need for software‑level solutions, prompting the panel to present alternatives (dynamic sparsity, MSET, deterministic AI) and steering the conversation toward optimization rather than raw compute.
Speaker: Bernie Alen
The rate of growth of GPU memory and compute is nowhere close to the exponential growth of LLM parameter counts; we are hitting a memory wall and quadratic attention costs that GPUs cannot overcome.
Highlights a fundamental scalability bottleneck in current AI hardware, backed by data visualizations, and calls for new mathematics (dynamic sparsity, new attention mechanisms) to break the plateau.
Shifted the dialogue from merely reducing costs to addressing a structural limitation of the AI ecosystem, leading to deeper discussion of novel algorithms, the limits of mixture‑of‑experts, and the need for CPU‑friendly attention methods.
Speaker: Anshumali Shrivastava
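To make the memory-wall point concrete, the following back-of-the-envelope sketch (our illustration, not code from the session) shows how the dense attention score matrix grows quadratically with context length; the fp16 byte size and per-head framing are assumptions for illustration only:

```python
# Back-of-the-envelope: memory for ONE dense n x n attention score matrix,
# assuming fp16 (2 bytes) per score. Real systems hold such a matrix per
# head per layer, so totals are far larger than shown here.

def attention_matrix_bytes(context_len: int, bytes_per_score: int = 2) -> int:
    """Bytes needed for a dense context_len x context_len score matrix."""
    return context_len * context_len * bytes_per_score

for n in (4_096, 131_072, 1_048_576):  # 4K, 128K, 1M tokens
    gib = attention_matrix_bytes(n) / 2**30
    print(f"{n:>9} tokens -> {gib:,.2f} GiB")
# At 1M tokens a single dense score matrix is ~2 TiB, illustrating why
# sub-quadratic attention mathematics, not more GPUs, is the lever.
```

Sub-quadratic methods avoid materialising this matrix at all, which is the structural limitation the comment points to.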
The next race is the context window: larger context windows enable complex, multi‑step reasoning and common‑sense workflows, but current windows have plateaued around 1 million tokens.
Identifies a concrete, forward‑looking metric (context length) that directly impacts the capability of AI agents to perform sophisticated tasks, linking hardware limits to functional AI limits.
Prompted panelists to discuss how longer context can improve agentic automation and reliability, and motivated the audience to consider research on new attention math that scales sub‑quadratically.
Speaker: Anshumali Shrivastava
We achieved a 2,500‑fold reduction in compute cost for an anomaly‑detection use case using our AI MSET method, plus early‑warning prognostics that prevent costly downtime.
Provides a concrete, dramatic quantitative result that validates the earlier claim about software‑level optimization and demonstrates real‑world impact beyond theoretical discussion.
Validated the feasibility of the cost‑reduction narrative, encouraged interest in the MSET technology, and led to follow‑up questions about governance, sensor data, and broader applicability.
Speaker: Kenny Gross
Deterministic AI is an architectural response that guarantees the same output for the same input, enabling auditability and eliminating hallucinations for safety‑critical domains.
Introduces a clear, actionable design principle that directly addresses the major concern of hallucinations and regulatory compliance, contrasting probabilistic LLMs with a predictable alternative.
Redirected the conversation toward governance and reliability, influencing subsequent remarks on false alarms, policy, and the need for deterministic pipelines in production.
Speaker: Kevin Zane
Agentic data analysis platforms can replace traditional data‑warehouse pipelines by directly querying native sources (tables, PDFs, images) and delivering high‑quality, reliable insights at a fraction of the cost per conversation.
Connects the cost‑reduction theme to a specific enterprise use case, showing how eliminating GPU‑heavy inference can democratize AI‑driven analytics, especially in cost‑sensitive markets like India.
Expanded the discussion from infrastructure to business value, prompting questions about governance, data sovereignty, and the scalability of such platforms.
Speaker: Ayush Gupta
Hallucinations are inherent to probabilistic LLMs; we can reduce them by using multiple LLMs that debate each other, but the barrier is the huge compute cost required for such redundancy.
Provides a nuanced view that acknowledges the inevitability of hallucinations while offering a concrete mitigation strategy, linking back to the earlier cost‑efficiency concerns.
Re‑emphasized the central trade‑off between reliability and compute expense, leading to further dialogue on trial‑and‑error, energy consumption, and the need for more efficient algorithms.
Speaker: Anshumali Shrivastava
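The multi‑LLM debate idea can be approximated by simple majority voting over independently sampled answers. The helper below is a hypothetical sketch, not the panelists' implementation; its point is the trade‑off named above: each extra vote is a full inference pass, so reliability is bought at a linear multiple of compute cost.

```python
from collections import Counter

def majority_vote(answers: list[str]) -> tuple[str, bool]:
    """Cross-check answers from several independently queried models.

    Returns the most common answer and whether a strict majority
    agreed; disagreement is a cue to escalate to human review.
    Cost caveat: obtaining n answers costs n full inference passes.
    """
    counts = Counter(answers)
    answer, votes = counts.most_common(1)[0]
    return answer, votes > len(answers) / 2
```

For example, `majority_vote(["42", "42", "41"])` returns `("42", True)`, while three mutually disagreeing answers come back with `False`, signalling a case for review.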
The rapid adoption of GPU‑heavy AI is causing massive power, water, and cooling demands, threatening planetary sustainability; software‑level efficiency is the responsible path forward.
Frames the technical discussion within a broader environmental and ethical context, turning the conversation into a matter of global responsibility.
Motivated participants to stress sustainability in their solutions (e.g., CPU‑based inference, MSET, deterministic AI) and set the tone for concluding remarks about constrained resources.
Speaker: Bernie Alen
Overall Assessment

The discussion was driven by a series of pivotal comments that moved the conversation from a generic concern about AI cost to a deep technical and ethical analysis of scalability, reliability, and sustainability. Bernie Alen’s opening remark framed the problem, while Anshumali’s data‑driven exposition of hardware limits and the context‑window bottleneck introduced a clear technical challenge. Kenny Gross’s 2,500× compute‑reduction example and Kevin Zane’s deterministic AI principle offered concrete, solution‑oriented counterpoints, prompting the panel to explore practical implementations (MSET, agentic analytics) and governance implications. Subsequent remarks on hallucinations, energy impact, and enterprise value reinforced the central theme: without innovative algorithmic and architectural changes, the AI boom is unsustainable. These key insights shaped the flow, steering the dialogue toward actionable research directions and responsible deployment strategies.

Follow-up Questions
How can we reduce reliance on GPU clusters by applying software optimization techniques such as dynamic sparsity and new attention mathematics?
Addressing the high infrastructure cost and environmental impact of GPU‑heavy AI deployments by leveraging existing mathematical methods.
Speaker: Bernie Alen
What new mathematical approaches to attention (e.g., dynamic sparsity, sketched attention) can break the quadratic complexity barrier and enable much larger context windows?
Current attention scales quadratically, limiting context size; new math could unlock longer context and more complex tasks.
Speaker: Anshumali Shrivastava
How can deterministic AI architectures be designed to provide predictable, auditable outputs and eliminate hallucinations for high‑stakes applications?
Probabilistic LLM outputs are unsuitable for domains like healthcare or security; deterministic methods aim to ensure reliability.
Speaker: Kevin Zane
What governance frameworks and compliance processes (GDPR, DPDP, AI Act) are needed for enterprise AI deployments, especially when using open‑source or proprietary models?
Ensuring data privacy, regulatory adherence, and guardrails is essential for safe AI adoption in regulated industries.
Speaker: Abhideep Rastogi, Kenny Gross, Ayush Gupta
How can MSET (Multivariate State Estimation Technique, for sensor anomaly detection) be integrated with existing LLM services and plug‑and‑play data‑engineering ecosystems?
Seamless integration would allow organizations to combine sensor‑based prognostics with language models without extensive re‑engineering.
Speaker: Participant (question), Bernie Alen
What open‑source or open‑ecosystem tools enable CPU‑based inference at scale to dramatically lower compute costs?
Providing affordable inference alternatives reduces energy consumption and makes AI accessible to cost‑sensitive markets.
Speaker: Kenny Gross, Bernie Alen
When will India’s Digital Personal Data Protection (DPDP) Act be enforced, and how will its timeline affect AI system design and compliance?
Regulatory certainty is needed for companies to plan data‑handling and AI governance strategies.
Speaker: Participant (question), Abhideep Rastogi
What research directions should mathematicians pursue to advance AI, particularly in formal reasoning, LLM capabilities, and foundational understanding?
Mathematical insights can drive new algorithms and improve reasoning, safety, and efficiency of AI systems.
Speaker: Participant (math student), Anshumali Shrivastava
How should education systems adapt curricula and policies to balance AI tool usage with learning outcomes, preventing over‑reliance while leveraging AI benefits?
Ensuring students develop core skills while using AI responsibly is critical for future workforce readiness.
Speaker: Participant (question), Ayush Gupta, Anshumali Shrivastava
What is the relationship between AGI development and quantum computing; can quantum processors enable energy‑efficient AGI?
Exploring quantum acceleration could provide a path to more powerful, less energy‑intensive AI.
Speaker: Participant (question), Bernie Alen
How can large‑scale trial‑and‑error experimentation be made affordable (low energy, low cost) to accelerate AI research progress?
Current compute costs limit rapid experimentation; cheaper methods would speed innovation.
Speaker: Anshumali Shrivastava
What techniques can achieve 100 % reliability (detecting likely errors) even when model accuracy is below perfect, especially for critical decision‑making?
Reliability mechanisms (e.g., confidence scoring, audit trails) are needed to trust AI outputs in high‑risk domains.
Speaker: Ayush Gupta
How can AI inference be scaled sustainably for massive user bases (e.g., hundreds of thousands of participants) without overwhelming power and cooling resources?
Sustainable scaling is required to meet growing demand while minimizing environmental impact.
Speaker: Bernie Alen
What methods can reduce false alarm rates in multivariate sensor monitoring at scale, and how can MSET achieve near‑optimal false‑alarm/missed‑alarm probabilities?
High false‑alarm rates cause operational inefficiencies and safety risks; improving detection accuracy is vital for complex systems.
Speaker: Kenny Gross
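MSET itself is a patented multivariate estimator combined with sequential probability‑ratio tests, so the toy below is only a caricature of the underlying idea referenced above: estimate one sensor from a correlated peer and alarm when the residual drifts. The linear relation, the threshold and the `residual_alarms` name are invented for illustration.

```python
def residual_alarms(x, y, slope, intercept, threshold):
    """Toy stand-in for model-based anomaly detection.

    Estimate sensor y from correlated sensor x via a known linear
    relation, then flag samples whose residual exceeds a threshold.
    (MSET uses a nonlinear multivariate estimator plus sequential
    probability-ratio tests; this sketch only shows the residual idea.)
    """
    alarms = []
    for i, (xi, yi) in enumerate(zip(x, y)):
        estimate = slope * xi + intercept
        if abs(yi - estimate) > threshold:
            alarms.append(i)
    return alarms
```

For instance, if sensor y normally tracks 2x, a single corrupted reading stands out: `residual_alarms([1, 2, 3, 4, 5], [2, 4, 6, 11, 10], 2.0, 0.0, 1.0)` flags only index 3. Tuning the threshold is what trades false alarms against missed alarms, the balance Kenny Gross described.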

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

AI Meets Agriculture Building Food Security and Climate Resilience

Session at a glance: Summary, keypoints, and speakers overview

Summary

The session convened to explore how artificial intelligence can bolster food and climate resilience in Indian agriculture, noting that climate change is heightening farming risks while digital tools are advancing rapidly [7-10][12-13]. Chief Minister Devendra Fadnavis announced the Maha Agri AI Policy 2025-29, which integrates AI into advisory services, market data, traceability and research, and highlighted that over 2.5 million farmers already use the Mahavistar AI-powered platform in Marathi and a tribal language [9-20][22-24].


Fadnavis outlined AI’s potential to deliver hyper-local weather forecasts, pest alerts, precision irrigation, credit scoring and transparent supply-chains, but stressed that trustworthy data, ethical governance and public accountability are essential for scaling [53-57]. Maharashtra is creating a statewide interoperable agriculture data exchange (Maha AgEx) built on open standards to empower rather than exploit farmers [64-66], and a traceability digital public infrastructure (DPI) blueprint will provide end-to-end visibility across value chains and be replicable for the Global South [68-70].


Johannes Zutt of the World Bank highlighted the government’s role in setting standards, ensuring digital literacy and credibility, while the private sector can contribute creativity and capital; he cited a Moroccan app that uses a tomato photo to prescribe water as an example of innovative, farmer-focused solutions [154-172][176-178].


Dr Soumya Swaminathan warned that women farmers often lack land titles and digital footprints, so AI systems must deliberately incorporate women’s data to avoid exclusion and should be evaluated for bias, drudgery reduction and inclusivity; she pointed to the “Women Connect” app that empowers fisher-women with market information [219-223][229-251].


Shankar Maruwada explained that open, interoperable platforms such as Sunbird and the railway-style “open rails” model underpin India’s DPI, enabling scalable AI deployments like Bharatvistar and Mahavistar; he advocated a minimum-viable AI rollout that improves through data and usage, allowing states to adopt third-party innovations via shared networks [300-307][312-314][316-322]. The panel concluded that moving from pilots to platform-scale, with responsible governance, open standards and inclusive design, is crucial for achieving food security, climate resilience and equitable farmer incomes [84-86][133-138].


Keypoints


Major discussion points


Scaling AI-driven advisory and data platforms in agriculture – Maharashtra’s “Maha Agri AI Policy 2025-2029” and the AI-powered Mahavistar/BharatVistar platforms are being rolled out to millions of farmers, delivering multilingual weather, pest and market advisories and linking to government schemes [19-24][57-62][119-133].


Building responsible, open and interoperable digital infrastructure – The speakers stressed that AI must run on trusted, open-standards “Digital Public Infrastructure” (DPI) with strong data governance, traceability and auditability, using a federated data-exchange (Maha AgEx) and farmer-ID system to ensure data empowerment rather than exploitation [65-66][76-78][135-138][300-306].


Ensuring inclusion of smallholders and women farmers – Smallholder challenges (fragmented information, credit, climate risk) were highlighted, and concrete measures were proposed to embed women’s data, reduce drudgery, and involve women’s groups in design, testing and governance of AI tools [49-52][206-214][219-226][229-236].


Mobilising multi-stakeholder collaboration – The dialogue called for coordinated action among central and state governments, the World Bank, development partners, private innovators and impact investors to co-develop use-cases, fund pilots and scale solutions globally [71-75][168-176][311-317].


Addressing practical challenges: digital literacy, connectivity and “digital red-tapism” – The need to simplify multiple scheme apps into a single AI-enabled interface, improve rural connectivity, and provide training for low-literacy users were identified as critical hurdles to adoption [122-130][154-166].


Overall purpose / goal of the discussion


The session aimed to move “from vision to implementation” by institutionalising AI within India’s agricultural ecosystem, creating a scalable, trustworthy public-sector AI architecture that boosts food and nutrition security, farmer incomes and climate resilience while fostering South-South knowledge exchange.


Overall tone


The conversation was largely optimistic and collaborative, celebrating existing achievements (e.g., Mahavistar’s 2.5 million users) and the ambition to become a global AI-agri hub. Throughout the dialogue, speakers interwove cautious notes about trust, data governance, inclusion and on-the-ground challenges, resulting in a tone that combined enthusiasm with a responsible, problem-solving mindset. The tone remained consistent, shifting only to a more cautionary emphasis when discussing barriers such as digital literacy and “digital red-tapism.”


Speakers

Vikas Chandra Rastogi


Area of Expertise: Agricultural policy, AI integration in agriculture, public sector leadership


Role/Title: Secretary, Ministry of Agriculture and Farmers’ Welfare, Government of Maharashtra; Moderator/Host of the session and panel discussion [S1][S2]


Devesh Chaturvedi


Area of Expertise: Agricultural policy, digital agriculture, AI-enabled public infrastructure


Role/Title: Secretary, Ministry of Agriculture and Farmer Welfare, Government of India [S3][S4][S5]


Johannes Zutt


Area of Expertise: International development, finance, AI for agriculture


Role/Title: Regional Vice President, World Bank [S6][S7]


Dr. Soumya Swaminathan


Area of Expertise: Agricultural science, sustainable development, women’s empowerment in farming


Role/Title: Chairperson, Dr. M.S. Swaminathan Research Foundation; Global leader in science and advocate for women farmers [S8][S9]


Shankar Maruwada


Area of Expertise: Digital public infrastructure, open-source platforms, AI ecosystem design


Role/Title: Co-founder and CEO, EkStep Foundation (rendered in the transcript as “Ekstey” and “XTEP”); Key contributor to India’s DPI landscape [S10][S11][S12]


Devendra Fadnavis


Area of Expertise: State-level governance, agricultural innovation, AI policy implementation


Role/Title: Honorable Chief Minister of Maharashtra [S13][S14]


Additional speakers:


Dr. Devish Chaturvedi – Secretary, Ministry of Agriculture and Farmers’ Welfare (appears in transcript with a spelling variation)


Johannes Jett – Regional Vice President, World Bank (name variation in transcript)


Jonas Jett – Mentioned in the opening; likely the same World Bank representative


Ashish Shailar – Honourable Minister (specific portfolio not detailed)


Nitesh Rane – Minister (specific portfolio not detailed)


Rajesh Agarwal – Mentioned among dignitaries; role not specified


Shubhati Swaminathan – Listed among panelists; role not specified in transcript


Shushankar Maruwada – Likely the same individual as Shankar Maruwada; name variation


Shashi Shailar – Mentioned among colleagues; role not specified


Other unnamed participants – Various officials and dignitaries referenced only by title or honorific without specific names.


Full session report: Comprehensive analysis and detailed insights

The session opened with Vikas Chandra Rastogi welcoming a broad audience of national and international dignitaries and framing the discussion around the urgent need to strengthen food and climate resilience in Indian agriculture. He noted that climate change is making farming increasingly risky, that resources are limited and markets are shifting rapidly, yet digital tools and artificial intelligence (AI) are advancing fast and present a strategic opportunity for India to secure food and nutrition, raise farmer incomes and stabilise the economy [7-13][15-16].


Maharashtra’s leadership under Chief Minister Devendra Fadnavis was highlighted as a concrete example of this vision. The state has launched the Maha Agri AI Policy 2025-2029, which embeds AI across advisory services, market information, data exchange, product traceability, research and capacity-building [19-21][57-62]. The AI-powered Mahavistar platform, now used by more than 2.5 million farmers, delivers personalised advisories in Marathi and, more recently, in the tribal language Bili, while Agristrack links farmers to government schemes [22-24][57-62]. A statewide interoperable agriculture data exchange, Maha AgEx, built on open standards, strong data-governance and a consent-driven model, is intended to bring diverse datasets together for a “big picture” view of the sector [25-27][64-66].


In his address, Chief Minister Fadnavis described agriculture as a defining challenge for the Global South, citing climate volatility, falling water tables, deteriorating soil health and fragile supply chains [37-42]. He argued that AI can provide hyper-local weather forecasts, early pest warnings, precision irrigation and fertiliser guidance, credit scoring based on crop intelligence and transparent, traceable supply chains [53-55]. Emphasising that AI is not a magic solution, he recalled the Prime Minister’s reminder that trustworthy data, ethical governance and public accountability are prerequisites for scaling [55-57]. The policy adopts a four-pillar framework: (i) responsible governance, (ii) open and interoperable digital infrastructure, (iii) investment and scaling, and (iv) gender-inclusive design; it showcases predictive governance through early-warning systems for cotton growers [58-62]. A traceability digital public infrastructure (DPI) blueprint will ensure end-to-end visibility across value chains and is designed as a replicable public-good model for the Global South [68-70]. The state also issued a global call for AI use-cases, producing a compendium of successful applications from Africa, Asia and Latin America, and outlined the AI for Agri 2026 vision centred on the four pillars [71-78][79-82].


Rastogi then introduced the panel, noting the presence of senior policymakers, World Bank representatives, scientific leaders and digital-public-infrastructure innovators, and set the agenda to move from vision to implementation, focusing on institutionalising AI at scale, ensuring inclusion of smallholders and women, building trustworthy governance ecosystems and strengthening centre-state and global collaborations [87-106][107-110].


Secretary Devesh Chaturvedi elaborated on the national digital agriculture mission. He praised Maharashtra’s leadership in creating farmer IDs and the Mahavistar precursor to Bharatvistar, and announced the launch of Bharatvistar – an integrated AI-based system delivering weather, crop, pest and market advisories as well as scheme information via both Android apps and basic mobile telephony [119-124][125-133]. Chaturvedi warned that earlier digitisation created “digital red-tapism” with multiple scheme-specific apps and databases, which fragmented service delivery and confused farmers [122-130]. By consolidating all advisories, scheme details and market rates on a single AI-enabled platform, the government aims to eliminate this fragmentation [131-138]. He highlighted the development of roughly nine crore farmer IDs, describing the Agri-Stack as the agricultural analogue of the UPI, where each ID links to land, crop, soil-health and other records, thereby empowering farmers to access services without repeated verification [135-140]. Predictive models, such as a monsoon-forecasting engine that successfully guided 3.8 crore farmers, will be expanded to provide more granular market and weather advice, improving productivity and reducing costs [136-138].


Johannes Zutt (World Bank) underscored the government’s central role in setting standards for AI governance, ensuring digital literacy, and guaranteeing that advisory content is scientifically credible [154-166]. He praised the private sector’s creativity, urging a “let a thousand flowers bloom” approach that encourages diverse, farmer-focused applications – exemplified by a Moroccan app that estimates tomato water needs from a simple photograph [170-176][177-180]. The World Bank can contribute financing, provide an AI sandbox for truth-testing, and help validate that AI solutions deliver real productivity gains to farmers [181-188][189-196].


Dr Soumya Swaminathan highlighted gender equity as a critical dimension of AI-driven agriculture. She noted that most land titles remain in men’s names, meaning that algorithms trained on existing public data would exclude three-quarters of women farmers unless women’s land-ownership and tenancy data are deliberately captured [219-224]. AI tools must therefore be designed to reduce women’s drudgery, improve market access and be co-created with women’s organisations; she cited the “Women Connect” app that equips fisher-women with market information as a successful model [247-254]. Swaminathan called for rigorous, clinical-trial-like evaluation of AI systems to detect bias, unintended risks and to ensure that humans remain “in the loop” to preserve employment and contextual knowledge [255-263][264-266].


Shankar Maruwada framed India’s Digital Public Infrastructure (DPI) as the backbone for scalable AI, drawing an analogy with the Indian Railways: an open, interoperable “rail” network that allows any state or private actor to plug in services [300-307][308-314]. He stressed that open-source standards such as Sunbird and Beckn enable a federated architecture where data and applications can be shared across states, avoiding siloed portals [315-322]. By first deploying a minimum-viable AI solution, gathering data and iterating, the ecosystem can evolve organically, with successful private-sector innovations (e.g., the tomato-water app) being rapidly diffused through the shared rails [323-330][331-338]. This approach positions India as a laboratory for responsible, population-scale AI deployment [332].


Across the discussion, speakers reinforced common themes: the necessity of open, interoperable digital infrastructure (from farmer IDs and the Agri-Stack to Maha AgEx and Mahavistar’s feedback loop) to scale AI and enable research, startups and policy coordination [76-78][135-140][300-304][268-269][181-184]; the importance of building AI on trusted, transparent, auditable and explainable foundations, with governments responsible for governance, digital literacy and scientific credibility [55-57][154-166][298-317][122-130]; and the priority of gender-inclusive design, including capturing women’s land data, reducing drudgery and involving women’s groups in co-design [76-78][219-224][225-232][247-254][269-271]. Participants also agreed that public-private partnerships and investment (from venture capital, impact funds and multilateral banks) are vital to move AI platforms, traceability modules and agri-tech startups from pilots to scale [78-80][178-180][320-327][181-186][187-188].


The panel distilled several key take-aways. AI is positioned as a strategic lever for food security, climate resilience and farmer incomes, with Maharashtra’s Mahavistar platform already reaching over 2.5 million users in multiple languages [19-24][57-62]. The four-pillar framework (responsible governance, open and interoperable digital infrastructure, investment and scaling, gender-inclusive design) guides the rollout of AI-enabled services such as Bharatvistar and predictive models [76-78][135-140][300-307]. Open, federated architectures (Maha AgEx, Sunbird, Beckn) constitute the backbone for population-scale AI and data sharing across states, research institutions and startups [300-307][308-314]. Trustworthy, transparent and auditable AI is essential for public confidence [55-57][154-166]. Women’s exclusion due to land-ownership gaps must be remedied by deliberately integrating women’s data and co-designing tools that reduce drudgery and improve market access [219-226][247-254]. Private-sector creativity, supported by World Bank financing and AI sandboxes, will enrich the ecosystem, while venture capital and impact investors are invited to fund scaling of AI platforms and traceability modules [78-80][178-180][320-327]. The AI for Agri 2026 conference, scheduled for 22-23 Feb 2026 at the Jio World Convention Centre in Mumbai, will serve as a Global-South knowledge-exchange platform to showcase successful use-cases and attract further collaboration [71-75][181-186][187-188].


In conclusion, the speakers’ remarks were largely complementary and reinforced shared priorities. The session closed with a call to move from pilots to scalable, interoperable AI services and an invitation to the AI for Agri 2026 conference for further collaboration [85-86][333-334].


Session transcript: Complete transcript of the session
Vikas Chandra Rastogi

May I invite Dr. Devish Chaturvedi, Secretary, Ministry of Agriculture and Farmers’ Welfare. Sir, please come onto the stage. Sir, please come onto the stage. Johannes Jett, Regional Vice President, World Bank, on stage please. Honourable Chief Minister of Maharashtra, Shri Devendra Fadnavis Ji. Honourable Ministers, Shri Ashish Shailar Ji and Shri Nitesh Rane Ji. Our distinguished guests from India and around the world, very good morning. On behalf of the Government of Maharashtra, I welcome you to the session on Using AI for Food and Climate Resilience. Agriculture is at a turning point. Climate change is making farming riskier, resources are limited and markets are changing quickly. However, there is an opportunity. Digital tools and AI are advancing fast. Our goal is not just to use AI tools.

We must build intelligence into our public systems to help everyone. For India, the change is essential. It is the key to food and nutrition security, higher farmer incomes, and a stable economy. India has shown that digital systems work when they are open and well-governed. Our next step is to bring AI into this framework in a responsible way. Under the leadership of the Honourable Chief Minister of Maharashtra, the state has launched the Maha Agri AI Policy 2025-2029. This policy uses AI for farmer advisory services, market information, data exchange, product traceability, innovation and research, and creating capacities of stakeholders. Thank you. We are moving beyond pilots to project… at full scale. Mahavistar is the country’s first AI-powered network for information and advisory services.

Today, Mahavistar is being used by more than 2.5 million farmers to get advisories in the Marathi language and recently, the first tribal language in the country, Bili, has also been integrated into Mahavistar. Agristrack is helping farmers to get seamless access to various schemes and services. The Maha AgEx, which is an open, federated and consent-driven architecture for data exchange, is helping us to bring diverse data sets together to get us a big picture. Agriculture is now a key part of India’s AI mission. We are proud to work with the Government of India to lead this change. I want to thank the Ministry of Electronics and Information Technology, the Ministry of Agriculture, the EkStep Foundation, the Department of Agriculture, the World Bank, the MS Swaminathan Research Foundation, the Gates Foundation, and all our partners for their support.

It is now my duty to invite our Honorable Chief Minister to the stage. He will share his vision for using AI to strengthen our food systems and protect our climate. After the address of Honorable Chief Minister, we have a panel discussion with our distinguished panelists. Welcome.

Devendra Fadnavis

A very good morning to all of you. Shri Devesh Chaturvedi, Rajesh Agarwal, Vikas Rastogi, Mr. Jonas Jett, Shubhati Swaminathan, Shushankar Maruwada, my colleagues, Shashi Shailarji, Nitesh Raneji, all the dignitaries present here. Namaskar and good morning to everyone. It is my privilege to address this distinguished gathering at the India AI Impact Summit and this important session on AI in Agriculture. We meet at a very defining moment across the world. Food systems are under strain. Climate volatility is intensifying. Water tables are falling. Soil health is deteriorating. Supply chains are fragile and global markets are unpredictable. For countries from the Global South, agriculture is not merely an economic sector. It is livelihood, social stability, and national security.

India understands this very deeply. And under the visionary leadership of our Honorable Prime Minister Narendra Modi, India has placed digital public infrastructure and responsible AI at the center stage of national development. The India AI mission is about using technology to deliver inclusion, transparency, and scale. Today, agriculture must sit at the heart of this mission. Over half a billion Indians depend directly or indirectly on agriculture. Yet, smallholders face fragmented information, rising input costs, climate uncertainty, and limited access to credit and markets. Traditional extension systems, however committed, cannot match the scale and the speed required. Artificial intelligence changes this equation. AI can provide hyperlocal weather predictions, early pest-outbreak warnings, precision irrigation and fertilizer guidance, credit scoring based on crop intelligence, transparent traceable supply chains, real-time market advisories.

But let me emphasize: AI is not magic. As the Honorable PM said in his inaugural session, AI must be built on trusted data, ethical governance and public accountability. Without trust, scale will not happen. Last year, Maharashtra made a very clear and decisive strategic decision: AI in agriculture must not remain confined to demonstrations or pilots. It must reach millions. Under our Maha Agri AI Policy 2025-29, we adopted a policy-led, ecosystem-driven model built on openness and interoperability. Allow me to share what this has meant in practice. As rightly told by our Secretary, Mahavistar, our AI-powered mobile platform, delivers multilingual personalized advisories, market intelligence, pest alerts and access to government services. With more than 2.5 million downloads, it is acting as a digital friend to all these farmers.

This demonstrates one thing very clearly: farmers are ready for AI when AI is designed for them. AI-based pest surveillance and CROPSAP integration is our mantra. By integrating geospatial analytics with pest surveillance, we have delivered early warnings to cotton-growing farmers, reducing crop vulnerability and finance risk. This is predictive governance in action. The agriculture data exchange is also defining this step. We are building a statewide interoperable agriculture data exchange based on open standards and strong data governance. Data must empower farmers, not exploit them. Traceability digital public infrastructure: in today’s global markets, transparency is a mantra. We are unveiling a blueprint for a traceability DPI that will ensure end-to-end visibility across value chains, enhancing food safety, export competitiveness, and consumer trust.

And this is not proprietary. It is being designed as a replicable public infrastructure model for India and the entire Global South. In partnership with the IndiaAI Mission, the Government of Maharashtra, the World Bank, and Wadhwani AI, we launched a global call for AI use cases in agriculture. The resulting compendium of real-world AI applications in agriculture was released in Delhi on 17 February 2026. This compendium documents successful AI deployments from Africa, Asia, Latin America, and beyond. India is convening global knowledge for the benefit of the Global South. As we move towards AI for Agri 2026 in Mumbai, our vision rests on four pillars. Responsible governance: AI must be transparent, auditable, and explainable. Open and interoperable digital infrastructure:

innovation cannot scale in silos. Investment and scaling: technology without capital remains just a theory. And inclusion and gender equity is also a mantra: 2026 is the International Year of Women in Agriculture, and AI solutions must be designed with women farmers, not merely for them. Maharashtra today presents one of the most compelling agri-innovation ecosystems globally: 150 lakh hectares of cultivated land, diverse agro-climatic conditions, leading agriculture universities and AI research centers, a vibrant startup ecosystem, a clear regulatory framework and single-window facilities, a vision for investors and a vision for the future. We invite venture capital funds, impact investors, multilateral development banks, corporate innovation arms, and philanthropic foundations to partner with us. And in this partnership, we envisage scaling AI advisory platforms, co-developing traceability DPI modules, investing in agri-tech startups, supporting digital literacy, especially among women farmers, and building capacity in rural AI ecosystems.

When you invest in Maharashtra, you invest in scalable solutions for emerging economies worldwide. Food security, climate resilience, and AI governance are deeply connected. Nations that master AI-enabled agriculture will secure farmer incomes and strategic stability. India has the scale, the DPI, and the democratic governance model to demonstrate how AI can be deployed responsibly at population scale. Maharashtra is proud to be the laboratory of that ambition. Friends, this satellite session is a declaration. We will move from pilots to platforms, from fragmented data to interoperable systems, from experimentation to execution, from intention to investment. The Government of Maharashtra stands ready to collaborate with the Government of India, with states, with global institutions, investors, researchers, and farmer organizations. Let us ensure that AI becomes a force for food security and climate resilience.

Vikas Chandra Rastogi

Thank you, sir, for your visionary address. You always continue to inspire us to aim higher and achieve better. And under your leadership, I can assure you the Agriculture Department will rise to the challenge and serve the aspirations of the more than 15 million farmers of the state of Maharashtra. Thank you so much, sir. We will now start the panel discussion in a few moments.

Thank you. We are fortunate to have with us in this session a distinguished panel representing national policy leadership, global development, scientific expertise, national AI architecture, and digital public infrastructure innovation. Let me introduce the panelists once again. Dr. Devesh Chaturvedi is the Secretary, Ministry of Agriculture and Farmers Welfare.

Dr. Chaturvedi leads our national effort in agriculture and farmers' welfare. Mr. Johannes Zutt is the Regional Vice President, World Bank; Mr. Zutt brings a vital global perspective on development and finance from the World Bank. Dr. Soumya Swaminathan is the Chairperson of the M.S. Swaminathan Research Foundation; Dr. Swaminathan is a global leader in science, a champion for sustainable development, and a strong advocate for mainstreaming women farmers' roles in agriculture. Mr. Shankar Maruwada is a co-founder and CEO of the EkStep Foundation. He is a pioneer in building digital public infrastructure that empowers people at scale, and I am very proud to say that the Government of Maharashtra and the EkStep Foundation together have brought out Mahavistar, which more than 2.5 million farmers are using today to get the advisories and information that they need on a daily basis.

The objective of this panel discussion is to move from vision to implementation. Specifically, we will deliberate on how to institutionalize AI within agriculture systems at scale; how to ensure inclusion, especially of women farmers and smallholders; how to build interoperable, trustworthy and sustainable AI governance ecosystems; and how to strengthen collaboration between the centre and the states, global institutions, industry, and academia. The session is also an important precursor to the AI for Agri 2026 Global Conference, where we will continue these deliberations in greater operational depth with governments, investors, innovators, and development partners. The AI for Agri conference is being held in Mumbai on the 22nd and 23rd of February at the Jio World Convention Center. With this context, let's begin our discussion. My first question is to Dr.

Devesh Chaturvedi. Sir, under your leadership, the ministry has taken significant steps in advancing the Digital Agriculture Mission and operationalizing the Agri Stack framework. You are laying a strong digital foundation for the sector. As we now look at integrating AI more systematically into agriculture, how do you envision the central-state collaboration framework, specifically to ensure that AI deployments are aligned with the national architecture while allowing states the flexibility to innovate based on local agro-climatic and socioeconomic contexts? And finally, how can we institutionalize this collaboration to achieve population-scale impact while maintaining interoperability and data trust?

Devesh Chaturvedi

Thank you. A lot of questions in the same question, so what I'll do is first take you through the initiatives. First of all, we deeply appreciate the leadership taken by Maharashtra, obviously under the leadership of our Honorable Chief Minister, and with the agriculture department. They have done exceptional work in the Digital Agriculture Mission by developing farmer IDs and digital crop surveys, and they also launched Mahavistar as a precursor of Bharatvistar. And recently, on the 17th, the Government of India also launched one of the first integrated AI-based systems for the farmers, which is Bharatvistar. It presently provides services both through an Android-based app and through mobile telephony: weather advisories, ICAR-based crop advisories, pest advisories, market information regarding various agricultural produce traded in the mandis, and lastly, the government schemes of the Government of India.

Now, why is AI important in agriculture? We started with the digitalization of services, different services: we had DBT, we had online systems for a common person applying for the common services. But what was felt was that while we had initiated this process to ensure that bureaucratic red tapism is removed, what we were moving towards was a sort of digital red tapism.

Because within our ministry, different schemes had different apps. And they had different ways of selection. And within the state also, horticulture had a different database of farmers. Agriculture had a different database. Animal health has a different database. Crop insurance has a different database. So basically, a farmer who has to avail so many services, we felt that he or she was getting lost in which app to use for which. And sometimes it becomes more difficult to avail the services through online systems or to get advisories than to go to a person and say, tell me how to do it. So the whole idea was that once we have this AI -based system, we have a same platform for different…

applications and different advisories, at the click of a button or maybe just by voice. So that is the whole idea of shifting towards AI-based solutions. Now, what we have initially, in the first phase of the artificial intelligence system, Bharat Vistar (or the Mahavistar of Maharashtra), is that the crop advisories, the weather advisories, schemes information, information about how to apply and the status of that application, and also the mandi rates have all been put on one platform. Presently it is working in English and Hindi, but in the next three to six months we'll be taking it to all the Bhashini-supported languages.

And the next step, as you mentioned, is that the states are working together with us on the digital public infrastructure. Close to 9 crore farmer IDs have been developed. So what is a farmer ID? You must have read the statement of the Honourable Finance Minister that DPI is the new UPI. The basic idea of this Agri Stack, which is the agriculture part of DPI, is that each farmer has a unique farmer ID with, at the back end, all the crops the person has sown, the land available to that person, the share of the land, and the soil health card details, if a soil health card has been issued. With these basic details available on the system, that ID empowers the farmer to avail services, because it is already approved by the relevant authorities in the government. The person does not have to, and the authorities who are giving the services are not required to, cross-verify the credentials of the farmer against the record of rights or whatever it was in the different states.

Every state (Maharashtra is one of the leading states here) is working together with us towards saturation of farmer IDs and the crop survey. Once this is there, the AI will further transform into a very, very tailored advisor. A person calls, or gives the farmer ID or Aadhaar, and at the back end, based on consent, we will access the details of where the farmer is from, what crop is being grown, and what the soil health conditions are, and very targeted advice will be given. This will be made operational in the next three to six months. So instead of pushing data which may not be of interest to the farmers, very specific, tailored data for that farmer will be available, based on the integration of digital public infrastructure with Bharat Vistar.

And the third aspect will come when we do the predictive models. We tried that, and you must remember that in the inaugural session the Google CEO mentioned the predictive model which we did for about 3.8 crore farmers. We used 100 years of IMD data and a model to predict the monsoon for the next month and for the next week, and that prediction was fairly accurate. We got the feedback that farmers did take the decision to sow and to irrigate based on the predictive model which was sent.

And now we will expand the predictive models to provide more advisories on the market situation and the weather situation, which will help improve the decision-making of the farmers so that they can increase their productivity and reduce their costs. So that is the whole idea of AI in agriculture. And we hope that more and more farmers will adopt it. It will be not exactly a replacement but a sort of addition to the human extension services, which we find are not able to reach the farmers because of the resource constraints of each state. For the extension machinery, the KVKs or our state extension machineries, it is very difficult to reach each and every farmer, because of the fact that we can't have a person sitting in each village reaching out to each farmer.

But AI, along with digital public infrastructure and the mobile and internet penetration in the various rural areas, will ensure that that gap is removed and we get more and more access to the farmers.

Vikas Chandra Rastogi

…a model that provides just-in-time support to central and state governments, enabling them to experiment, iterate, and scale AI solutions responsibly.

Johannes Zutt

Thanks very much for those questions, and thank you also for the invitation to be here today. So we're on the cusp of a major revolution in how support to farmers and agriculture happens. I actually grew up on a farm; I worked on a farm from the ages of 10 to 21. I think every hour that I wasn't in school and was actually at home, I was working on the farm. In some ways, it feels paleolithic, because we didn't have computers. We had telephones that were connected to wires, and our ability to get information about what was happening around us was extremely limited. We spent a lot of time trying to find out the things that today you can find out very, very quickly using AI for agriculture.

And that’s truly right. evolutionarily empowering for farmers. But, you know, to make that work for farmers, there’s a lot of things that need to go right. And I think it’s worth reflecting a little bit on the different roles that different actors in the ecosystem have, starting obviously with government. My colleague mentioned a number of these things earlier. The government’s responsibility is principally on foundations, communications, things like the governance of AI, the interoperability, obviously ensuring that educational programs include appropriate types of skilling in the use of digital services. This is a big challenge in countries like India, where frankly there are still people who don’t have sufficient literacy to read what comes over a basic smartphone ensuring that the research and extension…

Thank you. that is provided through these small AI platforms is credible, is trustworthy, is backed by science. I think that’s also extremely important. Of course, farmers will find out if they aren’t, but at high expense, right? So we want to make sure that they’re not being advised to do things that are negative for them. And then also looking at the costs of service, the connectivity, what does the farmer actually need to be able to link into these different types of platforms that give information? Because, of course, we’re often also talking about farmers who have very, very few assets and who may be essentially unable to stay permanently connected or who are not able to stay permanently connected.

They’re not able to stay permanently connected or even easily connected to the Internet. They’re going to have very basic smartphones, et cetera. So the government has a lot of… of work to do in all of those areas. Then you can look at what can the private sector do. Now, one thing that the government needs to do is encourage crowded and private sector capacity and capital. But once we turn to the private sector, what is the private sector’s principal advantage? I think that there’s a lot of creativity in the private sector. So the actual applications that are being developed are being developed by individuals in the private sector with a passion for specific sorts of issues that are constraining farmer success.

And that creativity will result in a number of different applications that will be aimed, in most cases, at helping farmers overcome certain hurdles that they face. And we can kind of let a thousand flowers bloom there and see what actually takes root. And it's amazing what you start to see. Just yesterday I was learning about an application in Morocco, developed by a tomato farmer, which is able to give advice about how much water tomato plants need simply from a picture of the current tomato plant. Take a picture and it tells you how much water you actually need to give this plant, which obviously in a water-stressed environment is vital, vital information. And then there are roles for institutions like my own, the World Bank Group, which can help to provide some of the financing that helps develop these applications, and also the foundational backbone for artificial intelligence.

And we can also play a role at the advisory end, where we help to truth-test, if you like, the information coming through the different applications coming out of the AI sandbox in different contexts, to make sure that it is actually providing information that is useful to the end beneficiary and is enhancing productivity at the farm level.

Vikas Chandra Rastogi

Thanks. I think you have rightly pointed out the role of innovation and research. And what we see is that we require high-quality, robust data to build upon. As the Honorable Chief Minister mentioned, Maha AgEx is one step in that direction, wherein we bring diverse data sets and make them accessible to researchers, academic institutions, departments, and also startups. Many of these startups will be showcasing their innovations at the AI for Agri conference in Mumbai, so we request all of you to please come and see for yourselves what kind of excitement they have and what kind of solutions they offer. I have one supplementary question for you. How do you see platforms such as…

the AI Impact Summit, as well as the AI for Agri Global Conference, contributing to deeper global collaboration and South-South knowledge exchange in this domain?

Johannes Zutt

Thank you for that additional question. I mean, obviously, India is in a great position to lead the development of AI, particularly for developing countries where there are still significant challenges helping poor people to escape poverty permanently. India has demonstrated digital innovation for a long period of time already. It's got an enormous population with a huge variety. The challenges of bringing farmer-appropriate data to the farmers' fingertips in India are… I was going to say India is a microcosm of the rest of the world. It's hardly a microcosm; it's so huge. But because you have so many languages, so many different regions, so many different types, so many different cultures, and the starting conditions at the farm level are so incredibly varied, figuring out how to make AI at the farm level work in India will automatically have a large number of spillover learnings for other countries around the world.

And because India, after China and the United States, is the country in the world that is best positioned actually to push all of this work forward, and because it is itself a developing country, it's very, very clear that it will have a central role to play in South-South learning for those reasons.

Vikas Chandra Rastogi

Thank you so much. I move on to Dr. Swaminathan. Dr. Swaminathan, your father, Professor M.S. Swaminathan, played a historic role in shaping India's agriculture transformation during the Green Revolution, ensuring food security at a critical juncture in our history. Today, as we speak of a new phase of transformation driven by AI, we are again at an inflection point. You have consistently championed science-based policy, sustainability, and the empowerment of women farmers. With 2026 being recognized internationally as the year of women farmers, how can we ensure that AI-led agriculture transformation strengthens women's agency, knowledge access, and climate resilience? And what institutional safeguards and design principles must we embed today so that this new technological revolution becomes equitable, farmer-centric, and grounded in scientific integrity?

Dr. Soumya Swaminathan

Thank you very much for that question, Vikasji. Not only is this year the International Year of the Woman Farmer, but we know that agriculture itself is increasingly being feminized, with many men actually leaving farming to the women and migrating out to the cities for other opportunities. So it is really essential to put women at the center of all that we are discussing. And I think the Chief Minister today gave us a wonderful vision of what the future can be, provided, of course, like you said, that there are the guardrails, there are the institutions, there are the safeguards and the design principles that we think about from the very beginning.

So my father, Professor M.S. Swaminathan, used to say that the Green Revolution was not only about the seeds. Of course, the seeds played a very big role, you know, the high-yielding varieties. But it was about the entire ecosystem and the institutions that were developed at that time, which included the outreach (later on, of course, the Krishi Vigyan Kendras were developed), but also the access to credit, the water, the fertilizers, the education and empowerment. And it ultimately became a success because farmers realized its potential and took it on. What he used to say is that no technology is inherently pro-poor or pro-rich, or pro-women or against women; it's how we use that technology.

So it’s really, like you said, the inflection point today is how do we use this very powerful technology that’s come to us. So I think there are a few points here to make sure particularly that women farmers are not left behind. The first important fact is that women in India, the minority of them who have their name on the land document, so mostly it is in the man’s name, and Deveshji was telling me today that this is improving and that the latest census shows that perhaps at least a quarter of the properties are also in the name of women, either jointly or – but that still means that, you know, three -fourths of them don’t have.

And a system that operates basically on publicly available data will then leave out those whose data sets are not available. So I think it would be really important at the early stages itself to think about how women's data can be incorporated, because the algorithms are fed by the data we have. And so all of these advisories may be very suitable for a man who's operating a tractor on a farm, but not at all relevant for a woman who's still working with outdated instruments and trying to, you know, till her land. And particularly when we look at more remote areas, tribal areas, women do a lot of the agriculture, like millets, for example.

Mostly it is women who grow millets. And there mechanization is still completely absent; it is all still very much done using traditional methods and tools, and it involves a lot of drudgery. So I would say that one of the benchmarks I would look at is: is it reducing the drudgery and the workload on women farmers? Is AI helping to do that? So I think we also need to think about that. We also need to look at certain indicators for success. And you mentioned science. I'm a medical researcher, and the way that we evaluate products is by doing clinical trials, by examining the data and the evidence, and then recommending them for wider use.

So again, a note of caution: as we roll it out, we certainly need innovation, but we also need to do the evaluation, looking at inherent biases, looking at who is being excluded, looking at whether there are unanticipated risks or side effects that we didn't know about. But most of all, it's this inclusion. We don't want those who are already left behind to be further left out. So I think the ongoing research and data collection and feedback loops matter, and most importantly, having the voices of those for whom we are developing all this. In the room today, I don't think we have any farmers or women farmers. So we are all discussing from what we know.

But if you’re the farmer, like you were saying, working there and you know the constraints and the which you’re working. So I think the women farmers and farmers in general must have a role. They must be part of these committees that evaluate or make recommendations or make suggestions on improvement. It has to be an iterative process. I think any technology is as good as the application for which it’s developed. I’ll give you one example of an app that the MS Farminathan Research Foundation developed for fisher women. We had a very successful app for fishermen called the Fisher Friendly Mobile App that won the UN Tech for Nature Award last year. But fisher women were as usual left out.

And so the Women Connect app actually gives them, on a tablet, the information that they need to sell, because once the fishermen have come back from sea, it is the women who have to do all of the post-harvest work; and the same is true for crops or fruits or vegetables as well. So there is that connection to the market, and of course the information about pests and pathogens, when to buy what, and what inputs to use, but also being able to organize themselves. There are many FPOs now, and FPCs and SHGs made up of women farmers; empowering them and giving them the knowledge and tools matters. And the last thing I would say is we still need humans in the loop.

I don’t think we should think that completely making everything run by machines is going to solve our problems. I think it’s risky there. And in a country like India, we also need employment. And so we should think of, and I don’t know how many of you have seen this film called Humans in the Loop. But it’s a tribal woman from Jharkhand who actually raises questions about the algorithm. It’s a very thought -provoking film. So I think Humans in the Loop is going to be important. We have our Krishis, Sakhis and so on. We need to empower them with these. So I think AI and all these digital tools, if they’re used in addition to the traditional knowledge and wisdom that people have and augment it and give them at the right time, at the right place, the knowledge they need, I think we can go a very long way.

Thank you.

Vikas Chandra Rastogi

Thank you, madam. You have rightly pointed out the need to be more sensitive while developing systems, to ensure inclusivity, and to ensure that those for whom they are being developed are in the loop and are being consulted. In fact, the feedback mechanism that we have developed in Mahavistar takes care of those requirements. I am also very happy to share that the Government of Maharashtra and the M.S. Swaminathan Research Foundation are working together on some of these issues: how to bring women's rights in farming to center stage, how we create biohappiness using our universities and educational systems, and what kind of nutritional security we must look for, because we have food security, but it is nutritional security that we must aspire to.

We are happy to have support and assistance from MSSRF in that direction. My final question is to Mr. Shankar Maruwada. Mr. Shankar, EkStep has played a foundational role in shaping India's DPI landscape through open-source platforms such as Sunbird, which has powered large-scale systems like Diksha and Mahavistar, and open network initiatives built on the Beckn protocol. These efforts have demonstrated how open standards and interoperable architecture can enable the population-scale transformation that we are already seeing today. As we now enter the era of AI-driven public systems, how should we think about standardizing AI-based ecosystems in a similar spirit? How can we bring DPI into AI? And what architecture and governance principles are required to ensure interoperability, trust and sustainability in AI deployments across sectors such as agriculture?

Shankar Maruwada

Again, a whole lot of questions, but I'll make my best attempt to answer them. More than 100 years ago, the world faced what was known as the Malthusian crisis, when Malthus, the economist, predicted that if we continued to grow in the same way we would run out of land and run out of soil. We were a billion and a half then; we are eight billion now. Most of us may not even have heard of the Malthusian crisis. What happened? Someone called Haber and someone called Bosch created a miracle: Haber synthesized ammonia using high pressure and temperature, and Bosch put it into an industrial process. That phenomenon is now historically known as pulling bread out of air. It took a lot of effort and, as Soumya said, the creation of a massive ecosystem. Germany, which pioneered this, lost that race to the US, because the US did a better job of diffusing the technology safely to the farmers.

They created the discipline of agricultural engineering. They created institutions like the Fertilizer Development Center. They held technology demonstrations for farmers to show them how synthetic ammonia could be used. By the way, 50% of the nitrogen in our body comes from synthetic ammonia; that's a fact. We owe a lot to Haber and Bosch. China then took it on in the 80s, buying 10 big plants from Kellogg, training 300 million farmers, showing them how to use synthetic fertilizers. They went on to be the global leaders in agriculture.

India is at a point where, if we learn the lessons from such past experiences, our Green Revolution and our DPI experience, we are at a pivotal moment where the equivalent of pulling bread out of thin air is pulling intelligence from the earth and providing it to the farmer. This is again not science fiction. Mahavistar, the pioneer, along with Bharatvistar, have taken the first steps toward this. Mahavistar was designed, to build on what Soumya has said, with inclusion in mind; inclusion and diversity were not an afterthought. Because to solve not just Maharashtra's problems but for India's scale and diversity, we need to think of the last person, the most discriminated against, in the remotest part of India, and design systems that work for them. We call that DPI.

Now let me give you a specific example of this. In Bharatvistar, right from the beginning, the design spec was that an illiterate farmer (to build on Johannes's point about digital literacy), with a feature phone, not a smartphone, should be able to talk in his or her native language and native dialect (Marathi itself has many dialects), talk on the phone the way she is comfortable talking to another person, ask a question, have a conversation, and get answers. That process took us the better part of nine months. Why? Because it's not just AI: it's data, it's processes, it's training the farm extension workers, it's having trust that this will work. What about the costing? Will I blow up my entire state budget on a model? Do I have autonomy? Can I switch models in and out? These are very, very difficult questions. It took a partnership with a whole lot of people. The Government of Maharashtra led the effort, but the IndiaAI Mission, Bhashini, IIT Madras, IIIT Hyderabad, the World Bank, Google and many other providers each chipped in their little part of the solution.

Now, here’s the best part. Because we all collaboratively invested in figuring out a solution there, that solution could be deployed in Bharat Vistar with more confidence easily. Again, the same challenges that Secretary Chaturvedi talked about, do we have the data? He used a very nice phrase, digital red tapism, right? Our data is in different formats. What matters is the intent of the government. The government of India, which triggered the process, which allowed Bharat Vistar to be launched the day before, it’s a start. Data will get better, the systems will get better, usage will improve, that will generate more data, and then over time, years, the ecosystem will be built. This we know from our experience.

What makes this happen? What is the secret sauce, the design principles? It is the same as DPI; what worked for DPI, we are taking those same principles. One: open, interoperable systems. Think networks, not just portals and platforms and siloed, fragmented systems. What's the best example of this? The railways in India. We have such a vast landscape, but the rails are common. Every state can decide what it wants to move: private, public, defense, farming. The Indian Railways is just providing a backbone. That allows everyone to do this. There was a time when we had different rail gauges, right? Now, that sounds so silly, but there was a time like that. India is showing that we don't have to repeat those early mistakes in digital also.

By creating interoperable networks based on open protocols like Beckn, and by collaborating with each other, one of us is bringing in data, somebody is bringing in technology, somebody is bringing in policy, somebody is bringing in research. These collaborative open networks, with the launch of Bharat Vistar, put India in a very unique and responsible position. Unique because we have these open rails and the experience of DPI. Responsible because it is a start. Unlike the technologies of the past, where you perfect the technology and then deploy it, with AI you deploy something minimal to start, and then the models get better, the data gets better, usage gets better, and it improves over time. That is the unique junction we are at in India.

What will that mean? When ICAR plugs into this network with its weather and pricing data, the network makes it available to any state that wishes to turn on the supply from ICAR. When the private sector comes out with a very innovative app, say the tomato example that John talked about, any state can say, “I like that; I will have it made available to my farmers.” The farmers already trust the state; they can go to the same app and now see this there as well. If the tomato-app makers want, they can go directly to each farmer, but that is very expensive. So shared rails allow us to spread innovation and diffuse it very quickly through society, keeping in mind both inclusion and rewarding innovation, because innovation has to be rewarded.

And I want to end with a very simple analogy. When Edmund Hillary climbed Mount Everest, he made a lot of people believe it is possible. When Mahavistar was launched, it made the country believe that it is possible to make AI serve the farmer. And to that extent, the responsibility that Mahavistar, the Maharashtra government, and the Government of India have is to create these pathways for the rest of the country, for the other states. At EkStep Foundation, we made a declaration two days ago: we would like to see a world by 2030 where there are hundreds, hundreds of such diffusion pathways, each created by a different set of people, in different sectors, in different countries and continents, but each inspiring different AI pathways to safe impact at scale. It’s a very exciting vision, a very collaborative vision. If we all get together, we can create miracles in our own lifetime. Thank you.

Vikas Chandra Rastogi

With that profound thought we’ll conclude today’s panel discussion. I thank all the panelists; they have really opened a new vision in front of all of us. We invite all of you to the AI for Agri conference in Mumbai on the 22nd. Thank you so much. We don’t actually have time for questions; the next session is about to start, so we can discuss that there. Thank you. Thank you.

Related Resources: Knowledge base sources related to the discussion topics (16)
Factual Notes: Claims verified against the Diplo knowledge base (4)
Confirmed (high confidence)

“Vikas Chandra Rastogi served as the moderator/host and is Secretary of the Ministry of Agriculture and Farmers’ Welfare, Government of Maharashtra.”

The knowledge base identifies Vikas Chandra Rastogi as the session moderator and as Secretary of the Ministry of Agriculture and Farmers’ Welfare, confirming his role in the discussion [S2] and [S1].

Confirmed (medium confidence)

“Maharashtra’s leadership under Chief Minister Devendra Fadnavis was highlighted as a concrete example of the AI‑driven agricultural vision.”

A source praises the leadership taken by Maharashtra and its agriculture department, acknowledging the chief minister’s role, which supports the claim about state leadership, though the chief minister’s name is not specified [S11].

Confirmed (high confidence)

“AI can provide hyper‑local weather forecasts, early pest warnings, precision irrigation and other advisory services for farmers.”

The World Meteorological Organization notes that AI is recognised for revolutionising weather forecasts and early-warning systems, confirming the claim that AI can deliver hyper-local weather and pest-related advisories [S86].

Additional Context (low confidence)

“Agriculture is a defining challenge for the Global South, with issues such as climate volatility and fragile supply chains.”

Additional context from the knowledge base highlights affordability, rural connectivity and reliability as key challenges in the Global South, adding nuance to the broader statement about systemic agricultural challenges [S84].

External Sources (86)
S1
AI for agriculture Scaling Intelegence for food and climate resiliance — -Vikas Chandra Rastogi: Secretary of Ministry of Agriculture and Farmers Welfare, Government of Maharashtra – leads the …
S2
AI Meets Agriculture Building Food Security and Climate Resilien — -Vikas Chandra Rastogi- Secretary, Ministry of Agriculture and Farmers’ Welfare, Government of Maharashtra (moderator/ho…
S3
AI Meets Agriculture Building Food Security and Climate Resilien — -Devendra Fadnavis- Honorable Chief Minister of Maharashtra -Devesh Chaturvedi- Secretary, Ministry of Agriculture and …
S4
https://dig.watch/event/india-ai-impact-summit-2026/ai-meets-agriculture-building-food-security-and-climate-resilien — May I invite Dr. Devish Chaturvedi, Secretary, Ministry of Agriculture and Farmers’ Welfare. Sir, please come onto the s…
S5
AI for agriculture Scaling Intelegence for food and climate resiliance — A very good morning to all of you. Shri Devesh Chaturvedi ji, Rajesh Agarwal ji, Vikas Rastogi ji. Mr. Jonas Jett, Srima…
S6
AI Meets Agriculture Building Food Security and Climate Resilien — -Johannes Zutt- Regional Vice President, World Bank
S7
How AI Drives Innovation and Economic Growth — -Johannes Zutt: World Bank representative (referred to as “John” in the discussion)
S8
AI Meets Agriculture Building Food Security and Climate Resilien — Dr. Chaturvedi leads our national effort in agriculture and farmer’s welfare. Mr. Johannes Jett, he is the Regional Vice…
S9
AI for agriculture Scaling Intelegence for food and climate resiliance — -Dr. Soumya Swaminathan: Chairperson of Dr. M.S. Swaminathan Research Foundation – global leader in science, champion fo…
S10
https://dig.watch/event/india-ai-impact-summit-2026/ai-meets-agriculture-building-food-security-and-climate-resilien — Dr. Chaturvedi leads our national effort in agriculture and farmer’s welfare. Mr. Johannes Jett, he is the Regional Vice…
S11
https://dig.watch/event/india-ai-impact-summit-2026/ai-for-agriculture-scaling-intelegence-for-food-and-climate-resiliance — So we are happy to have support and assistance from MSSRF in that direction. My final question is to Mr. Shankar Maruwad…
S12
AI Meets Agriculture Building Food Security and Climate Resilien — – Dr. Soumya Swaminathan- Shankar Maruwada Dr. Swaminathan advocates for a cautious, medical research-style evaluation …
S13
AI Meets Agriculture Building Food Security and Climate Resilien — -Devendra Fadnavis- Honorable Chief Minister of Maharashtra
S14
AI for agriculture Scaling Intelegence for food and climate resiliance — – Devendra Fadnavis- Dr. Soumya Swaminathan
S15
AI for food systems — Supporting Smallholder Farmers and Vulnerable Communities
S16
Digital Policy Perspectives — It points out advancements in artificial intelligence (AI) development for non-English languages, facilitating the prote…
S17
Leveraging AI4All_ Pathways to Inclusion — Three interconnected pillars needed: design, access, and investment
S18
Leaders TalkX: The Connectivity Imperative: Laying the Foundation for Inclusive Information Access — Reinforcement of governance, human capital development, and infrastructural enhancements constitute this strategy’s pill…
S19
Artificial intelligence (AI) – UN Security Council — Another session highlighted the need for transparency and accountability in AI algorithms. The speakers advocated for AI…
S20
High-Level Session 3: Exploring Transparency and Explainability in AI: An Ethical Imperative — 1. Trust, safety, and accountability: His Excellency Dr. Abdullah bin Sharaf Alghamdi emphasised the need to focus on th…
S21
Driving Indias AI Future Growth Innovation and Impact — Need for explainability and transparency to build user confidence
S22
Reaching and empowering women with digital solutions in the agricultural last mile — GSMAreleaseda new report on how to reach and empower women with digital solutions in the agricultural last mile. The foc…
S23
Ad Hoc Consultation: Tuesday 6th February, Afternoon session — Aligned with Sustainable Development Goal 5, which advocates for gender equality, and Sustainable Development Goal 10, w…
S24
Leaders’ Plenary | Global Vision for AI Impact and Governance- Afternoon Session — “Thank you, Mr. Taneja, for the $5 billion pledge that you have taken.”[68]. “and we’re heavily doubling down on our inv…
S25
Top investor urges boards to strengthen AI competency — Norway’s $1.7 trillion sovereign wealth fund, one of the world’s largest investors, iscallingfor improved AI governance …
S26
Media Briefing: Unlocking ASEAN’s Digital Future – Driving Inclusive Growth and Global Competitiveness / DAVOS 2025 — Christophe De Vusser: Thanks a lot, and it’s been an honor for us to work together with the GAF on this report. And of…
S27
Turbocharging Digital Transformation in Emerging Markets: Unleashing the Power of AI in Agritech (ITC) — Another critical aspect to consider is the management of national agricultural data. Currently, there are challenges rel…
S28
Open Forum #26 High-level review of AI governance from Inter-governmental P — 1. Governments: Responsible for balancing innovation and security, and creating appropriate regulatory frameworks. Andy…
S29
State of Play: AI Governance / DAVOS 2025 — Arthur Mensch: I would say I think we can we can split responsibilities in between industries and governance. The firs…
S30
Digital Public Infrastructure, Policy Harmonization, and Digital Cooperation — Marie Ndé Sene Ahouantchede explains that ECOWAS views public digital infrastructure as built on three pillars: payment …
S31
Panel Discussion AI in Digital Public Infrastructure (DPI) India AI Impact Summit — “One of the things we’ve often looked at is having open data sets that can train AI engines would be an important way of…
S32
The Foundation of AI Democratizing Compute Data Infrastructure — This connects AI democratization to broader digital infrastructure development, suggesting that individual data empowerm…
S33
How to believe in the future? — Additionally, Ahmed stresses the importance of supporting smallholder farmers and women in agriculture, recognising thei…
S34
Leveraging AI to Support Gender Inclusivity | IGF 2023 WS #235 — By engaging users and technical communities, policymakers can gain valuable insights and perspectives, ultimately leadin…
S35
Al and Global Challenges: Ethical Development and Responsible Deployment — Donny Utoyo:and online safety vulnerability, especially for women and children. As AI rapidly transform our lives digita…
S36
WSIS High-Level Dialogue: Multistakeholder Partnerships Driving Digital Transformation — Lastly, the analysis hearteningly acknowledges the necessity for multi-sector stakeholder participation in policymaking….
S37
Digital Cooperation and Empowerment: Insights and Best Practices for Strengthening Multistakeholder and Inclusive Participation — Development | Economic Twenty years ago there was an assumption that the private sector would solve access issues in re…
S38
High Level Leaders Session 3 | IGF 2023 — Garza advocates for reinforcing the multi-stakeholder model in internet policy and regulation. However, she notes that t…
S39
Digital divides &amp; Inclusion — Overall, the analysis highlights the significant challenges posed by the cost of rural broadband connectivity and the ab…
S40
AI, Data Governance, and Innovation for Development — Sade Dada discusses the need for unique funding models to improve connectivity in rural areas. She suggests considering …
S41
Launch of the eTrade Readiness Assessment of Mongolia (UNCTAD) — However, there are still challenges that need to be addressed. Mongolia faces issues such as rural connectivity, interne…
S42
AI for agriculture Scaling Intelegence for food and climate resiliance — “We are building a statewide interoperable agriculture data exchange based on open standards and strong data governance….
S43
AI Meets Agriculture Building Food Security and Climate Resilien — Chief Minister Devendra Fadnavis presented Maharashtra’s Maha Agri AI Policy 2025-2029, emphasizing the shift from demon…
S44
Turbocharging Digital Transformation in Emerging Markets: Unleashing the Power of AI in Agritech (ITC) — Interoperability of systems, both within countries and among countries, is crucial for efficient data management in agri…
S45
Ad Hoc Consultation: Tuesday 6th February, Afternoon session — Aligned with Sustainable Development Goal 5, which advocates for gender equality, and Sustainable Development Goal 10, w…
S46
Public access evolutions – lessons from the last 20 years — Maria Garrido:so this is important context to support of public access and support of the role of public libraries libra…
S47
Measuring Gender Digital Inequality in the Global South — However, it emphasized the importance of elevating the role of leadership in women’s businesses and policymaking towards…
S48
Reflections on teaching science-policy engagement — Science-policy relationships in the 21st century occur within a complex interplay of cognitive, social, and institutiona…
S49
Multistakeholder Partnerships for Thriving AI Ecosystems — All of this was only possible because we had this collaboration between the government with the technical partner who go…
S50
AI for Bharat’s Health_ Addressing a Billion Clinical Realities — Policy frameworks and public vs private sector dynamics
S51
Open Forum #53 AI for Sustainable Development Country Insights and Strategies — The participant explains that India is following the same successful approach used for DPI development, where basic buil…
S52
The fading of human agency in automated systems — Crucially, a human presence does not guarantee agency if the system is designed around compliance rather than contestati…
S53
Building the Next Wave of AI_ Responsible Frameworks &amp; Standards — “human in the loop is a first class feature not a failure point … design the system … transition … to a human”[79]…
S54
AI Automation in Telecom_ Ensuring Accountability and Public Trust India AI Impact Summit 2026 — “We need this automation to have an element of human control that is so that the system does not run away with its own d…
S55
Catalyzing Global Investment in AI for Health_ WHO Strategic Roundtable — He argues that AI should augment clinicians while keeping humans central to decision‑making, acknowledging the difficult…
S56
Open Forum #33 Building an International AI Cooperation Ecosystem — **Professor Dai Li Na** from the Shanghai Academy of Social Sciences presented a comprehensive case study of Shanghai’s …
S57
Health diplomacy — Partnerships and Alliances: The practice of health diplomacy involves a multi-stakeholder approach to solving global hea…
S58
Revisiting 10 AI and digital forecasts for 2025: Predictions and Reality — Additionally,public-private partnershipsare essential for scaling sustainability initiatives. Companies invest in on-sit…
S59
Artificial intelligence (AI) – UN Security Council — Algorithmic transparency is a critical topic discussed in various sessions, notably in the9821st meetingof the AI Securi…
S60
High-Level Session 3: Exploring Transparency and Explainability in AI: An Ethical Imperative — Gong Ke: Thank you. Based on the observation of my Institute in the past years to the Chinese practices, I think there a…
S61
AI as critical infrastructure for continuity in public services — Resilience, data control, and secure compute are core prerequisites for trustworthy AI. Systems must stay operational an…
S62
Driving Indias AI Future Growth Innovation and Impact — Need for explainability and transparency to build user confidence
S63
AI for Good – food and agriculture — Dongyu Qu: Excellencies, ladies, gentlemen, good morning. A year ago, we all gathered for the Previous AI for Good Summi…
S64
AI for agriculture Scaling Intelegence for food and climate resiliance — Maharashtra’s strategic approach represents a shift from pilot projects to population-scale implementation. The state’s …
S65
AI Meets Agriculture Building Food Security and Climate Resilien — Chief Minister Devendra Fadnavis presented Maharashtra’s Maha Agri AI Policy 2025-2029, emphasizing the shift from demon…
S66
AI for Good Impact Awards — Farmer Chat by Digital Green is described as a scalable AI platform that focuses on improving small-scale farmer livelih…
S67
Digital Public Infrastructure, Policy Harmonization, and Digital Cooperation — Marie Ndé Sene Ahouantchede explains that ECOWAS views public digital infrastructure as built on three pillars: payment …
S68
Collaborative AI Network – Strengthening Skills Research and Innovation — Garg frames AI itself as a possible digital public infrastructure that must be trusted, interoperable and shareable, dra…
S69
Creating digital public infrastructure that empowers people | IGF 2023 Open Forum #168 — Aishwarya Salvi:you you you you hello everyone, a warm welcome to you all who have joined us in this room and also to ev…
S70
Transforming Agriculture_ AI for Resilient and Inclusive Food Systems — Right. So we. Really need to think how. the AI or model which we are really developing that is applicable to the grassro…
S71
Women in the digital economy: driving the usage of digital technology among women (UNCDF) — Building the skills and providing access to resources for women remains a crucial area of focus. A young Zambian woman i…
S72
Al and Global Challenges: Ethical Development and Responsible Deployment — Donny Utoyo:and online safety vulnerability, especially for women and children. As AI rapidly transform our lives digita…
S73
High Level Leaders Session 3 | IGF 2023 — Garza advocates for reinforcing the multi-stakeholder model in internet policy and regulation. However, she notes that t…
S74
WS #51 Internet &amp; SDG’s: Aligning the IGF &amp; ITU’s Innovation Agenda — Umut Pajaro Velasquez: Okay, before to answer that, we actually have to remember that there are some core elements of …
S75
Open Forum #76 Digital for Development: UN in Action — Addressing disinformation requires collaboration between multiple stakeholders including tech companies, public official…
S76
Transforming Rural Governance Through AI: India’s Journey Towards Inclusive Digital Democracy — The discussion acknowledged significant operational challenges including infrastructure limitations, training requiremen…
S77
Launch of the eTrade Readiness Assessment of Mongolia (UNCTAD) — However, there are still challenges that need to be addressed. Mongolia faces issues such as rural connectivity, interne…
S78
AI, Data Governance, and Innovation for Development — Sade Dada discusses the need for unique funding models to improve connectivity in rural areas. She suggests considering …
S79
Collaborative Innovation Ecosystem and Digital Transformation: Accelerating the Achievement of Global Sustainable Development Goals (SDGs) — Capacity Building and Skills Development Development | Infrastructure James George Patterson identified critical deman…
S80
(Day 6) General Debate – General Assembly, 79th session: morning session — The General Assembly debate revealed deep divisions on many global issues while also emphasizing the continued importanc…
S81
Open Forum: A Primer on AI — Artificial Intelligence is advancing at a rapid pace In conclusion, Himanshu Gupta’s work showcases the transformative …
S82
Searching for Standards: The Global Competition to Govern AI | IGF 2023 — Additionally, bioethics is highlighted as a concrete example of how a multi-level governance model can function successf…
S83
National Strategy for Artificial Intelligence — Developing the agricultural sector will be on the industry’s own terms and initiative. However, the government will prov…
S84
Taking Stock — Specifically mentioned affordability, rural connectivity, and reliability as key challenges in global south The same sp…
S85
Lightning Talk #173 Artificial Intelligence in Agrotech and Foodtech — Alina Ustinova: Hello, everyone. My name is Alina. I represent the Center for Global IT Cooperation, and today I want to…
S86
World Meteorological Organization — WMO recognises the potential power of Artificial Intelligence to revolutionise weather forecasts and early warnings. WMO…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
D
Devendra Fadnavis
6 arguments · 92 words per minute · 957 words · 621 seconds
Argument 1
AI as a catalyst for food security, climate resilience and farmer incomes (Devendra Fadnavis)
EXPLANATION
He argues that agriculture faces mounting climate and resource challenges, and that AI can transform the sector by providing precise, timely information that enhances food security, builds climate resilience, and improves farmer incomes.
EVIDENCE
He outlines the pressures on food systems-climate volatility, falling water tables, deteriorating soil health and fragile supply chains-while noting that agriculture is a livelihood and security issue for the Global South [38-42][45-46]. He then lists AI capabilities such as hyper-local weather forecasts, pest alerts, precision irrigation, credit scoring, traceable supply chains and real-time market advisories that can address these challenges [53].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The role of AI in enhancing food security and climate resilience is highlighted in the discussion of AI capabilities for hyper-local weather, pest alerts and precision irrigation in [S2], and reinforced by the broader analysis of AI supporting smallholder farmers in [S15].
MAJOR DISCUSSION POINT
AI’s role in strengthening food security and farmer livelihoods
AGREED WITH
Johannes Zutt, Vikas Chandra Rastogi, Dr. Soumya Swaminathan
Argument 2
Maha Agri AI 2025‑2029 policy and Mahavistar platform delivering multilingual advisory services at scale (Devendra Fadnavis)
EXPLANATION
He presents the Maha Agri AI policy as a strategic framework that uses the Mahavistar mobile platform to provide personalized, multilingual advisory services to millions of farmers, thereby scaling AI benefits across the state.
EVIDENCE
He describes Mahavistar as an AI-powered mobile platform delivering multilingual personalized advisories, market intelligence, pest alerts and access to government services, with more than 2.5 million downloads and acting as a digital friend to farmers [57]. He emphasizes that this demonstrates farmer readiness for AI and the platform’s wide reach [58-60].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Mahavistar’s multilingual, AI-powered advisory services are described in [S2], while the importance of AI development for non-English languages and inclusive innovation is documented in [S16].
MAJOR DISCUSSION POINT
Scaling AI advisory services through policy and technology
Argument 3
Four strategic pillars: responsible governance, open interoperable infrastructure, investment, and gender‑inclusive design (Devendra Fadnavis)
EXPLANATION
He outlines four pillars that will guide AI deployment in agriculture: responsible governance, open and interoperable digital infrastructure, investment to scale solutions, and gender‑inclusive design to ensure women farmers benefit.
EVIDENCE
He explicitly lists the four pillars-responsible governance, open interoperable digital infrastructure, investment, and gender equity-as the foundation for AI in agriculture, noting 2026 as the International Year of Women Farmers [76-78].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The four pillars are enumerated by the speaker in [S2]; a complementary framework of design, access and investment pillars for inclusive AI is presented in [S17].
MAJOR DISCUSSION POINT
Strategic framework for responsible AI in agriculture
Argument 4
Necessity of trusted, transparent, auditable and explainable AI to achieve scale and public confidence (Devendra Fadnavis)
EXPLANATION
He stresses that AI must be built on trusted data, governed ethically, and be transparent, auditable and explainable; otherwise large‑scale adoption will not occur.
EVIDENCE
He cites the Prime Minister’s reminder that AI must be built on trusted data, ethical governance and public accountability, warning that without trust scale will not happen [55-57]. He later reiterates responsible governance as a strategic pillar [76].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The Prime Minister’s reminder on trusted data, ethical governance and public accountability appears in [S2]; international calls for AI transparency, accountability and explainability are made in [S19], [S20] and [S21].
MAJOR DISCUSSION POINT
Building trust and accountability for AI adoption
AGREED WITH
Johannes Zutt, Shankar Maruwada, Devesh Chaturvedi
Argument 5
Policy emphasis on gender equity as a core pillar and invitation to invest in women‑centric agri‑tech solutions (Devendra Fadnavis)
EXPLANATION
He highlights gender equity as an essential component of the AI agenda and calls on investors to fund solutions that specifically address the needs of women farmers.
EVIDENCE
He mentions gender equity as a mantra within the four strategic pillars and points out that 2026 is designated the International Year of Women Farmers, urging investment in women-focused agri-tech [76-78].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Gender equity as a strategic pillar is stated in [S2]; detailed analysis of barriers and opportunities for women in digital agriculture is provided in [S22] and reinforced by the gender-equality focus of [S23].
MAJOR DISCUSSION POINT
Promoting gender‑inclusive AI investment
AGREED WITH
Dr. Soumya Swaminathan, Vikas Chandra Rastogi, Shankar Maruwada
Argument 6
Call for venture capital, impact investors, multilateral banks and corporate innovators to scale AI platforms and traceability modules (Devendra Fadnavis)
EXPLANATION
He invites a broad range of financing partners to collaborate with Maharashtra in scaling AI advisory platforms, co‑developing traceability digital public infrastructure, and supporting agri‑tech startups.
EVIDENCE
He explicitly invites venture capital funds, impact investors, multilateral development banks, corporate innovation arms and philanthropic foundations to partner in scaling AI platforms and traceability DPI modules [78-80]. He further underscores Maharashtra’s readiness to collaborate with governments, investors and researchers [85-86].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
An explicit invitation to VC funds, multilateral development banks and corporate innovators is recorded in [S2]; large-scale investment pledges for AI ecosystems are discussed in [S24].
MAJOR DISCUSSION POINT
Financing the scale‑up of AI solutions in agriculture
AGREED WITH
Johannes Zutt, Shankar Maruwada, Vikas Chandra Rastogi
D
Devesh Chaturvedi
2 arguments · 174 words per minute · 1183 words · 406 seconds
Argument 1
Government‑led integration of AI into the Agri‑Stack through Bharatvistar, consolidating weather, pest, market and scheme information (Devesh Chaturvedi)
EXPLANATION
He describes Bharatvistar as the first integrated AI‑based system that aggregates weather forecasts, crop and pest advisories, market rates and government scheme information into a single platform for farmers.
EVIDENCE
He explains that Bharatvistar provides services via an Android app and mobile telephony, delivering weather advisories, ICAR-based crop advisories, pest alerts, market price information and details of government schemes on a unified platform [119-124].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Bharatvistar’s integrated AI-based services for weather, pest alerts, market rates and scheme data are outlined in [S2]; the need for unified agritech data platforms is examined in [S27].
MAJOR DISCUSSION POINT
Unified AI platform for comprehensive farmer services
Argument 2
Creation of unique farmer IDs and a unified Agri‑Stack to eliminate “digital red‑tapism” and enable consent‑driven data exchange (Devesh Chaturvedi)
EXPLANATION
He outlines the development of a nationwide farmer ID system and a consolidated Agri‑Stack that removes fragmented applications, reduces bureaucratic friction, and allows consent‑based data sharing across schemes.
EVIDENCE
He notes that close to 9 crore farmer IDs have been created, each linked to land, crops, soil health and scheme eligibility, enabling a single-click, consent-driven access to services and eliminating the “digital red-tapism” caused by multiple siloed apps [135-140]. He further details how the ID will power tailored AI advice based on location, crop and soil data within three to six months [136].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The rollout of 9-crore farmer IDs and a consent-driven Agri-Stack is described in [S2]; challenges of fragmented agricultural data and the benefits of a unified stack are discussed in [S27].
MAJOR DISCUSSION POINT
Unified farmer identity and data exchange to streamline services
AGREED WITH
Devendra Fadnavis, Shankar Maruwada, Vikas Chandra Rastogi
J
Johannes Zutt
3 arguments · 146 words per minute · 934 words · 381 seconds
Argument 1
Government responsibility for AI governance, digital literacy, connectivity and ensuring scientific credibility of advisory content (Johannes Zutt)
EXPLANATION
He argues that governments must set AI governance standards, ensure interoperability, promote digital literacy, improve connectivity, and guarantee that AI‑driven advisories are scientifically sound.
EVIDENCE
He lists the government’s duties in AI governance, communications, interoperability, and education, emphasizing the challenge of low literacy and limited connectivity, and stresses the need for credible, science-backed advisory content [154-166].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The need for government-led AI governance, connectivity and digital literacy is emphasized in [S18]; broader responsibilities of governments in AI governance are detailed in [S28].
MAJOR DISCUSSION POINT
State’s role in enabling trustworthy AI services
AGREED WITH
Devendra Fadnavis, Shankar Maruwada, Devesh Chaturvedi
Argument 2
Private‑sector creativity fuels diverse AI applications; “let a thousand flowers bloom” approach encourages experimentation (Johannes Zutt)
EXPLANATION
He highlights the private sector’s innovative capacity, encouraging a multitude of independent applications to emerge and be tested, allowing the best solutions to flourish.
EVIDENCE
He notes that private-sector creativity leads to many applications, advocating a “let a thousand flowers bloom” approach that lets diverse ideas be tried and see which take root [170-174].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The “let a thousand flowers bloom” mantra encouraging private-sector experimentation is quoted in [S2]; discussions on balancing innovation and regulation are present in [S29].
MAJOR DISCUSSION POINT
Encouraging private‑sector experimentation in AI for agriculture
DISAGREED WITH
Shankar Maruwada
Argument 3
World Bank’s financing, AI sandbox and truth‑testing to ensure solutions are productive and safe for farmers (Johannes Zutt)
EXPLANATION
He describes how the World Bank can provide financing, an AI sandbox for development, and a truth‑testing function to validate that AI tools are effective and safe for end‑users.
EVIDENCE
He mentions the World Bank Group’s role in financing AI applications, providing a foundational AI backbone, and helping truth-test information from various apps to ensure productivity and safety for farmers [178-180].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The World Bank’s role in financing, providing an AI backbone and truth-testing of applications is highlighted in [S2]; the same theme of multilateral support for safe AI solutions appears in the broader session summary in [S2].
MAJOR DISCUSSION POINT
Multilateral support for safe and effective AI solutions
AGREED WITH
Devendra Fadnavis, Shankar Maruwada, Vikas Chandra Rastogi
Dr. Soumya Swaminathan
2 arguments · 176 words per minute · 1140 words · 387 seconds
Argument 1
Women’s land‑ownership gaps risk exclusion; data systems must deliberately capture women’s holdings to avoid bias (Dr. Soumya Swaminathan)
EXPLANATION
She points out that most land titles are in men’s names, which means women are often invisible in data‑driven AI systems; therefore, data collection must be designed to include women’s land holdings to prevent bias.
EVIDENCE
She cites census data showing only about a quarter of properties list women (jointly or alone), leaving three-quarters excluded, and warns that a system based on publicly available data would miss those women unless their holdings are deliberately captured [219-224].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The risk of women’s exclusion due to land-title gaps is echoed in the gender-focused report on women’s digital inclusion in agriculture in [S22] and the policy brief on gender equality in [S23].
MAJOR DISCUSSION POINT
Ensuring women’s land rights are reflected in AI data systems
AGREED WITH
Devendra Fadnavis, Vikas Chandra Rastogi, Shankar Maruwada
DISAGREED WITH
Devesh Chaturvedi
Argument 2
AI solutions should reduce women’s drudgery, improve market access and be co‑designed with women farmers (Dr. Soumya Swaminathan)
EXPLANATION
She argues that AI must be tailored to alleviate the physical workload of women, enhance their market participation, and involve them directly in the design and evaluation of technologies.
EVIDENCE
She notes that AI should reduce drudgery for women, especially in tribal and remote areas where they grow millets using traditional tools, and suggests measuring success by workload reduction [225-232]. She also cites the Women Connect app (an extension of the Fisher Friendly Mobile App), which provides market and post-harvest information to fisher women and supports women-led FPOs, FPCs and SHGs [247-254].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Recommendations for AI to lessen women’s workload and involve them in design are supported by the findings on women’s barriers to digital agriculture in [S22] and the broader discussion of AI for vulnerable communities in [S15].
MAJOR DISCUSSION POINT
Designing AI to empower women farmers and lessen their workload
AGREED WITH
Devendra Fadnavis, Vikas Chandra Rastogi, Shankar Maruwada
Shankar Maruwada
3 arguments · 134 words per minute · 1271 words · 567 seconds
Argument 1
Open, federated architecture (Maha AgEx) and open‑source standards (Sunbird, Beacon) as the backbone for population‑scale AI services (Shankar Maruwada)
EXPLANATION
He emphasizes that an open, federated architecture like Maha AgEx, built on open‑source standards such as Sunbird and Beacon, provides the interoperable foundation needed for AI services to reach millions.
EVIDENCE
He describes the need for open, interoperable systems and networks rather than siloed portals, citing open protocols like Beacon as the backbone for shared data exchange [300-304]. Earlier he references Sunbird as an open-source platform that powers large-scale systems such as Mahavistar [272-274].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The use of open protocols such as Beacon and the Sunbird open-source platform as foundational infrastructure is described in [S2].
MAJOR DISCUSSION POINT
Open standards as the infrastructure for scalable AI
AGREED WITH
Devendra Fadnavis, Devesh Chaturvedi, Vikas Chandra Rastogi
Argument 2
Design principles of open, interoperable networks and continuous model iteration to maintain trust and sustainability (Shankar Maruwada)
EXPLANATION
He outlines that AI systems should be built on open, interoperable networks and be continuously refined through iterative modeling, ensuring ongoing trust, reliability and long‑term sustainability.
EVIDENCE
He explains that the same design principles that guided DPI (open, interoperable networks, iterative model improvement, and trust-building) are applied to AI, stressing that models start minimal and improve as data and usage grow, thereby maintaining trust and sustainability [298-317].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Open, interoperable networks and iterative model improvement as trust-building measures are outlined in [S2]; governance best-practices for AI model iteration are also referenced in [S28].
MAJOR DISCUSSION POINT
Iterative, open design for trustworthy AI ecosystems
AGREED WITH
Devendra Fadnavis, Johannes Zutt, Devesh Chaturvedi
Argument 3
Open‑network model enables rapid diffusion of successful private‑sector apps across states, fostering global South‑South learning (Shankar Maruwada)
EXPLANATION
He argues that a shared, open‑network (the “rails”) allows states to adopt successful private‑sector applications quickly, promoting diffusion of innovation and South‑South knowledge exchange.
EVIDENCE
He gives the example that when a private-sector app (e.g., the tomato-water-need app) proves effective, any state can adopt it through the common network, enabling rapid scaling and South-South learning [320-327].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The example of a tomato-water-need app scaling through a common open network is given in [S2]; cross-jurisdiction diffusion of AI solutions is discussed in [S29].
MAJOR DISCUSSION POINT
Leveraging open networks for cross‑state and cross‑country AI diffusion
Vikas Chandra Rastogi
3 arguments · 102 words per minute · 1602 words · 934 seconds
Argument 1
Moderator’s endorsement of the policy framework and call for collaborative implementation (Vikas Chandra Rastogi)
EXPLANATION
He thanks the chief minister, reiterates the importance of the AI policy, and calls on all stakeholders to work together to implement AI solutions in agriculture.
EVIDENCE
He thanks the chief minister for his visionary address, introduces the panel, and poses a question about central-state collaboration to ensure AI aligns with national architecture while allowing state innovation, thereby urging collaborative implementation [87-94][112-113].
MAJOR DISCUSSION POINT
Championing collaborative rollout of AI policies
Argument 2
Mahavistar’s feedback loop and data‑exchange mechanisms that feed researchers, startups and policymakers (Vikas Chandra Rastogi)
EXPLANATION
He highlights that Mahavistar incorporates a feedback mechanism that captures farmer inputs and shares data with researchers, startups and policymakers, supporting continuous improvement and innovation.
EVIDENCE
He notes that the feedback mechanism built into Mahavistar addresses inclusivity and ensures that data flows back to stakeholders for refinement [268-269], and earlier he references Maha AgEx as a platform that brings diverse data sets together for researchers and innovators [181-184].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Mahavistar’s built-in feedback mechanism for continuous data sharing with innovators is mentioned in [S2]; the importance of feedback-driven data ecosystems for agritech is highlighted in [S27].
MAJOR DISCUSSION POINT
Feedback‑driven data sharing to fuel AI innovation
AGREED WITH
Devendra Fadnavis, Devesh Chaturvedi, Shankar Maruwada
Argument 3
Collaborative projects with MSSRF to place women’s rights and nutritional security at the centre of AI deployments (Vikas Chandra Rastogi)
EXPLANATION
He announces partnership with the M. S. Swaminathan Research Foundation to integrate women’s rights and nutritional security considerations into AI‑driven agricultural initiatives.
EVIDENCE
He states that the Government of Maharashtra and MSSRF are working together on issues such as bringing women’s rights in farming to the centre, creating bio-happiness through universities, and focusing on nutritional security beyond mere food security [269-271].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The partnership focus on women’s rights and nutrition aligns with the gender-focused digital inclusion insights in [S22] and the policy emphasis on gender equality in [S23].
MAJOR DISCUSSION POINT
Integrating gender and nutrition priorities into AI agriculture projects
AGREED WITH
Devendra Fadnavis, Dr. Soumya Swaminathan, Shankar Maruwada
Agreements
Agreement Points
Open, interoperable digital infrastructure and data exchange are essential for scaling AI in agriculture
Speakers: Devendra Fadnavis, Devesh Chaturvedi, Shankar Maruwada, Vikas Chandra Rastogi
Four strategic pillars: responsible governance, open interoperable digital infrastructure, investment, and gender‑inclusive design (Devendra Fadnavis) Creation of unique farmer IDs and a unified Agri‑Stack to eliminate “digital red‑tapism” and enable consent‑driven data exchange (Devesh Chaturvedi) Open, federated architecture (Maha AgEx) and open‑source standards (Sunbird, Beacon) as the backbone for population‑scale AI services (Shankar Maruwada) Mahavistar’s feedback loop and data‑exchange mechanisms that feed researchers, startups and policymakers (Vikas Chandra Rastogi)
All speakers stress that a common, open, consent-driven data backbone – from farmer IDs and the Agri-Stack to the Maha AgEx and Mahavistar feedback loop – is the prerequisite for delivering AI services at scale and for enabling research, innovation and public-sector coordination [76-78][135-140][300-304][272-274][268-269][181-184].
POLICY CONTEXT (KNOWLEDGE BASE)
This aligns with state-wide interoperable agriculture data exchange initiatives that emphasize open standards and strong data governance [S42] and with Maharashtra’s Maha Agri AI Policy calling for a federated architecture to move from pilots to ecosystem scale [S43]; broader literature also stresses interoperability as key for efficient AI-driven decision-making in agriculture [S44].
AI systems must be built on trusted, transparent, auditable and explainable foundations to achieve scale and public confidence
Speakers: Devendra Fadnavis, Johannes Zutt, Shankar Maruwada, Devesh Chaturvedi
Necessity of trusted, transparent, auditable and explainable AI to achieve scale and public confidence (Devendra Fadnavis) Government responsibility for AI governance, digital literacy, connectivity and ensuring scientific credibility of advisory content (Johannes Zutt) Design principles of open, interoperable networks and continuous model iteration to maintain trust and sustainability (Shankar Maruwada) Digital red‑tapism and the need for trustworthy data exchange across schemes (Devesh Chaturvedi)
The speakers converge on the need for AI that is trustworthy – built on reliable data, governed ethically, auditable and explainable – with the government setting standards, ensuring literacy and connectivity, and iterative open-network design to sustain confidence [55-57][76][154-166][158-162][298-317][122-129][135-140].
POLICY CONTEXT (KNOWLEDGE BASE)
Policy discussions on algorithmic transparency and explainability underpin this view, as highlighted by AI Security Council deliberations on transparency [S59], high-level sessions on responsible AI deployment [S60], and calls for explainability in AI systems to build user trust [S62]; the notion of trustworthy AI as critical infrastructure further reinforces the requirement for auditable and transparent foundations [S61].
Gender equity and the inclusion of women farmers are central to AI‑driven agricultural transformation
Speakers: Devendra Fadnavis, Dr. Soumya Swaminathan, Vikas Chandra Rastogi, Shankar Maruwada
Policy emphasis on gender equity as a core pillar and invitation to invest in women‑centric agri‑tech solutions (Devendra Fadnavis) Women’s land‑ownership gaps risk exclusion; data systems must deliberately capture women’s holdings to avoid bias (Dr. Soumya Swaminathan) AI solutions should reduce women’s drudgery, improve market access and be co‑designed with women farmers (Dr. Soumya Swaminathan) Collaborative projects with MSSRF to place women’s rights and nutritional security at the centre of AI deployments (Vikas Chandra Rastogi) Open‑network design must consider the most discriminated users, implicitly including women in remote areas (Shankar Maruwada)
All parties underline that AI must be gender-inclusive: policies must embed women’s equity, data collection must capture women’s land rights, AI tools should lessen women’s workload and be co-designed with them, and partnerships are being forged to embed women’s rights and nutrition into AI projects [76-78][219-224][225-232][247-254][269-271][300-304].
POLICY CONTEXT (KNOWLEDGE BASE)
The emphasis on gender equity mirrors commitments to Sustainable Development Goal 5 and Goal 10 on reducing inequalities [S45] and is reinforced by analyses of gender digital inequality that call for inclusive policymaking and digital skills development for women [S47]; public-access initiatives also highlight the importance of gender equity in digital transformation [S46].
Public‑private partnership and investment are essential to scale AI platforms, traceability modules and innovative agri‑tech solutions
Speakers: Devendra Fadnavis, Johannes Zutt, Shankar Maruwada, Vikas Chandra Rastogi
Call for venture capital, impact investors, multilateral banks and corporate innovators to scale AI platforms and traceability modules (Devendra Fadnavis) World Bank’s financing, AI sandbox and truth‑testing to ensure solutions are productive and safe for farmers (Johannes Zutt) Open‑network model enables rapid diffusion of successful private‑sector apps across states, fostering South‑South learning (Shankar Maruwada) Invitation to AI Impact Summit and AI for Agree Global Conference for startups to showcase solutions (Vikas Chandra Rastogi)
The consensus is that scaling AI requires coordinated financing and collaboration: governments invite VC, impact funds and multilateral banks; the World Bank offers financing and validation; open networks allow private apps to spread quickly; and conferences provide platforms for innovators to connect [78-80][178-180][320-327][181-186][187-188].
POLICY CONTEXT (KNOWLEDGE BASE)
Multistakeholder partnership models have been identified as critical for thriving AI ecosystems, with governments collaborating with technical partners and private firms to build capacity [S49]; similar PPP frameworks are described in India’s AI strategy where the state provides infrastructure while the private sector develops applications [S51]; broader sustainability initiatives also stress PPPs for scaling AI solutions [S58] and discuss public-vs-private dynamics in AI policy [S50].
AI is a catalyst for improving food security, climate resilience and farmer incomes
Speakers: Devendra Fadnavis, Johannes Zutt, Vikas Chandra Rastogi, Dr. Soumya Swaminathan
AI as a catalyst for food security, climate resilience and farmer incomes (Devendra Fadnavis) We are on the cusp of a major revolution in how support to farmers and agriculture is happening (Johannes Zutt) Using AI for Food and Climate Resilience (Vikas Chandra Rastogi) AI‑led agriculture transformation should strengthen women’s agency, knowledge access and climate resilience (Dr. Soumya Swaminathan)
All speakers agree that AI can transform agriculture by delivering hyper-local weather, pest alerts, precision inputs and market information, thereby enhancing food security, climate adaptation and farmer livelihoods [38-42][53][144-152][9-12][15-16][202-204].
POLICY CONTEXT (KNOWLEDGE BASE)
Strategic documents on AI for agriculture position AI as a driver for food and climate resilience, noting the need for interoperable data exchanges that empower rather than exploit farmers [S42]; this framing aligns with policy narratives that link AI deployment to enhanced food security and climate-adaptive farming.
Similar Viewpoints
Both emphasize that AI must be embedded within a national, open, interoperable architecture (the Agri‑Stack) to ensure alignment across central and state levels while allowing local innovation [76-78][119-124].
Speakers: Devendra Fadnavis, Devesh Chaturvedi
Four strategic pillars: responsible governance, open interoperable digital infrastructure, investment, and gender‑inclusive design (Devendra Fadnavis) Government‑led integration of AI into the Agri‑Stack through Bharatvistar, consolidating weather, pest, market and scheme information (Devesh Chaturvedi)
Both see the private sector as a source of innovative AI solutions that should be allowed to proliferate through shared open networks, enabling rapid scaling and cross‑jurisdiction learning [170-174][320-327].
Speakers: Johannes Zutt, Shankar Maruwada
Private‑sector creativity fuels diverse AI applications; “let a thousand flowers bloom” approach encourages experimentation (Johannes Zutt) Open‑network model enables rapid diffusion of successful private‑sector apps across states, fostering South‑South learning (Shankar Maruwada)
Both highlight the importance of a feedback‑driven data ecosystem that links farmer‑level data to research and service delivery, reducing fragmentation and improving service relevance [268-269][135-140].
Speakers: Vikas Chandra Rastogi, Devesh Chaturvedi
Mahavistar’s feedback loop and data‑exchange mechanisms that feed researchers, startups and policymakers (Vikas Chandra Rastogi) Creation of unique farmer IDs and a unified Agri‑Stack to eliminate “digital red‑tapism” and enable consent‑driven data exchange (Devesh Chaturvedi)
Unexpected Consensus
Strong alignment on gender equity between a policy‑focused political leader and a scientific researcher
Speakers: Devendra Fadnavis, Dr. Soumya Swaminathan
Policy emphasis on gender equity as a core pillar and invitation to invest in women‑centric agri‑tech solutions (Devendra Fadnavis) Women’s land‑ownership gaps risk exclusion; data systems must deliberately capture women’s holdings to avoid bias (Dr. Soumya Swaminathan) AI solutions should reduce women’s drudgery, improve market access and be co‑designed with women farmers (Dr. Soumya Swaminathan)
While one speaker frames gender equity as a policy and investment priority and the other as a technical and rights-based concern, both converge on the necessity of embedding women’s rights and data visibility into AI systems – an alignment that bridges political and scientific domains unexpectedly [76-78][219-224][225-232].
POLICY CONTEXT (KNOWLEDGE BASE)
The convergence of policy leadership and scientific advocacy on gender equity reflects commitments articulated in SDG-aligned statements (e.g., Cabo Verde’s stance on gender equality) [S45] and research highlighting the necessity of inclusive digital policies for women’s empowerment [S47].
Overall Assessment

The panel demonstrates a high degree of consensus: all participants agree on the need for open, interoperable digital infrastructure; trustworthy AI governance; gender‑inclusive design; public‑private financing; and AI’s transformative potential for food security and climate resilience.

Strong consensus across policy, technical, scientific and development perspectives, indicating a unified strategic direction that can facilitate coordinated action, attract investment and accelerate scalable AI deployment in agriculture.

Differences
Different Viewpoints
How to ensure women farmers are included in AI‑driven agricultural data systems
Speakers: Dr. Soumya Swaminathan, Devesh Chaturvedi
Women’s land‑ownership gaps risk exclusion; data systems must deliberately capture women’s holdings to avoid bias (Dr. Soumya Swaminathan) Creation of unique farmer IDs and a unified Agri‑Stack to eliminate “digital red‑tapism” (Devesh Chaturvedi)
Dr. Swaminathan stresses that because most land titles are in men’s names, AI systems that rely on existing public data will miss three-quarters of women farmers unless data collection is deliberately designed to capture women’s land holdings [219-224]. Chaturvedi describes the farmer-ID system as built on existing records (land, crops, soil health) without addressing the gender gap, implying that the system will inherit the same exclusionary bias [135-140]. This creates a disagreement over whether the current ID-based approach is sufficient or whether additional gender-focused data collection is required.
POLICY CONTEXT (KNOWLEDGE BASE)
Discussions on gender-focused data inclusion draw on SDG-5 commitments and analyses of gender digital inequality that stress the need for targeted data collection and inclusive policymaking for women farmers [S45][S47]; public-access literature also underscores gender equity as a core consideration in digital infrastructure design [S46].
Preferred model for scaling AI applications: private‑sector “flower‑bloom” experimentation vs. open, federated public infrastructure
Speakers: Johannes Zutt, Shankar Maruwada
Private‑sector creativity fuels diverse AI applications; “let a thousand flowers bloom” approach encourages experimentation (Johannes Zutt) Open, federated architecture (Maha AgEx) and open‑source standards (Sunbird, Beacon) are the backbone for population‑scale AI services (Shankar Maruwada)
Zutt advocates a model where many private innovators develop independent applications that are later vetted and scaled, emphasizing creativity and market-driven solutions [170-174]. Maruwada argues for a common, open-source, interoperable network that allows any successful private app to be adopted across states, emphasizing shared standards and public-good infrastructure [300-304][320-327]. While both aim for wide deployment, they differ on whether scaling should be driven primarily by private-sector experimentation or by a centrally coordinated open-network model.
POLICY CONTEXT (KNOWLEDGE BASE)
Policy debates contrast private-sector experimental models with federated public architectures, as seen in Maharashtra’s Maha Agri AI Policy advocating a shift to a federated ecosystem [S43] and broader analyses of public-vs-private sector dynamics in AI governance [S50]; PPP models that blend both approaches are also documented [S51][S58].
Extent to which AI should replace or augment traditional extension services
Speakers: Dr. Soumya Swaminathan, Johannes Zutt
AI must be used in addition to traditional knowledge; humans must remain in the loop to avoid risks and preserve employment (Dr. Soumya Swaminathan) AI can provide rapid, science‑backed advisories, reducing the need for traditional extension mechanisms (Johannes Zutt)
Swaminathan warns that fully automated AI could displace human extension workers and stresses the importance of keeping humans in the loop, citing risks of bias and the need for employment [255-259]. Zutt focuses on AI’s ability to deliver timely, scientifically credible information directly to farmers, without explicitly addressing the role of human extension agents [152-166]. This reflects a disagreement on how much AI should supplant versus supplement existing extension services.
POLICY CONTEXT (KNOWLEDGE BASE)
The balance between AI augmentation and replacement echoes health sector discussions where AI is recommended to augment clinicians while keeping humans central to decision-making [S55]; similar concerns about human agency in automated systems are raised in AI governance literature [S52].
Unexpected Differences
Gender‑focused data inclusion vs. reliance on existing administrative data
Speakers: Dr. Soumya Swaminathan, Devesh Chaturvedi
Women’s land‑ownership gaps risk exclusion; data systems must deliberately capture women’s holdings to avoid bias (Dr. Soumya Swaminathan) Creation of unique farmer IDs and a unified Agri‑Stack to eliminate “digital red‑tapism” (Devesh Chaturvedi)
While both speakers support the broader AI agenda, Swaminathan’s explicit call for gender‑sensitive data collection was not reflected in Chaturvedi’s description of the farmer‑ID system, which assumes existing records are sufficient. This gap was not anticipated given the overall consensus on digital infrastructure, making it an unexpected point of contention.
POLICY CONTEXT (KNOWLEDGE BASE)
The tension mirrors policy analyses that advocate for gender-specific data collection to address digital inequality versus using existing administrative datasets, as highlighted in SDG-aligned gender equity reports [S45] and studies on gender digital inequality [S47].
Human‑in‑the‑loop requirement versus AI‑centric service delivery
Speakers: Dr. Soumya Swaminathan, Johannes Zutt
AI must be used in addition to traditional knowledge; humans must remain in the loop to avoid risks (Dr. Soumya Swaminathan) AI can provide rapid, science‑backed advisories, reducing the need for traditional extension mechanisms (Johannes Zutt)
Swaminathan’s emphasis on maintaining human oversight and employment contrasts with Zutt’s portrayal of AI as a primary conduit for delivering advisory services, a tension that was not overtly highlighted elsewhere in the discussion.
POLICY CONTEXT (KNOWLEDGE BASE)
Multiple sources stress that human-in-the-loop should be a first-class feature rather than a token safeguard, emphasizing accountability and control in automated systems [S52][S53][S54].
Overall Assessment

The panel largely shares a vision of leveraging AI to improve food security, climate resilience, and farmer incomes, but key disagreements emerge around gender‑inclusive data design, the balance between private‑sector experimentation and open public infrastructure, and the degree to which AI should replace traditional extension services. These divergences reflect differing priorities—social equity, architectural openness, and employment preservation—within a common technical goal.

Moderate. While consensus exists on the need for AI, open data, and responsible governance, the identified disagreements could affect implementation timelines and policy design, especially concerning gender inclusion and the governance model for scaling AI solutions.

Partial Agreements
All three speakers agree that a trustworthy, open, and interoperable data infrastructure is essential for scaling AI in agriculture, but they differ on the primary mechanism: Fadnavis emphasizes policy pillars and governance, Chaturvedi focuses on a national farmer‑ID based stack, and Maruwada stresses open‑source, federated networks and shared protocols. The shared goal is a reliable data foundation, while the routes to achieve it diverge.
Speakers: Devendra Fadnavis, Devesh Chaturvedi, Shankar Maruwada
Four strategic pillars: responsible governance, open interoperable infrastructure, investment, gender‑inclusive design (Devendra Fadnavis) Creation of unique farmer IDs and a unified Agri‑Stack to eliminate “digital red‑tapism” (Devesh Chaturvedi) Open, federated architecture (Maha AgEx) and open‑source standards as backbone for AI services (Shankar Maruwada)
Both speakers concur that trustworthy AI requires strong governance and transparent data, yet Fadnavis highlights the need for auditability and public accountability as prerequisites for scale [55-57][76], whereas Zutt concentrates on the government’s role in setting standards, ensuring connectivity, and validating scientific soundness of advisories [154-166]. The agreement is on the importance of governance; the difference lies in the specific governance actions emphasized.
Speakers: Devendra Fadnavis, Johannes Zutt
AI must be built on trusted data, ethical governance, transparency, auditability (Devendra Fadnavis) Government responsibility for AI governance, interoperability, digital literacy, and scientific credibility (Johannes Zutt)
Takeaways
Key takeaways
AI is positioned as a strategic lever to enhance food security, climate resilience and farmer incomes in India.
Maharashtra has adopted the Maha Agri AI 2025‑2029 policy, scaling the Mahavistar platform to over 2.5 million farmers with multilingual advisory services.
Four strategic pillars guide the effort: responsible AI governance, open interoperable digital infrastructure, investment and scaling, and gender‑inclusive design.
The central government’s Agri‑Stack (farmer IDs, unified data exchange) is being integrated with AI‑driven services such as Bharatvistar to eliminate fragmented “digital red‑tapism”.
Open, federated architectures (Maha AgEx, Sunbird, Beacon) are the backbone for population‑scale AI deployment and data sharing across states, research institutions and startups.
Trusted, transparent, auditable and explainable AI is essential for public confidence and large‑scale adoption.
Women farmers risk exclusion due to land‑ownership and data gaps; AI solutions must be co‑designed to reduce drudgery, improve market access and embed gender safeguards.
Private‑sector innovation is encouraged (“let a thousand flowers bloom”), with the World Bank and other multilateral partners offering financing, sandbox testing and truth‑checking of AI outputs.
South‑South knowledge exchange is a priority; the AI 4 Agree conference will serve as a platform for global collaboration and scaling of successful use‑cases.
Resolutions and action items
Scale Mahavistar and Bharatvistar to full statewide coverage, adding support for additional regional languages (including tribal languages) within the next 3‑6 months.
Operationalise the Maha AgEx data‑exchange as a consent‑driven, open federated layer for all agricultural data sets.
Deploy predictive AI models (weather, pest, market) at population scale, building on the successful monsoon‑prediction pilot for 3.8 crore farmers.
Incorporate women’s land‑ownership and farm‑activity data into the Agri‑Stack to avoid gender bias in AI advisories.
Establish a continuous feedback loop within Mahavistar for farmer‑generated data, model validation and iterative improvement.
Launch the AI 4 Agree Global Conference (22‑23 Feb 2026, Mumbai) to showcase AI use‑cases and attract venture capital, impact investors and multilateral funding.
Create a joint steering committee (state, centre, academia, private sector, farmer organisations) to oversee responsible AI governance, standards adoption and auditability.
Encourage private‑sector pilots and provide sandbox funding through the World Bank to test and truth‑test AI applications before wider rollout.
Unresolved issues
Concrete mechanisms for ensuring digital literacy and reliable connectivity for the most marginal, low‑asset farmers remain undefined.
Specific processes for obtaining, verifying and updating women’s land‑ownership data within the farmer‑ID system were not detailed.
Funding models and risk‑sharing arrangements for scaling private‑sector AI startups were discussed but not finalized.
The exact regulatory framework for AI explainability, audit trails and grievance redressal at the state level needs further elaboration.
Metrics and independent evaluation protocols for AI advisory impact (e.g., reduction in drudgery, yield gains) were mentioned but not operationalised.
Details on how the open‑source standards (Sunbird, Beacon) will be governed, versioned and enforced across diverse state implementations were not resolved.
Suggested compromises
Adopt a consent‑driven data‑exchange (Maha AgEx) that balances openness for innovation with privacy protections for farmers.
Allow states flexibility to innovate on local agro‑climatic contexts while aligning with the national Agri‑Stack architecture and standards.
Deploy AI services initially as minimum viable models, iterating based on real‑world feedback rather than waiting for perfect solutions.
Encourage private‑sector experimentation (“let a thousand flowers bloom”) while maintaining a common open‑network backbone to ensure interoperability and prevent siloed solutions.
Thought Provoking Comments
AI is not a magic. As Honorable PM said, AI must be built on trusted data, ethical governance and public accountability. Without trust, scale will not happen.
Highlights that technology alone cannot solve agricultural challenges; stresses the foundational role of trust, ethics, and governance, reframing AI from a purely technical solution to a socio‑technical system.
Set the tone for the rest of the panel, prompting other speakers (e.g., Devesh Chaturvedi and Johannes Zutt) to discuss data governance, interoperability and the need for credible, trustworthy advisory services.
Speaker: Devendra Fadnavis
We are building a statewide interoperable agriculture data exchange based on open standards and strong data governance. Data must empower farmers, not exploit them.
Introduces the concrete policy instrument (Maha AgEx) that links AI to a public‑good data infrastructure, moving the conversation from abstract benefits to a tangible implementation pathway.
Led Devesh Chaturvedi to elaborate on farmer IDs and the Agri‑Stack, and gave Shankar Maruwada a platform to discuss open‑source standards and ‘shared rails’ for AI deployment.
Speaker: Devendra Fadnavis
We started with digitalisation of services, but ended up with ‘digital red‑tapism’ – multiple apps, multiple databases, confusing farmers. The AI‑based system consolidates all advisories, schemes and market info into one platform.
Identifies a critical failure mode of digitisation (fragmentation) and positions AI as a unifying layer, thereby deepening the analysis of why earlier digital efforts fell short.
Shifted the discussion from policy rhetoric to operational challenges, prompting Johannes Zutt to talk about the need for government‑led foundations and private‑sector creativity to avoid such fragmentation.
Speaker: Devesh Chaturvedi
India is a microcosm of the world – its linguistic, climatic and cultural diversity means that solving AI for Indian farmers will generate spill‑over learnings for other developing countries.
Frames India’s scale as an opportunity for global South‑South knowledge exchange, turning the national focus into an international learning platform.
Encouraged the panel to view the AI agenda as globally relevant, leading to references to the AI4Agree conference and reinforcing the call for collaborative, cross‑border pilots.
Speaker: Johannes Zutt
Women own only a minority of land titles; algorithms fed by publicly available data will therefore exclude most women farmers. We must embed women’s data early, evaluate bias like clinical trials, and keep humans in the loop.
Brings gender equity and methodological rigor into the AI conversation, highlighting structural data gaps and the necessity of iterative, inclusive evaluation.
Created a turning point toward inclusion, prompting Vikas Rastogi to mention Mahavistar’s feedback mechanisms and Shankar Maruwada to stress designing for the most marginalized users.
Speaker: Dr. Soumya Swaminathan
The secret sauce is open, interoperable systems – think of Indian Railways as a backbone. Deploy a minimum viable AI solution, let data improve, and let the ecosystem evolve; private innovators can plug into the same ‘shared rails’.
Provides a clear architectural metaphor and a pragmatic rollout strategy (minimum viable product, open standards, iterative improvement), linking past DPI successes to future AI scaling.
Synthesised earlier points on data exchange, trust and inclusion into a concrete design principle, steering the final part of the discussion toward actionable next steps and the vision of a national AI‑enabled public infrastructure.
Speaker: Shankar Maruwada
Overall Assessment

The discussion was driven forward by a series of pivotal remarks that moved the conversation from high‑level optimism to concrete, inclusive, and governance‑aware implementation. Devendra Fadnavis’ emphasis on trust and data governance set the foundational lens, which Devesh Chaturvedi expanded with the ‘digital red‑tapism’ diagnosis. Johannes Zutt reframed the Indian experience as a global learning laboratory, while Dr. Soumya Swaminathan introduced gender‑focused safeguards and the need for rigorous, human‑in‑the‑loop evaluation. Finally, Shankar Maruwada’s analogy of open railways and a minimum‑viable‑AI rollout provided a unifying architectural vision. Together, these comments redirected the panel toward actionable policies, interoperable infrastructure, and equitable design, shaping the overall narrative from aspirational rhetoric to a roadmap for scalable, responsible AI in agriculture.

Follow-up Questions
How can women’s land ownership and tenancy data be systematically captured and integrated into AI advisory platforms to ensure women farmers are not excluded?
Women often lack land titles, leading to their exclusion from data‑driven services; incorporating their data is essential for equitable AI impact.
Speaker: Dr. Soumya Swaminathan
What evaluation frameworks (akin to clinical trials) are needed to assess AI tools for bias, unintended risks, and effectiveness in agriculture?
Ensuring AI recommendations are safe, unbiased, and beneficial requires rigorous testing before large‑scale rollout.
Speaker: Dr. Soumya Swaminathan
How should ‘human‑in‑the‑loop’ mechanisms be designed to combine AI advice with farmer expertise and preserve rural employment?
Balancing automation with human oversight is critical to maintain trust, contextual relevance, and job opportunities in farming communities.
Speaker: Dr. Soumya Swaminathan
What standards and protocols are required to ensure high‑quality, interoperable data for AI models across state and national agriculture systems?
Robust, shared data underpins reliable AI services; defining quality and interoperability standards is a prerequisite for scaling.
Speaker: Vikas Chandra Rastogi
What are effective low‑tech, voice‑based or feature‑phone interfaces for illiterate or low‑resource farmers, especially in diverse dialects?
Many farmers lack smartphones or literacy; designing accessible interfaces is vital for inclusive AI adoption.
Speaker: Johannes Zutt, Shankar Maruwada
How can open, interoperable AI ecosystem standards be created, mirroring the Digital Public Infrastructure (DPI) model, to enable scalable AI deployments?
Standardization will prevent siloed solutions and facilitate rapid diffusion of AI tools across regions and sectors.
Speaker: Shankar Maruwada
What metrics should be used to measure AI’s impact on reducing drudgery and workload for women farmers?
Quantifying benefits to women’s labor can guide policy and ensure AI contributes to gender equity.
Speaker: Dr. Soumya Swaminathan
What integration strategies can prevent ‘digital red‑tapism’—the fragmentation of multiple apps and services—when scaling AI solutions?
Consolidating services into unified platforms is needed to avoid complexity and improve farmer uptake.
Speaker: Dr. Devesh Chaturvedi
How can predictive models (e.g., monsoon forecasts) be continuously validated and improved using real‑time farmer feedback?
Ongoing validation ensures model accuracy and builds farmer trust in AI‑driven advisories.
Speaker: Dr. Devesh Chaturvedi
What mechanisms can accelerate the diffusion of successful AI solutions from one state or country to others, creating “diffusion pathways”?
Understanding how to replicate and scale innovations globally will maximize impact and foster South‑South collaboration.
Speaker: Shankar Maruwada
How can farmer, especially women farmer, participation be institutionalized in committees that evaluate and guide AI tool development?
Direct stakeholder involvement ensures solutions address real needs and enhances legitimacy.
Speaker: Dr. Soumya Swaminathan
What financing models (venture capital, impact investment, multilateral funding) are most effective for scaling AI‑enabled agri‑tech startups?
Sustainable funding is crucial to move from pilots to large‑scale, market‑ready AI solutions.
Speaker: Devendra Fadnavis

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

AI for Social Good: Using Technology to Create Real-World Impact


Session at a glance: Summary, keypoints, and speakers overview

Summary

The panel discussed how AI, when coupled with open digital public infrastructure, can deliver population-scale impact across education, health, and agriculture [1-3][8-11]. James Manyika highlighted AI’s rapid progress and early successes, citing AlphaFold’s protein-structure database that now serves over three million researchers, with India as the fourth-largest user [14-16]. He argued that expanding access requires coordinated digital public infrastructure and open networks, such as India’s UPI, Bhashini, and Google’s Project Vani, which provides free multilingual speech data for over 100 Indic languages [19-24][25]. Nandan Nilekani reinforced that open networks act as general-purpose platforms that let AI agents simplify transactions for users, using UPI as a model and stressing the importance of language localization [77-82][84-95]. Sangbu Kim described the World Bank’s AgriConnect as an open-stack, farmer-centric service that can be extended to health and education, illustrating the need for universal standards [102-108]. Kiran Mazumdar-Shaw outlined a “health stack” built on India’s consent-based data sharing, enabling AI-driven risk profiling, insurance integration, and universal healthcare, while also envisioning AI-augmented biology such as virtual cells [119-128][133-140]. Sunil Wadhwani explained that Digital Public Infrastructure supplies data pipelines and distribution channels that make AI models for TB diagnosis, treatment adherence, and rapid reading assessment scalable and low-cost, reaching millions of patients and students [170-179][190-204][208-218]. He noted growing interest from other low- and middle-income countries to adopt these solutions, with the World Bank and India working to replicate models in Africa, Brazil, and the Philippines [229-233][322-324].
The panel agreed that cheap inference is critical for mass diffusion; Nilekani gave the example of plugging improved weather models into AgriConnect to serve ten million farmers [246-254]. Manyika added that India’s infrastructure allowed Google’s Neural GCM to deliver monsoon forecasts to 38 million farmers, demonstrating the power of open networks [264-268]. In a closing lightning round, participants called for rapid scaling: Nilekani for massive diffusion to farmers, Mazumdar-Shaw for a sustainable universal health-care standard, and others for broader global dissemination [301-308][311-313][328-330]. The moderator concluded that AI’s benefits can only be realized through inclusive open networks and invited further participation via Google.org impact challenges [340-344]. Overall, the discussion underscored that coordinated, decentralized digital public infrastructure combined with multilingual AI agents is seen as the essential foundation for achieving population-scale transformation in education, health, agriculture, and beyond [19-21][36-37][41].


Keypoints

Major discussion points


Open digital public infrastructure (DPI) and interoperable networks are the essential coordination layer that lets AI turn human intent into real-world action at population scale.


James Manyika stresses that “digital public infrastructure and open networks… provide the coordination layer” [19-21]; Nandan Nilekani cites UPI as a model of an open network that enabled massive growth [77-78]; Sunil Wadhwani describes DPI as the “data pipelines” and “distribution channels” that make AI models usable in health and education [168-174].


AI-driven, multilingual agents can remove complexity for users and accelerate diffusion of technology, especially in agriculture, health and education.


Nandan explains that AI agents on open networks “remove complexity for the user” and enable inclusion for farmers and small producers [80-82]; Manyika cites the Gemini-powered agriculture pilot that gives farmers multilingual AI assistance [30-33]; Sunil details AI-based TB diagnosis from a cough and rapid reading-assessment tools for millions of children [196-204][208-214].


Language localization and data-rich “digital stacks” are critical for scaling AI services in India and beyond.


Manyika highlights Project Vani’s speech data for 100+ Indic languages [23-26]; Nandan points out that India’s multilingual agents must handle code-mixing and that initiatives like Bhashini and AI4Bharat are breaking language barriers [84-95]; Kiran Mazumdar-Shaw notes India’s consent-based health data stack that can be leveraged for risk profiling and insurance [122-128].


The cost of AI inference must drop dramatically; low-cost, plug-and-play models are needed for massive diffusion.


Nandan warns that “the cost of AI inference has to drop dramatically” for it to work at scale [246-249]; he gives the example of plugging Google’s improved weather model into an open AgriConnect network to reach millions of farmers [250-254]; Manyika adds that the Indian government’s infrastructure allowed the Neural GCM monsoon model to reach ~38 million farmers [264-268].


India’s open-network models are being packaged as blueprints for global replication, with the World Bank and other partners seeking standardized, scalable approaches.


Sang-Boo Kim describes AgriConnect as a “universal network” that can expand to health and education and that the World Bank is working to replicate the model in other countries [102-108][229-236]; Sunil notes growing interest from governments in the Global South to adopt India-built AI platforms [321-326].


Overall purpose / goal of the discussion


The summit convened global leaders to demonstrate how open, interoperable digital public infrastructure can serve as a “global, interoperable coordination rail” that powers AI-driven solutions for education, healthcare, agriculture and other public services, with the aim of achieving population-scale impact both in India and worldwide [1-4][7][10-12].


Overall tone


The conversation remained upbeat, collaborative and forward-looking. Speakers repeatedly expressed optimism about AI’s transformative potential, celebrated concrete successes (e.g., AlphaFold usage, TB detection, reading-assessment pilots), and used enthusiastic language (“extraordinary,” “powerful,” “holy grail”). Applause and a “lightning-round” at the end reinforced a celebratory, solution-focused atmosphere, with no noticeable shift toward criticism or pessimism throughout the session.


Speakers

James Manyika


– Role/Title: Senior Vice President at Google, leading research, labs, and technology in society; former Co-chair of the UN High-Level Advisory Board on AI.


– Area of Expertise: Artificial Intelligence, AI policy, technology for society. [S8][S9][S10]


Nandan Nilekani


– Role/Title: Co-founder and Chairman of Infosys; Co-founder of Networks for Humanity; Global leader in Digital Public Infrastructure.


– Area of Expertise: Digital public infrastructure, open networks, fintech, AI diffusion. [S3][S4]


Sangbu Kim


– Role/Title: Vice President for Digital and AI at the World Bank.


– Area of Expertise: Digital economy, AI-enabled development, open standards, agriculture, health, education. [S2][S1]


Kiran Mazumdar-Shaw


– Role/Title: Chairperson, Biocon Group; Biotech entrepreneur, healthcare visionary, philanthropist.


– Area of Expertise: Biotechnology, healthcare innovation, AI in health, universal health care. [S14]


Sunil Wadhwani


– Role/Title: Co-founder of the Wadhwani Institute for Artificial Intelligence; Visionary entrepreneur and philanthropist.


– Area of Expertise: AI for social impact, digital public infrastructure, health and education solutions. [S7]


Moderator


– Role/Title: Moderator of the India AI Impact Summit (referred to as “Ashwani” in the opening).


– Area of Expertise: Event facilitation, AI policy discussion.


Additional speakers:


Demis (referenced by Nandan Nilekani, likely Demis Hassabis) – mentioned in discussion about weather models; no explicit role or title provided.


Ashwani – addressed by James Manyika (“Thank you, Ashwani”) and appears to be the moderator’s first name; no further details.


Full session report: Comprehensive analysis and detailed insights

The moderator opened the India AI Impact Summit by stating that real-world AI impact depends on “population-scale” transformation of education, health-care and agriculture, and that such impact can only be achieved through a built-in coordination layer [1-3]. To frame the discussion, James Manyika – Google’s senior vice-president for research, labs and technology in society and former co-chair of the UN High-Level Advisory Board on AI – was introduced as the first speaker [4-7].


Manyika began by affirming Google’s belief that universal access to AI is essential for expanding innovation capacity worldwide [10-11]. He highlighted the rapid technical progress of AI, citing AlphaFold’s breakthrough in solving the 50-year protein-structure problem and the open AlphaFold database now used by more than three million researchers in 190 countries, with India ranking fourth in adoption [14-16]. He argued that to “fully take advantage of this potential” access must be expanded from the outset, warning that the “digital divide must not become an AI divide” [18-19]. According to Manyika, digital public infrastructure (DPI) and open networks provide the coordination layer that translates human intent into real-world action [20-21]; India’s UPI payments system and the Bhashini language-network exemplify this [21-22]. Google’s partnership with the Indian Institute of Science on Project Vani, now in its second phase, has released speech data for over 100 Indic languages – including 20 languages previously undocumented – as a free resource [23-26]. He noted a recent $10 million Google.org grant to the Networks for Humanity Foundation, which is building universal tools such as asset-tokenisation and open-network standards across innovation labs from Singapore to Switzerland [38-41]. Concrete AI-enabled interventions were described: multilingual agents for 1.4 million frontline health workers, AI-driven pest-surveillance for national crops, and an education platform already reaching ten million learners with a target of 75 million students and two million educators by 2027 [43-48]. Manyika illustrated the power of agents with an energy-trading example, showing how a farmer with rooftop solar can sell excess power through a simple AI-driven commerce interface, eliminating the need to understand complex market mechanisms [84-86].


Following Manyika, the panel was introduced: Nandan Nilekani (co-founder and chairman of Infosys and co-founder of Networks for Humanity), Sangbu Kim (World Bank vice-president for digital and AI), Kiran Mazumdar-Shaw (chairperson of Biocon Group) and Sunil Wadhwani (co-founder of the Wadhwani Institute for AI) [52-67].


Nilekani framed AI as a “general-purpose technology” whose fastest diffusion requires open networks that allow many innovators to build applications at the edge [73-80]. He used India’s Unified Payments Interface (UPI) as a prototype of an open architecture that grew into the world’s largest payments system [77-78] and argued that AI agents on such networks “remove complexity for the user”, enabling inclusion for farmers or small-scale electricity producers in their own language [81-82]. He stressed that multilingual capability is the “holy grail” of diffusion, noting initiatives such as the government’s Bhashini, AI4Bharat and Google’s Project Vani that address code-mixing (Hindi-English-Tamil) and aim to eradicate language barriers [84-95]. He warned that if a single AI query costs even 500 rupees, the model would be unaffordable for mass-market use, underscoring the need to drive inference costs down dramatically [242-244].


Kim described the World Bank’s AgriConnect as an “open-stack, farmer-oriented” platform that delivers coherent services through an open network [102-106]. He positioned the shift from supplier-centric to user-centric services as a hallmark of the AI era and suggested that the same open-standard architecture could be extended to health and education, becoming a “universal network” for the future [107-108]. Kim likened the World Bank’s role to that of a sommelier, curating and recommending the most suitable “wine” (i.e., AI-enabled solutions) for each country’s needs [312-315].


Mazumdar-Shaw outlined a vision of a national “health stack” built on consent-based, secure data sharing analogous to UPI [122-124]. She explained that India is aggregating phenotypic, genomic, demographic, radiological and treatment-outcome data, which can be risk-profiled at population scale and linked to insurance products – a task that only AI can perform efficiently [125-128]. She also highlighted the potential of AI-augmented biology: learning from the energy-efficient, distributed computation of cells, re-programming cells, and creating virtual cell models to move from hospital-centric to preventive, community-centric care [133-140]. She warned that data-sharing reluctance outside India remains a barrier [288-292].


Wadhwani explained that DPI supplies two essential functions for AI in the public sector: (i) data pipelines that feed training models, and (ii) distribution channels that deliver inference at scale [170-174]. He illustrated this with a TB-control programme that uses the Nikshay patient-management database to (a) diagnose TB from a cough sound on a smartphone, raising case detection by 25% nationally [196-200]; (b) automate lab results for same-day reporting [202-204]; and (c) predict treatment non-adherence to focus the work of 2,000 caseworkers [204-206]. In education, a 20-second speech-based reading assessment costing five paise per child has been piloted for three million students and is being mandated for eight million more, with plans to reach 75 million by next year via the Rakshak DPI platform [208-218]. He noted growing interest from governments in the Global South to adopt these AI platforms, citing recent engagements with African and Latin-American ministries [321-326].


The conversation then turned to the economics of scaling. Nilekani warned that “the cost of AI inference has to drop dramatically” because per-query fees of even a few rupees would block mass adoption [246-248]. He illustrated how an open network enables plug-and-play of improved models: integrating Google’s latest weather forecasts into AgriConnect would instantly benefit ten million farmers [250-254]. Manyika added that India’s infrastructure allowed Google’s Neural GCM monsoon model to reach roughly 38 million farmers, demonstrating the power of a ready-made coordination rail [260-263].


Across the panel there was strong consensus that open, interoperable DPI is the foundational coordination layer enabling AI to convert intent into action at population scale [1-3][19-21][73-80][170-174]. Speakers agreed that multilingual AI agents are essential for inclusive diffusion [80-82][83][94-95]; that inference costs must be driven down to enable billions of daily queries [246-248][252-254]; and that open standards and open networks together allow new models to be “plugged in” without rebuilding the whole stack [105-106][252-254]. The panel also concurred that India’s open-network blueprints – from UPI to AgriConnect, TB diagnostics and reading-assessment tools – constitute replicable models for other low- and middle-income countries [102-108][229-236][321-326].


While Nilekani highlighted cheap inference as the primary bottleneck, Wadhwani emphasized the equally critical role of DPI-provided data pipelines and distribution channels; together these perspectives outline the full stack of requirements for population-scale AI deployment [246-248][170-174]. A secondary nuance concerned emphasis on “open standards” (Kim) versus “open networks” (Manyika) as the key technical foundation for user-centric services [105-106][19-21].


In the closing “lightning round”, Nilekani called for massive diffusion of AI applications to millions of farmers worldwide [301-308]; Mazumdar-Shaw urged the creation of a sustainable, universal health-care standard built on AI-driven risk profiling [311-313]; Kim pledged to disseminate successful use-cases globally so that developing-world actors can grasp AI’s potential [328-330]; and Wadhwani reaffirmed India’s role as a model for the world, noting the Prime Minister’s call to “develop in India, deliver to the world” [317-326].


The moderator wrapped up by reiterating that AI’s true benefits “can only be realised when we build for everyone using open networks” and invited participants to apply for the Google.org Impact Challenges (AI for Science and Government Innovation) and to visit Google’s exhibition booths [340-345]. The session concluded with applause and a group photograph [346-358].


Key take-aways – Open DPI and interoperable networks constitute the essential coordination rail that lets AI translate intent into real-world outcomes; AI-powered multilingual agents remove language and complexity barriers; inference costs must fall dramatically to achieve billions of daily queries; consent-based, open health data stacks enable risk profiling, insurance integration and preventive care; India’s sector-specific pilots (AlphaFold, UPI, Bhashini, Project Vani, AgriConnect, TB cough-analysis, reading-assessment) provide a scalable blueprint for the Global South; and the convergence of biological intelligence with AI promises future breakthroughs in regenerative medicine.


Overall, the discussion was upbeat and collaborative, with senior leaders from Google, the private sector, the World Bank and academia aligning on a shared vision of inclusive, open-network-enabled AI for population-scale transformation, while recognising the need for continued work on inference economics, global data-sharing norms and standard-setting for cross-border replication.


Session transcript: Complete transcript of the session
Moderator

Because we believe that AI’s true potential lies in its ability to deliver population-scale impact, transforming education, healthcare, and agriculture for every citizen. However, that impact can only be possible when there’s coordination that’s built into the system. And therefore, today, we are here joined by global leaders to explore how open networks and digital public infrastructure can create a global, interoperable coordination rail, powered by AI to translate intent into action across borders. To set the stage, it’s my honor to introduce James Manyika. James is the Senior Vice President at Google, leading research, labs, and technology in society. He also served as the co-chair of the UN’s High-Level Advisory Board on AI. James, welcome. The floor is yours to set the stage.

James Manyika

Thank you, Ashwani. Good morning, everyone. It’s a real pleasure and privilege to be back in India and to join all of you here at the India AI Impact Summit. At Google, we believe that access to AI is essential for unlocking opportunities and expanding the innovation capacity for people everywhere. The rapid technological progress that we’re seeing in AI’s development is really quite breathtaking and represents an extraordinary opportunity to solve problems and empower people, power economies, advance science, and tackle some of society’s greatest challenges. Indeed, we’re beginning to see the impact of this, so it’s not just in the future, but we’re already starting to see some of these benefits and impacts materialize today. Take science, for example.

Five years ago, our AlphaFold system, which is our Nobel Prize-winning innovation, solved the 50-year grand challenge of protein structure prediction. And since then, the freely available AlphaFold protein database has been used by more than 3 million researchers in over 190 countries. And in fact, India is actually the fourth largest adopter and user of the protein database, where people are working on a variety of problems, everything from neglected diseases all the way to breeding resistant soya beans, and a whole range of things that are incredibly beneficial to people in India and beyond. But to take full advantage of this potential, we need to collectively expand access right from the beginning. As you may have heard our CEO Sundar Pichai say yesterday, we need to ensure that the digital divide does not become an AI divide.

Digital public infrastructure and open networks are an important part of making this possible. They provide the coordination layer that allows AI to translate human intent into real-world action. And India has been leading the way with systems like UPI and the Bhashini network and infrastructure, bringing the capabilities of AI into the daily lives of people across the country and at population scale. At Google, we’ve been a very committed partner in this journey by helping to build the foundations that help to scale it. For instance, our collaboration with the Indian Institute of Science, and in particular on Project Vani, has now completed its second phase, where we’ve been covering every Indian state, making speech data for over 100 Indic languages available for free.

And we’ve been able to do this through the government of India’s Bhashini mission. In fact, this includes 20 languages that had never been digitally recorded before. We’re now building on these systems in ways that truly attempt to reflect India’s true linguistic and cultural richness and diversity. And we continue to build on our commitment to drive scaled impact at the grassroots level. This commitment to scaled impact is reflected in our recent partnership with the World Bank, and I’m sure we’ll talk about this later today. Together we’re taking a blueprint born right here in India and scaling it by localizing it across the globe to countries from Brazil to Nigeria, Ethiopia, and Kenya.

And the heart of this blueprint began with our partnership with the government of Uttar Pradesh. There we piloted a Gemini-powered open network for agriculture that provides farmers with multilingual AI agents to facilitate everything from credit to crop prediction. By taking the lessons we learned in Uttar Pradesh, where digital tools drove real, measurable impact, we’re proving that a smallholder farmer can compete and execute on the value that they create rather than the platforms that they’re on. This isn’t just a regional success. It’s now a global architecture and a model that can be taken everywhere for global digital inclusion. The success of these networks depends on a single fundamental principle: it must remain decentralized and open.

This is the driving force behind our support for the Networks for Humanity Foundation. Again, one of the things we’ll talk about this morning. And through a $10 million Google.org grant that we announced last year, the Networks for Humanity Foundation is building the universal tools for tomorrow, from the Finternet for asset tokenization to the Beckn open networks. And by establishing innovation labs from Singapore to Switzerland, they’re ensuring that the infrastructure of opportunity is a global standard and not just a local exception. Having this type of infrastructure in place is what will allow all of us to collectively achieve population-scale change. That’s why we’re supporting change makers like Wadhwani AI through Google.org grants that try to embed intelligence directly into the digital rails for millions of Indians to be able to use.

In healthcare, for example, this means empowering something like 1.4 million frontline workers with multilingual AI assistance, providing early warnings to combat child malnutrition across the country. In agriculture, it means integrating AI into the national pest surveillance system to protect India’s most important crops at a national scale. And in education, it means delivering high-quality learning experiences through AI-led transformation of government-owned education and development platforms. And this is an initiative that’s already reached 10 million students and educators with the goal of empowering as many as 75 million students and nearly 2 million educators by the end of 2027.

Ultimately, to fully capture AI's beneficial potential, we must be bold and responsible and committed to building all of this together. We must pursue AI's most ambitious possibilities while ensuring that we build the coordination layer necessary to bridge and close the AI divide. With that, it is now my great pleasure and honor to welcome an extraordinary group of leaders and innovators to the stage, people who have been doing this work for an extraordinarily long time with incredible impact. First, I'd like to invite Nandan Nilekani. Nandan is the…

Nandan Nilekani

Thank you.

James Manyika

Nandan is the co-founder and chairman of Infosys. He's a global leader in digital public infrastructure and the co-founder of Networks for Humanity, an initiative building open, interoperable digital infrastructure for the intelligence age. I should say I've known Nandan for a very long time. When he first told me what he was working on 15 years ago, I'm not quite sure I quite believed him, but here we are. Next, joining us is Sangbu Kim. Sangbu is the World Bank's vice president for digital and AI, leading efforts to drive digital economy growth in developing countries by strengthening infrastructure, cybersecurity, and data privacy, while modernizing government services and touching many areas like health, education, and more.

Our third guest is Kiran Mazumdar-Shaw. As chairperson of Biocon Group, Kiran is a pioneering biotech entrepreneur, healthcare visionary, and passionate philanthropist committed to expanding access to healthcare through affordable innovation. And finally, please welcome Sunil Wadhwani. Sunil is a visionary entrepreneur and philanthropist who co-founded the Wadhwani Institute for Artificial Intelligence to drive systemic social transformation through AI solutions and innovation in public systems across healthcare, education, and agriculture. So we're now going to have a conversation with these extraordinary leaders. Thank you. Nandan, let me start with you. You've been championing decentralized digital ecosystems for a very long time, building open networks, taking things to extraordinary scale in India, and recently with Beckn and the Finternet.

And obviously you bring a lot of credibility to both users of these systems and to regulatory bodies. How do you see AI as a multiplier or a factor as you think about open networks and the kind of transformational change you’ve been pursuing?

Nandan Nilekani

No, I think AI is very fundamental, and I'll explain how open networks and AI come together. What some of us have been thinking about is this: if AI is a general-purpose technology, what is the fastest way of diffusing the use of AI in a productive way for people? And, you know, ultimately all this only makes sense if people's lives improve. And I think we have a lot of experience with open networks. In some sense, UPI was an open network for payments, and the open architecture led to massive growth and it became the world's largest payment system. So a lot of those principles are embedded in Beckn, and we have other examples.

But I think open networks allow many actors, many innovators, to build applications on the edge using AI. We keep talking about agents, but I think the real power of agents is in removing complexity for the user. So if there is a user who is a farmer, or somebody who is producing a little bit of electricity, and they can very easily transact with somebody else through an agent in their own language, then suddenly this is inclusion at massive scale. So I really see AI agents on an open network as the fundamental construct for massive diffusion of technology.

James Manyika

And also the importance, as you mentioned, of doing that in languages, in local languages.

Nandan Nilekani

Oh, totally. You talked about what you're doing at ISE. There are many initiatives in India which essentially are driving to make language completely accessible. Because language is not just pure language. The way Indians speak, they mix English, Hindi, and Tamil in one sentence. So how do you deal with that? How do you recognize that? All that is getting addressed. There are many initiatives: voice AI, the government's Bhashini, AI4Bharat, the Google project. So there's lots of stuff. But fundamentally, I think language as a barrier will go away. So if you combine language, so that a person talks to the agent in their own language, and the agent does some transaction while hiding all the complexity behind it, then, you know, that's the holy grail.

We can get everybody on the system, and that’s how AI will get diffused.

James Manyika

And then, speaking of farmers and agriculture, Sangbu, let me come to you. The World Bank recently launched the AgriConnect initiative. First of all, I'd like you to describe that a little bit. I think it's intended to make what smallholder farmers do much, much more efficient and scalable. But I'm curious: what has that work taught you so far about the type of global standards that are going to be needed to scale local solutions?

Sangbu Kim

So, if you look at AgriConnect in Uttar Pradesh, it is a very farmer-oriented approach to providing coherent and consistent services, built at the same time on an open stack and open network. If you think about the previous waves, from computer innovation to mobile innovation, and now AI innovation, I would interpret this evolution as a shift from a supplier-oriented service environment to a customer- and user-oriented environment. In that sense, an open standard and open network is a really crucial part of ensuring user-centric service. It is a very efficient and affordable solution for the AI era, to fully provide quality of service to the user. So this AgriConnect project is really important, but it is not only an agriculture project.

With that, we are really looking forward to expanding to other sectors like healthcare and education. So it can become a very universal network in the future.

James Manyika

That's pretty powerful. In fact, speaking of healthcare, you mentioned you're taking this to healthcare. Kiran, you've been an incredible advocate and innovator when it comes to thinking about medicine as a whole. And you've talked about this idea that we need to move beyond the industry of medicine. Say more about what you have in mind, about what we need to move to, and in particular, how you can connect what's going on with AI and data sets with fundamentally transforming medicine.

Kiran Mazumdar-Shaw

So I think I have to answer this in two parts. The first part is how we leverage what Nandan refers to as the digital stack into a health stack. That is the first big opportunity we have. And I think India is a country that can uniquely create a global reference model when it comes to the use of AI with the kind of health data that we are collecting. For instance, India is beginning to collect a lot of health data in its health stack: phenotypic, genomic, demographic, and radiological data, and, of course, treatment and treatment-outcome data. Now, when you start collecting this data, the whole objective, again, which is a holy grail, is universal healthcare delivery at scale in a sustainable way: how do we reduce the disease burden and increase lifespan? All of these are big challenges with a very complex set of solutions.

But I think this is a starting point where you get this huge digital stack of health data. And because India has this open-source, consent-based kind of secure data sharing already established with UPI, I think we should quickly apply it to healthcare. And when you do that, you will start risk-profiling your population at a demographic level, which I think is very exciting, and at scale. And if you can integrate insurance into that, it will be even more powerful. That can only be done by AI. So AI has the opportunity to risk-profile very fast, to find interesting insurance models, to see how we can marry the risk profile with the insurance instrument.

Not easy, but I think it's a good challenge, because AI can be given a lot of exclusion-inclusion criteria, which it can adopt. So I personally am very excited about what AI can do for health, digital health, and the whole universal healthcare delivery model. Additionally, of course, India has this unique model of ASHA workers, and if ASHA workers can be empowered with AI, that is even more powerful. So I think deploying AI for the common people, the common man, is very important, to both Nandan and Sangbu's point. Now coming to my excitement about your second question, about what I'm looking at beyond this. Beyond this, I'm looking at advancing medicine using AI.

Now, biology on its own was limited because it didn't have the kind of power of technology to get deeper insights. AI, like what you've just done with AlphaFold and AlphaGenome, is going to give it immeasurable opportunities to understand biology, and to me biological intelligence is just amazing. If you combine it with artificial intelligence and bring about that convergence, I think we are in for a huge transformation. When I look at biological intelligence, when I just look at cell biology: how cells signal, how cells create circuits, how cells regulate, how cells connect and disconnect. I mean, the human body, living systems, have distributed data centers. And these data centers are connecting and disconnecting with sips of energy, not gigawatts of energy.

And they're actually translating that into instant information and decision-making. If we can learn that and apply it to AI, I think it's going to be transformational. I am really looking forward to reprogramming cells, right? That's the holy grail. How do you convert a cancer cell into a non-malignant cell? How do you look at regenerative science? How do you look at lifespan? I mean, your biggest question today, right? How do we shift from hospital-centric care to primary and community care? That can happen with AI, with predictive and preventive medicine. That's, I think, I've said enough. Yeah.

James Manyika

Well, it would actually, in effect, Kiran, it would probably take us from treating diseases to preventing diseases. And I like it. You and I and Demis were talking earlier about this idea that someday we should be able to build virtual cells, models of virtual cells, and be able to do kind of cell-based biology, basically.

Kiran Mazumdar-Shaw

Absolutely. That’s one of the exciting things. It’s very exciting. Yeah.

James Manyika

We'll come back to that. But I want to come to you, Sunil. I'm curious, as you think about what you've been doing, what role do DPI and open networks play in developing and scaling the kinds of solutions to some of society's most pressing problems? I mean, you've been thinking about this for a very long time, from way back, when you set up the AI institutes, way before most people were thinking about these things. But I want to hear what your perspectives and experiences have been.

Sunil Wadhwani

Thanks, James. Good morning. Just so we're all clear, there's a lot of intellectual horsepower on the stage, and it's all on this side of me. I'm basically here for my good looks, so just so we manage expectations. But when I set up Wadhwani AI back in 2018, and the Prime Minister was good enough to come and inaugurate it, we had a huge benefit, which is the following, to your point: over the last 20 years, the government of India has developed a set of DPI, digital public infrastructure, that is broad and that is deep. And this DPI is basically a set of digital building blocks used to build systems of infrastructure that connect policy, program implementation, public service workers, and citizens in the country.

And I'll give you a couple of examples. But this DPI provides two key, very practical, down-to-earth functions and benefits, as I see it. Number one, it provides data and data pipelines. And for AI, you couldn't build AI for the social sector without the kind of data and the data pipelines that it provides. Secondly, this DPI provides distribution channels, so that once your inference models are ready, these AI platforms, again developed and managed by government, provide a distribution channel to get our AI models out at scale. Without these, trust me, the usage of any model in the public sector, in the social sector, would cost incredibly more and wouldn't scale anywhere near what we see.

So, two quick examples, one in healthcare, one in education. One of the national health priorities for the government of India for the last several years has been the elimination of tuberculosis, TB. It's the largest infectious disease killer in the world, killing close to 2 million people a year. It's the largest infectious disease killer in India, killing close to half a million people a year here. And for each person who unfortunately dies, 20 other survivors live miserable lives, and it impacts their life and their ability to earn a living. So the government asked us to come in and see what we could do. And we identified with the government the three or four key pain points in the patient's journey.

The first one is diagnosis, and diagnosing TB in economically vulnerable communities isn't easy. X-ray machines, sputum analysis, etc.: these are all challenging; they're expensive, they're time-consuming, they're tedious. So that's challenge number one. Secondly, if you do sputum analysis, the samples have to go to 64 government labs around the country. There's throughput time, and by the time the patient gets the results back, you've lost time in initiating treatment for the people who have TB. And finally, there's a huge problem that a subset of TB patients stop taking their medication, because it's a regimen of medications with a very toxic effect on the body. The problem is, once you stop taking them, you develop drug-resistant TB, the mortality rate goes up dramatically to 50%, and then you infect a lot more people, and so on.

So fortunately, the government has a DPI called Ni-kshay. It's a very large data platform, a patient management system that has data on all of the detected TB cases in the country. The government gave us access to that database, and we developed a range of models to address the challenges that we saw in the patient care journey. For diagnosis, we've come up with a way of diagnosing TB from the sound of a cough into a smartphone. It's instant. It's quick. It shows you what the risk is of that patient having TB, so government workers can focus on those patients. In the year or so since this program started rolling out, detection of TB patients using the sound of a cough has gone up by 25% nationally.

You may think that's bad news because now there are more TB cases, but they were there all along; now we can make sure they get treated. On the labs and the turnaround time, we've come up with an AI way to automate a lot of this testing, so now you literally get the results the same day. The patient and the doctor find out instantaneously, and then they can start treatment. On the issue of patients who develop drug-resistant TB, we've come up with algorithms that predict which TB patients are likely to fall off their medication regimen. And so the 2,000 or so TB caseworkers in the country, which is a small number for the 4 or 5 million TB patients that we have, can now target their time and bandwidth on the subset of patients that really needs help.

But all of that was enabled by this DPI called Ni-kshay, this database, which enables all of this. One other quick example, in the education space. In the global south, including in India, there's a very high dropout rate of students in early grades, grades 1 to 5. So we got a call from a large state government in India about a year back saying, can you help? We took a look at it, and it turns out that the single key reason for this very high dropout rate in early grades is the inability of these young children to read proficiently in that environment. If you can't read properly, it will affect how you do in all your subjects: geography, history, science, everything. You start failing, you get frustrated, your parents say come back home, work on the farm, work in the kitchen, etc.

and that affects the rest of their life. So we've come up with a system to diagnose, within 20 seconds for each child, just by speaking into a phone into our model, exactly where they are struggling: what words, what phrases, what sentences, and what will help them get on the right track. And this is being done at a cost of 5 paise per student. I think cost is another very big part of scaling that doesn't get discussed much, but cost is very important. So, 5 paise per student while this was in pilot. With the suite of solutions we came up with, we've got a way, like I said, of assessing in 20 seconds where the student is struggling. We come up with a diagnostic and then a remediation plan and exercises for each student to practice at home to improve their reading.

The state was so impressed with the pilot, they made it mandatory for all 3 million kids of that age in school. Three or four other states, including the state of Rajasthan, just made it mandatory for all 8 million kids there. And by the way, all of this, again, is enabled by DPI. So Rajasthan has a state-level DPI called Rakshak. Our system, our models, sit on top of that system. It reaches 400,000 schools and 8 million students, and it's spreading. So now the government of India, by the end of next year, wants to make this standard across the country. All 75 million children of that age group will get their reading improved and strengthened through the systems that we have. Bottom line: all enabled by DPI.

James Manyika

No, I mean, those are very… [applause] Thank you. Those are incredibly powerful examples. In fact, the case of TB is super important, because something like 40% of people in the world with TB go undiagnosed, and most of them are in the global south. But it also brings me back to a question of scale. In what you're doing, you mentioned a few countries, but I think your goal is to take some of these education and health solutions to 25 countries or more. How are you thinking about taking that to multi-country scale? And what are some of the ideas you have about how you do that?

Sangbu Kim

…achieve the same academic goal within six weeks, which usually takes longer than a year-long process. So that's one example. In Nigeria, also, not only for TB: a very small handheld ultrasound device can scan a pregnant woman and easily diagnose problems for the baby, and it has drastically reduced the infant death rate. That is another example of how we can scale this up. Another good example, as you said: we are expanding the current India model to three other African countries and Brazil, and we added one more in the Philippines. One way is to find a very standardized and scalable model, but this is not easy.

But from one concrete example, like the AgriConnect case in India, we are figuring out what would be the best and lightest model we can quickly replicate to other countries. This is our role. The World Bank is trying very hard to figure out what that means, how we can really replicate this model to other countries, and what the really critical components to be replicated would be. So we are working on what the best, simplest model is. In that sense, and I'm not sure it is really the right analogy, I use the analogy of a sommelier. We are not the innovation creator. There are a bunch of really good wine producers, but I would say this is a very good model.

Customers are not always aware of which wine really fits their taste. So as a sommelier, the World Bank is trying very hard to understand the wines and then find and recommend better wines for our customers' taste.

James Manyika

Yeah, I think on this question of scale, those are great ways to… Do you need any help with quality control on these wines? But on this question of scale, Nandan, I've heard you say that we won't get to true population scale unless we actually scale things like AI inference, and do that at massive scale. I'm curious for you to expand on that a bit more, but also what lessons and implications it might have for people like us who are building frontier models. Say more about why the inference part of this really matters.

Nandan Nilekani

No, I think, broadly speaking, especially in the global south, the cost of AI inference has to drop dramatically, because if serving a customer with one query costs, you know, 500 rupees or something, it's not going to work. So we have to make inference really cheap, which I think you'll do. There's a lot of focus today on the training side, on getting bigger and bigger models and launching them. But as that stabilizes, I think the focus will shift to the inference side, to make inference cheap. But I'll give you a very tangible example of open networks, even AgriConnect.

Yesterday I was talking to Demis, and Demis was saying that Google is improving its weather models. They're making them better, more efficient, more predictive, more granular, area by area and so on. Now, if you had an open network for agri, like AgriConnect, which has, suppose, millions of farmers on it, then all we need to do is plug the latest Google weather model into that open network, and suddenly 10 million farmers have access to the latest weather data. That's a good example of why this open network thing is important: it allows you to plug in new models, new sources of capability, new ideas, and so on.

And I think that's what we're doing. To give you another example of how it reduces complexity, there's a very interesting demo here of energy trading. Now, we never thought of energy as something that you traded, because you bought it from the utility. But today, with millions of people producing energy, somebody who has rooftop solar and some extra energy can sell it to somebody else. But how does a farmer in UP learn how to sell energy? It's a whole new concept. It's only possible through an agent, with a classic commerce interface that is simple. So I think low-cost inference, combined with agents that hide complexity, is the key to massive diffusion.

James Manyika

Yeah, and in fact, I like the example you brought up on weather, because in some ways, thanks, quite frankly, to the forward thinking of the Indian government and the Ministry of Agriculture, they've set up that infrastructure. In fact, last year they used one of our models, NeuralGCM, which predicts monsoons, and we were actually able to deliver monsoon predictions to something like 38 million Indian farmers. But that only worked because the Indian government actually set up the kind of infrastructure where you could plug in these models. Kiran, I want to come back to you, because in some ways you raise some more foundational questions here about the future of biology and health overall.

And I've heard you say, for example, that AI doesn't replace biology, that biology is much, much more fundamental and foundational. Say more about that: what can AI learn from biology and vice versa, and what do you imagine needs to happen to fully take advantage of it?

Kiran Mazumdar-Shaw

Yeah, so I think, first and foremost, biology works through distributed data centers. And when it wants to build intelligence, retrieve memory, and infer from data, it does so with sips of energy, not with the gigawatts of power that our data centers use. So we can learn something from biology. More fundamental than that, biology also has generational learning. If you think about how our DNA stores generational memory, I think that's fascinating. How does the Arctic tern fly out of its nest for the first time, travel 70,000 kilometers to the Antarctic, and then back to the Arctic? It has navigational intent embedded in its DNA.

How does that work? So I think we have to learn a lot from biology and use AI to learn that biology, because without AI, you cannot have deep insights into biology. So I just feel that the future is going to be about the convergence of biological intelligence and AI, and that is going to be a very powerful, transformative process. Because biology has a lot to teach AI in terms of how to do it with less energy, how to do it rapidly, and how to multiplex multimodal data very rapidly. Now that is something which I think is very exciting. And to go back to what you've just been discussing with Nandan and others, I think what makes it very exciting right now is the volumes of data you can collect.

I think we also have to talk about data sharing. I know I work with a lot of organizations around the world in my field, and there's a huge reluctance to share data. There's a lot of wariness about IP being fragmented. And therefore, except for India, there's a lot of resistance to sharing data. Now, when you don't share data, you're going to silo it.

India has this unique opportunity, because of its open networks and public digital infrastructure, to share volumes of very important data, like Nandan just illustrated with the environmental and climate data he was talking about, and farmers taking huge advantage of that. That is what we have to really focus on, because India is uniquely positioned in terms of its open networks. And if we can actually keep generating data and then make…

James Manyika

I'm being told that we're going to have to wrap this up. But before we do, I want to see if we can do a quick lightning round, so to speak. This summit has been extraordinary. The example that India is setting for the world, quite frankly, is extraordinary. If each of you could say one thing you'd like to see happen in the next 12 months, particularly with this idea of open networks and change at population scale, what would that be? I don't know who wants to go first.

Nandan Nilekani

I think I'd like to see massive diffusion, where all these applications that are just rolling out on open networks reach millions of farmers and people around the world, and actually show the world that AI is a force for good. I think we have an obligation to show that. I think that's going to be a big deal.

James Manyika

That’s good.

Kiran Mazumdar-Shaw

Yeah, I definitely want to see a sustainable standard of care, high-quality universal healthcare, coming out of this AI effort and the health stack.

James Manyika

That’s preventative, presumably.

Kiran Mazumdar-Shaw

Absolutely. Diagnostic, preventative, predictive, and precision, because you can’t do away with treatment. But how do you basically stage it up front?

James Manyika

Sunil or Sangbu?

Sunil Wadhwani

Yeah, so yesterday, when the Prime Minister spoke at Bharat Mandapam, you know how he has been saying for years, Make in India for the world. He said, in the age of AI, let's develop in India, and let's deliver to the world. So in our case, here is just one little example at Wadhwani AI. I've given you two or three examples of what we've done, but we've developed over 25 AI platforms in India in education, healthcare, and agriculture, which are scaling up. What's interesting is that over the last year, we've had an incredible amount of incoming interest from governments throughout the global south, in Africa, Asia, and so on, who are hungry for these solutions. And they're looking to India to provide them.

In fact, when PM Modi launched our institute back in 2018, you know, he was saying the U.S. is so far ahead, China is so far ahead. I said, Mr. Prime Minister, we can set the example in India for how AI can be used for societal transformation. No one else is doing that. We are showing how it can be done.

James Manyika

Sangbu?

Sangbu Kim

So for the next 12 months, I really want to work more to disseminate the really good use cases to the world, to our countries and people. One of the reasons is that a big challenge for people in the developing world is that they do not clearly know what they can do with AI, even though it can provide a really affordable and easy way to expand their capability, productivity, and intelligence compared to the old days. So once they get to know that this is a real and important opportunity for them, I believe they will find really good ways to fully utilize it in a very affordable way.

James Manyika

No, no, thank you. What I'm taking away is that it's not just the example that India is setting for India and the world, but also, quite frankly, the example that each of you is setting, because all of you, through your work, your organizations, your teams, your initiatives, and, quite frankly, your insight, have done a lot to show what leaders can do. So I appreciate the examples that you're setting and the example that India is setting. Please join me in thanking my panelists here. Thank you, and I think with that we'll draw to a close. Thank you.

Moderator

Just a request: please stay seated 30 seconds more. First of all, could we please have another round of applause for our esteemed panelists? Very insightful. Thank you very much for coming here. The true benefits of AI, the discussion shows, can only be realized when we build for everyone using open networks. Very insightful conversation. Thank you, James, for moderating it. And to further help drive population-scale impact, we invite changemakers and researchers to apply for the two Google.org Impact Challenges: one in AI for Science, one for Government Innovation. There's a QR code for you to learn more. And I encourage you all to visit us at Booths 3 and 4 in Hall 5 to see firsthand how Google AI is delivering real-world impact.

And finally, I request all the panelists to please join center stage for a photograph. Thank you, everyone.

Related Resources
Knowledge base sources related to the discussion topics (17)

Factual Notes
Claims verified against the Diplo knowledge base (7)
Confirmed (high)

“Real‑world AI impact depends on ‘population‑scale’ transformation of education, health‑care and agriculture, and can only be achieved through a built‑in coordination layer.”

The knowledge base states that AI’s true potential lies in delivering population-scale impact in education, healthcare and agriculture, and that this requires coordination built into the system [S7] and [S9].

Confirmed (medium)

“James Manyika – Google’s senior vice‑president for research, labs and technology in society and former co‑chair of the UN High‑Level Advisory Board on AI – was introduced as the first speaker.”

James Manyika is listed in the knowledge base as Senior Vice-President at Google-Alphabet and Co-Chair of the UN Secretary-General’s High-level Advisory Body on AI [S8].

Confirmed (medium)

“The digital divide must not become an AI divide.”

A knowledge-base entry warns that without proper digital public goods we risk creating an AI divide that could be even more dangerous than the existing digital divide [S97].

Confirmed (high)

“Digital public infrastructure (DPI) and open networks provide the coordination layer that translates human intent into real‑world action.”

The source describes DPI and open networks as the coordination layer enabling AI to turn human intent into real-world outcomes [S9].

Confirmed (high)

“India’s UPI payments system and the Bhashini language‑network exemplify this coordination layer.”

The knowledge base cites India’s UPI and Bhashini (spelled “Bashini” in the source) as leading examples of digital public infrastructure that provide such coordination [S9].

Correction (medium)

“A recent $10 million Google.org grant to the Networks for Humanity Foundation is building universal tools such as asset‑tokenisation and open‑network standards across innovation labs from Singapore to Switzerland.”

Google.org announced a $10 million grant for nonprofits to help them integrate AI, but the source does not specify the Networks for Humanity Foundation nor the geographic scope described in the claim [S102].

Additional Context (low)

“UPI is an open‑architecture prototype that grew into the world’s largest payments system.”

The knowledge base highlights UPI as a leading digital public infrastructure and an example of an open architecture, but it does not explicitly state that it is the world’s largest payments system [S9] and [S71].

External Sources (102)
S2
S3
Keynote-Rishad Premji — -Mr. Nandan Nilekani: Role/Title: Not specified; Area of expertise: Artificial intelligence (described as pioneer and th…
S4
High Level Session 2: Digital Public Goods and Global Digital Cooperation — – **Nandan Nilekani** – Co-founder and chairman of Infosys Technologies Limited (participated online) Karianne Tung, Ve…
S5
https://dig.watch/event/india-ai-impact-summit-2026/fireside-conversation-01 — Thank you so much, Mr. Sikka, for your profound and very interesting remarks. And of course, your work at VNI also exemp…
S6
Keynote-Vishal Sikka — -Honorable Ashwini Vasanthaji: Role/Title: Minister, Ministry of IT; Area of expertise: Information Technology -Sunil: …
S7
AI for Social Good Using Technology to Create Real-World Impact — – James Manyika – Sunil Wadhwani – Sangbu Kim
S8
A Digital Future for All (afternoon sessions) — – James Manyika – Senior VP, Google-Alphabet and Co-Chair of the Secretary-General’s High-level Advisory Body on Artific…
S9
https://dig.watch/event/india-ai-impact-summit-2026/ai-for-social-good-using-technology-to-create-real-world-impact — Because we believe that AI’s true potential lies in its ability to deliver population-scale impact, transforming educat…
S10
Smaller Footprint Bigger Impact Building Sustainable AI for the Future — -James Manyika: Senior Vice President, Google Alphabet
S11
Keynote-Olivier Blum — -Moderator: Role/Title: Conference Moderator; Area of Expertise: Not mentioned -Mr. Schneider: Role/Title: Not mentione…
S12
Keynote-Vinod Khosla — -Moderator: Role/Title: Moderator of the event; Area of Expertise: Not mentioned -Mr. Jeet Adani: Role/Title: Not menti…
S13
Day 0 Event #250 Building Trust and Combatting Fraud in the Internet Ecosystem — – **Frode Sørensen** – Role/Title: Online moderator, colleague of Johannes Vallesverd, Area of Expertise: Online session…
S14
AI for Social Good Using Technology to Create Real-World Impact — -Kiran Mazumdar-Shaw: Chairperson of Biocon Group; pioneering biotech entrepreneur, healthcare visionary, and philanthro…
S15
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Kiran Mazumdar-Shaw — -Speaker 1: Role/Title: Not mentioned, Area of expertise: Not mentioned (appears to be an event moderator or host introd…
S16
https://dig.watch/event/india-ai-impact-summit-2026/ai-for-social-good-using-technology-to-create-real-world-impact — Our third guest… is Kiran Mamzouma -Shaw. As chairperson of Biocon Group, Kiran is a pioneering biotech… Kiran is a …
S17
Legal Notice: — – Falliere, Nicolas, Liam O. Murchu, and Eric Chien. 2010. W32.Stuxnet Dossier: version 1.3 . online: http://www.symante…
S18
Transforming Agriculture_ AI for Resilient and Inclusive Food Systems — 1 ,000 hectares in some big island of Indonesia in order to get the safe efficiency in the next five years. And then we …
S19
AI for Good – food and agriculture — Dongyu Qu: Excellencies, ladies, gentlemen, good morning. A year ago, we all gathered for the Previous AI for Good Summi…
S20
WS #254 The Human Rights Impact of Underrepresented Languages in AI — Gustavo Fonseca Ribeiro: I think Niti’s answer was very good. So very quickly, government support, yes, you can see ex…
S21
Revitalizing Universal Service Funds to Promote Inclusion | IGF 2023 — Coordinated builds, which involve constructing various areas together, are more cost-effective than doing so individuall…
S22
To share or not to share: the dilemma of open source vs. proprietary Large Language Models — Melike Yetken Krilla from Google shared examples of Google’s transformative open source contributions, such as the trans…
S23
Pushing the Boundaries of Open Science at CERN: Submission to the UNESCO Open Science Consultation — Open Data does not enforce all data to be openly available without restrictions. It is rather the philosophy that data s…
S24
AI Governance Dialogue: Steering the future of AI — Development | Sociocultural Last year, the Nobel Prize for Chemistry was awarded to the developers of AlphaFold, an AI …
S25
AI for agriculture Scaling Intelegence for food and climate resiliance — A very good morning to all of you. Shri Devesh Chaturvedi ji, Rajesh Agarwal ji, Vikas Rastogi ji. Mr. Jonas Jett, Srima…
S26
Japanese farmers turn to AI to combat pests — Japanese farmers are embracing AI technology to address the challenges posed by climate change and labour shortages in agr…
S27
AI in education: Leveraging technology for human potential — Kevin Mills: Hello. It’s an incredible honor to be here with you today. The last UN gathering I attended was almost exac…
S28
WS #462 Bridging the Compute Divide a Global Alliance for AI — Alisson O’Beirne provided perhaps the most crucial insight for implementation, emphasizing that successful collaboration…
S29
What policy levers can bridge the AI divide? — ## Infrastructure as Foundation Ebtesam Almazrouei: Good afternoon, everyone. It’s our pleasure to have you here today …
S30
Creating digital public infrastructure that empowers people | IGF 2023 Open Forum #168 — Countries around the world have made investments into digital public infrastructure (DPI) that supports vital society-wi…
S31
A digital public infrastructure strategy for sustainable development – Exploring effective possibilities for regional cooperation (University of Western Australia) — According to a policy brief by the UN Secretary, DPI has the potential to contribute to the SDGs by ensuring safe data u…
S32
The future of Digital Public Infrastructure for environmental sustainability — The Digital Public Infrastructure (DPI) is increasingly acknowledged as the cornerstone of a flourishing digital economy…
S33
How AI agents are quietly rebuilding the foundations of the global economy  — AI agents have rapidly moved from niche research concepts to one of the most discussed technology topics of 2025. Search…
S34
WSIS Action Lines for Advancing the Achievement of SDGs | IGF 2023 Open Forum #5 — The African Union is actively developing multiple strategies for digital transformation, with a strong emphasis on the i…
S35
Building Population-Scale Digital Public Infrastructure for AI — Balancing speed of diffusion with safety, especially in health applications
S36
Scaling Trusted AI_ How France and India Are Building Industrial & Innovation Bridges — It’s production. to say that, okay, there is a return on investment in the enterprise context and there is a reasonable …
S37
Driving Indias AI Future Growth Innovation and Impact — Professor Bhaskar Chakravarti emphasized the critical importance of trust infrastructure beyond technical capabilities, …
S38
AI as critical infrastructure for continuity in public services — “Distributed software development.”[65]. “At Bilenium, recently we have developed as well one dedicated solution, which …
S39
Open Internet Inclusive AI Unlocking Innovation for All — The consumer AI opportunity extends beyond cost reduction to fundamental accessibility improvements. Achieving scale in …
S40
WS #288 An AI Policy Research Roadmap for Evidence-Based AI Policy — Anne Flanagan: Hello, apologies that I’m not there in person today. I’m in transit at the moment, hence my picture on yo…
S41
Building Indias Digital and Industrial Future with AI — “India, surely for the vast amount of experience and scale and heterogeneity that it has, offers excellent evidence on w…
S42
Panel 4 – Resilient Subsea Infrastructure for Underserved Regions  — Financing and Investment Models for Submarine Cables The World Bank ensures that every submarine cable investment inclu…
S43
Building Scalable AI Through Global South Partnerships — Investment mechanisms and funding structures for large-scale AI deployment in resource-constrained environments remain i…
S44
DPI+H – health for all through digital public infrastructure — A global recognition of DPI’s foundational value in healthcare is apparent, though this acknowledgment is coupled with a…
S45
WSIS Action Line C7: E-health – Fostering foundations for digital health transformation in the age of AI — Hani Eskandar: Yes. Okay, so I will really focus on one of the things that is very much in line with the global digital …
S46
AI for Social Good Using Technology to Create Real-World Impact — Kiran Mazumdar-Shaw, chairperson of Biocon Group, presented perhaps the most visionary perspective on AI’s potential in …
S47
Technology in the World / Davos 2025 — Ruth Porat highlights how AI is currently enhancing healthcare by enabling early disease detection and making high-quali…
S48
AI for agriculture Scaling Intelegence for food and climate resiliance — Shankar Maruwada from EkStep Foundation provided the technical framework for scaling AI solutions through digital public…
S49
Open Forum #64 Local AI Policy Pathways for Sustainable Digital Economies — Sarah Nicole: Please share your thoughts with us on this issue. Yeah, thank you very much for the invitation to give thi…
S50
Digital Public Infrastructure, Policy Harmonisation, and Digital Cooperation – AI, Data Governance,and Innovation for Development — 1. Establishing effective multi-stakeholder coordination platforms 3. Contextualising Policies and Technologies: 4. En…
S51
Financing Broadband Networks of the Future to bridge digital — Alejandro Solano Diaz:Thank you. Yes, it’s important for the economy and the social development to have proper networks….
S52
Promoting policies that make digital trade work for all (OECD) — This underscores the importance of continued investment in developing networking platforms to foster collaborations and …
S53
Trade in environmentally sound technologies: Opportunities and challenges for developing countries (DCO) — In terms of governance and transitioning to a clean energy economy, the analysis argues that experimenting with new meth…
S54
The Innovation Beneath AI: The US-India Partnership powering the AI Era — -Energy Grid Transformation and Clean Power: Detailed exploration of how AI’s massive energy demands require “programmab…
S55
Lightning Talk #209 Safeguarding Diverse Independent NeWS Media in Policy — ## Background and Research Context None identified beyond those in the speakers names list.
S56
Laying the foundations for AI governance — Lan Xue: Okay. I think my job is easier. I can say I agree with all of them. So I think that’s probably the easiest way….
S57
morning session — In addition to the discussions surrounding confidence-building measures and the BWC, this expanded summary also emphasiz…
S58
Table of contents — + Even though Estonia is esteemed as a digital country in the world, our attention and resources are largely directed to…
S59
Global AI Policy Framework: International Cooperation and Historical Perspectives — So the infrastructure is missing, right? Now, if you’re talking about policies related to compute, you’re talking about …
S60
Fireside Conversation: 02 — The discussion addresses India’s positioning in AI development, with the moderator referencing Prime Minister Modi’s sta…
S61
Powering AI _ Global Leaders Session _ AI Impact Summit India Part 2 — Raghav identifies the lack of synergy between central and state governments as the primary obstacle to scaling data‑cent…
S62
The Foundation of AI Democratizing Compute Data Infrastructure — Good. So, Chennai, and coming back this way to all my panelists, what is the single biggest barrier? And I can imagine t…
S63
Agentic AI in Focus Opportunities Risks and Governance — That’s a hard one, but I was thinking in keeping with the theme of this summit, which is very much about inclusive AI, I…
S64
Driving Social Good with AI_ Evaluation and Open Source at Scale — Moderate disagreement with significant implications. The disagreements reflect deeper tensions between technical efficie…
S65
Discussion Report: AI-Native Business Transformation at Davos — – Yutong Zhang- Richard Socher Mhatre highlights the exponential improvement in AI economics, with inference costs drop…
S66
AI Infrastructure and Future Development: A Panel Discussion — Sarah Friar detailed OpenAI’s approach to financing, which includes traditional equity rounds, warrant structures with c…
S67
Lightning Talk #173 Artificial Intelligence in Agrotech and Foodtech — Alina Ustinova: Hello, everyone. My name is Alina. I represent the Center for Global IT Cooperation, and today I want to…
S68
How AI Drives Innovation and Economic Growth — “Farmers respond to these AI weather forecasts.”[30]. “So there’s a strong rationale for national governments, in some c…
S69
Turbocharging Digital Transformation in Emerging Markets: Unleashing the Power of AI in Agritech (ITC) — Data, artificial intelligence (AI), and new technologies have the potential to greatly benefit agriculture by assisting …
S70
Creating digital public infrastructure that empowers people | IGF 2023 Open Forum #168 — Countries around the world have made investments into digital public infrastructure (DPI) that supports vital society-wi…
S71
Collaborative AI Network – Strengthening Skills Research and Innovation — “We’re talking of AI being a possible DPI, a digital public infrastructure.”[1]. “I think those are aspects which a DPI …
S72
Digital Public Infrastructure, Policy Harmonization, and Digital Cooperation — Marie Ndé Sene Ahouantchede explains that ECOWAS views public digital infrastructure as built on three pillars: payment …
S73
DPI+H – health for all through digital public infrastructure — Garrett Mehl:Great, I just wanna thank PATH for helping to organize this session and for also inviting WHO to this impor…
S74
Press Conference: Closing the AI Access Gap — Moreover, the speakers argue that AI can drive productivity, creativity, and overall economic growth. It has the capacit…
S75
AI for Bharat’s Health_ Addressing a Billion Clinical Realities — “It can deal with multilinguality and voice.”[51]. “There’s firstly a lot of opportunity to bridge some of these inequit…
S76
https://dig.watch/event/india-ai-impact-summit-2026/ai-for-social-good-using-technology-to-create-real-world-impact — And I think that’s what we’re doing. And to give you another example of how it reduces the complexity, there’s a very in…
S77
AI for Social Good Using Technology to Create Real-World Impact — “But I think open networks allows many actors, many innovators to build applications on the edge using AI.”[5]. “And I t…
S78
Building Population-Scale Digital Public Infrastructure for AI — Balancing speed of diffusion with safety, especially in health applications
S79
Open Internet Inclusive AI Unlocking Innovation for All — The consumer AI opportunity extends beyond cost reduction to fundamental accessibility improvements. Achieving scale in …
S80
Sovereign AI for India – Building Indigenous Capabilities for National and Global Impact — Great point. I think compute data, data stack for the country, I think very important. Let me come to Venu. Again, the s…
S81
Scaling Trusted AI_ How France and India Are Building Industrial & Innovation Bridges — And it’s very useful. It’s used to benchmark applications and performance on quantum computers and using AI techniques a…
S82
AI as critical infrastructure for continuity in public services — “Distributed software development.”[65]. “At Bilenium, recently we have developed as well one dedicated solution, which …
S83
From Innovation to Impact_ Bringing AI to the Public — Thank you, Vijay. Fantastic and energetic talk. Thank you. So, a little while ago, you told me that LLM, Foundation Mode…
S84
Building Indias Digital and Industrial Future with AI — “India, surely for the vast amount of experience and scale and heterogeneity that it has, offers excellent evidence on w…
S85
Building Scalable AI Through Global South Partnerships — Investment mechanisms and funding structures for large-scale AI deployment in resource-constrained environments remain i…
S86
Procuring modern security standards by governments & industry | IGF 2023 Open Forum #57 — Audience:My name is Satish, and I’m from India. And I’m going to share two slides on what, or three slides on what we’re…
S87
Keynote by Vivek Mahajan CTO Fujitsu India AI Impact Summit — -Moderator: Session moderator who introduced speakers and managed the event flow.
S88
Powering AI Global Leaders Session AI Impact Summit India — -Speaker: Role/title not specified, appears to be a moderator or host introducing the session and thanking partners A n…
S89
Keynote_ 2030 – The Rise of an AI Storytelling Civilization _ India AI Impact Summit — -Speaker 2: Role appears to be event moderator or host. Area of expertise and specific title not mentioned. But if I go…
S90
AI UN Secretary-General kicks off UNGA 78 with high prominence of AI and digital issues — TheGeneral Debate of the 78 UN General Assemblystarted with the UN Secretary-General Antonio Guterres’s presentation of …
S91
Dedicated stakeholder session (in accordance with agreed modalities for the participation of stakeholders of 22 April 2022)/OEWG 2025 — Cuba: Thank you, Chairman. As the representatives of a developing country, we attach a high level of importance to cap…
S93
Leaders’ Plenary | Global Vision for AI Impact and Governance- Afternoon Session — “We also, along with my colleague Vinod, are large investors in Sarvam, which is providing sovereign AI capabilities to …
S94
Folding Science / DAVOS 2025 — Alison Snyder: Thank you all for being here this morning. Thank you to those of you watching online. In industry the b…
S95
Breakthroughs in human-centric bioscience with AI — During the 2020-2021 COVID-19 pandemic, AI models dramatically sped up vaccine development, screening immune system targ…
S96
Keynote-Demis Hassabis — Ladies and gentlemen, let’s have a big round of applause for Mr. Ambani. And now I would like to invite Sir Damis Hassab…
S97
Dynamic Coalition Collaborative Session — Development | Economic | Infrastructure Rajendra warns that without proper classification of certain technologies as di…
S98
A view on digital divide and economic development — Hence, even though ICTs provide opportunities for economic growth and social development, they have the potential to excl…
S99
Networking Session #74 Digital Innovations Forum- Solutions for the Offline People — Rajnesh Singh expresses concern about widening digital divides across various layers, including infrastructure, devices,…
S100
Leaders TalkX: ICT Applications Unlocking the Full Potential of Digital – Part II — Anil Kumar Lahoti:Thank you, Dana. First of all, I thank ITU for inviting me to this plus 20, and I consider this as my …
S101
AI push in India: Google tackles language and farming challenges — Google is intensifying its AI initiatives in India, with a focus on addressing language barriers and improving agricultura…
S102
Nonprofits receive $10 million boost from Google for AI training — Google.org has announced a $10 million grant initiative aimed at helping nonprofits integrate AI into their operations. Co…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
James Manyika
10 arguments · 153 words per minute · 2285 words · 891 seconds
Argument 1
Coordination Layer via Open Networks
EXPLANATION
James argues that digital public infrastructure and open networks act as a coordination layer that lets AI translate human intent into concrete actions. This layer is essential for delivering population‑scale impact across sectors.
EVIDENCE
He explains that digital public infrastructure and open networks provide the coordination layer that allows AI to translate human intent into real-world action, citing the rapid progress of AI and the need to avoid a digital and AI divide [19-21].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The summit opening stresses that AI’s population-scale impact depends on built-in coordination mechanisms [S7], and Manyika’s own remarks describe open networks as the coordination rail needed for early-stage AI deployment [S8]; policy analysis also frames infrastructure as a coordination foundation [S21].
MAJOR DISCUSSION POINT
Coordination Layer via Open Networks
AGREED WITH
Moderator, Nandan Nilekani, Sunil Wadhwani, Sangbu Kim
DISAGREED WITH
Sangbu Kim
Argument 2
AI‑powered Agents Transform Agriculture Services
EXPLANATION
James describes a Gemini‑powered open network deployed in Uttar Pradesh that gives farmers multilingual AI agents for credit, crop prediction and other services. These agents illustrate how AI can directly empower smallholder farmers at scale.
EVIDENCE
He details the pilot of a Gemini-powered open network for agriculture that provides farmers with multilingual AI agents to facilitate everything from credit to crop prediction, noting measurable impact and the model’s potential for global replication [30-33].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Case studies of AI-driven pest forecasting for Japanese farmers illustrate how agents can empower growers [S26]; similar pilots in Indonesia show AI-based crop-selection tools at scale [S18]; broader discussions of AI for agriculture and food security reinforce the transformative potential [S25, S19].
MAJOR DISCUSSION POINT
AI‑powered Agents Transform Agriculture Services
AGREED WITH
Nandan Nilekani, Sangbu Kim
Argument 3
Emphasis on Local Language Access in AI Deployments
EXPLANATION
James stresses that AI solutions must be delivered in local languages to be inclusive and effective. Language accessibility is a key factor for scaling AI impact.
EVIDENCE
He remarks on the importance of doing AI work in local languages, underscoring the need for language-specific deployments [83].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The discussion repeatedly highlights the importance of delivering AI in users’ native languages to achieve inclusion [S7], and concrete examples of language-specific dataset initiatives in Rwanda and Nigeria provide supporting evidence [S20].
MAJOR DISCUSSION POINT
Emphasis on Local Language Access in AI Deployments
AGREED WITH
Nandan Nilekani, Kiran Mazumdar‑Shaw
Argument 4
Plug‑in New Models into Networks Highlights Inference Cost Importance
EXPLANATION
James points out that open networks allow new, improved AI models—such as weather forecasts—to be quickly integrated and delivered to millions of users. This demonstrates why low‑cost inference is critical for scaling.
EVIDENCE
He gives the example of plugging Google’s improved weather model into the AgriConnect open network, instantly giving 10 million farmers access to more accurate forecasts, illustrating the power of modular, low-cost inference [252-254].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The need for cheap inference is underscored by remarks on inference optimization within open networks [S7] and by a broader call to bridge the compute divide for scalable AI [S28]; sustainability considerations for AI inference are also discussed in a dedicated session on building sustainable AI systems [S10].
MAJOR DISCUSSION POINT
Plug‑in New Models into Networks Highlights Inference Cost Importance
AGREED WITH
Nandan Nilekani, Sunil Wadhwani
Argument 5
Freely Available AlphaFold Data Illustrates Power of Open Scientific Data
EXPLANATION
James highlights AlphaFold’s open protein‑structure database as a case where freely shared scientific data has accelerated global research. The widespread adoption shows the impact of open data on innovation.
EVIDENCE
He notes that AlphaFold solved a 50-year protein-structure challenge and that its freely available database has been used by more than 3 million researchers in over 190 countries, with India being the fourth largest adopter [14-16].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
AlphaFold’s open-source release and its role in accelerating protein-structure research are highlighted as a model of open scientific data [S22]; the Nobel-winning impact and worldwide adoption of AlphaFold are documented in a review of its scientific breakthroughs [S24]; open-science principles further contextualize the importance of freely shared data [S23].
MAJOR DISCUSSION POINT
Freely Available AlphaFold Data Illustrates Power of Open Scientific Data
Argument 6
Multilingual AI agents for frontline health workers to combat child malnutrition
EXPLANATION
James describes AI‑driven multilingual assistance provided to 1.4 million frontline health workers, enabling early warnings and interventions to address child malnutrition at scale.
EVIDENCE
He states that in healthcare, AI empowers 1.4 million frontline workers with multilingual AI assistance, providing early warnings to combat child malnutrition across the country [43-44].
MAJOR DISCUSSION POINT
AI‑enabled multilingual support for health workers
Argument 7
AI integration into national pest surveillance system to protect crops
EXPLANATION
James explains that AI is being embedded into India’s national pest surveillance system, allowing real‑time monitoring and protection of the country’s most important crops at a national scale.
EVIDENCE
He mentions that integrating AI into the national pest surveillance system protects India’s most important crops at a national scale [44-45].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
AI-based pest-outbreak forecasting tools deployed for Japanese farmers demonstrate how real-time surveillance can protect crops at national scale [S26]; similar AI applications for agricultural resilience are discussed in regional AI-for-good sessions [S18].
MAJOR DISCUSSION POINT
AI‑powered pest surveillance for agricultural resilience
Argument 8
AI‑led transformation of government‑owned education platforms reaching tens of millions
EXPLANATION
James outlines an initiative that uses AI to transform government education platforms, already reaching 10 million learners with a target of 75 million by 2027.
EVIDENCE
He notes that the AI-driven education initiative has already reached 10 million students and educators, aiming to empower up to 75 million students and nearly 2 million educators by the end of 2027 [47-48].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
AI-enabled education platforms reaching millions of learners are described in a session on AI in education, emphasizing scalability and impact [S27]; additional remarks on large-scale AI education initiatives provide further context [S25].
MAJOR DISCUSSION POINT
Scaling AI‑enhanced public education at population level
Argument 9
Bold yet responsible AI requires coordinated digital infrastructure to close the AI divide
EXPLANATION
James calls for pursuing ambitious AI possibilities while simultaneously building coordination layers that bridge and close the AI divide, emphasizing responsibility alongside innovation.
EVIDENCE
He states that we must pursue AI’s most ambitious possibilities while ensuring we build the coordination layer necessary to bridge and close the AI divide [48-49].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The summit’s framing of coordination as essential for responsible AI deployment is reiterated in opening remarks [S7] and Manyika’s own commentary on building coordination layers while pursuing ambitious AI goals [S8]; policy analysis of infrastructure as a bridge for the AI divide supports this view [S21].
MAJOR DISCUSSION POINT
Balancing ambitious AI development with responsible coordination
Argument 10
Google.org grants fund open‑network tools and change‑maker initiatives
EXPLANATION
James highlights philanthropic funding from Google.org that supports the development of universal open‑network tools and backs change‑makers building AI solutions for societal impact.
EVIDENCE
He references a $10 million Google.org grant to the Networks for Humanity Foundation for building universal tools, and mentions supporting change makers like Wadwani AI through Google.org grants [39-42].
MAJOR DISCUSSION POINT
Philanthropic funding accelerates open‑network AI infrastructure and innovators
Nandan Nilekani
5 arguments · 181 words per minute · 881 words · 290 seconds
Argument 1
Open Networks Enable Multitude of Innovators & Agents
EXPLANATION
Nandan explains that open networks let many innovators build applications on top of AI, and that agents simplify complex transactions for end‑users. This openness drives massive diffusion of technology.
EVIDENCE
He states that open networks allow many actors and innovators to build applications using AI, and that agents remove complexity for users such as farmers or small electricity producers, enabling inclusion at massive scale [79-81].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The discussion notes that open networks allow many innovators to build AI-powered applications, a point echoed in the summit’s coordination narrative [S7] and reinforced by Manyika’s remarks on open-network ecosystems [S8].
MAJOR DISCUSSION POINT
Open Networks Enable Multitude of Innovators & Agents
AGREED WITH
James Manyika, Sangbu Kim
Argument 2
Multilingual Agents Remove Language Barrier for Users
EXPLANATION
Nandan argues that AI agents speaking users’ native languages eliminate language as a barrier, making services accessible to everyone regardless of linguistic diversity. This is crucial for inclusive adoption.
EVIDENCE
He describes agents that interact with users in their own language, removing complexity and achieving massive inclusion, and later emphasizes that combining language-native agents with hidden transaction complexity is the “holy grail” for universal adoption [80-82][94-95].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Evidence of language-specific dataset initiatives in Rwanda and Nigeria illustrates how native-language agents can eliminate linguistic barriers [S20]; the broader emphasis on local-language AI for inclusion is also highlighted in the summit’s opening remarks [S7].
MAJOR DISCUSSION POINT
Multilingual Agents Remove Language Barrier for Users
AGREED WITH
James Manyika, Kiran Mazumdar‑Shaw
Argument 3
Need for Dramatically Lower Inference Costs for Scale
EXPLANATION
Nandan stresses that the cost of AI inference must fall dramatically for AI to serve billions affordably. High per‑query costs would prevent population‑scale deployment.
EVIDENCE
He notes that if serving a single query costs hundreds of rupees, the model won’t work, and calls for a focus on making inference cheap as model training stabilises [246-248].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
A dedicated session on bridging the compute divide stresses that affordable inference is critical for scaling AI to billions of users [S28]; sustainability and cost-efficiency of AI inference are further discussed in a workshop on building sustainable AI systems [S10]; infrastructure cost considerations are outlined in policy briefs on AI divide mitigation [S21].
MAJOR DISCUSSION POINT
Need for Dramatically Lower Inference Costs for Scale
AGREED WITH
James Manyika, Sunil Wadhwani
DISAGREED WITH
Sunil Wadhwani
Argument 4
Plugging Updated Models into Open Networks to Reach Millions Internationally
EXPLANATION
Nandan illustrates that once an open network exists, new AI models—like improved weather forecasts—can be plugged in and instantly reach tens of millions of users, highlighting the scalability of open architectures.
EVIDENCE
He repeats the example of integrating Google’s latest weather model into the AgriConnect network, instantly providing 10 million farmers with updated forecasts, showing how open networks enable rapid, large-scale diffusion [252-254].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The ability to rapidly integrate improved models (e.g., weather forecasts) into existing open networks is highlighted as a key scalability advantage in the coordination discussion [S7] and reinforced by compute-divide considerations for rapid model deployment [S28].
MAJOR DISCUSSION POINT
Plugging Updated Models into Open Networks to Reach Millions Internationally
Argument 5
Open networks can enable peer‑to‑peer energy trading services
EXPLANATION
Nandan illustrates that an open network architecture allows individuals with rooftop solar to sell excess electricity to others via AI‑driven agents, creating new decentralized energy markets.
EVIDENCE
He describes a scenario where a farmer with rooftop solar can sell surplus energy to another person through an agent interface, emphasizing the simplicity of the commerce interaction [258-263].
MAJOR DISCUSSION POINT
Open networks facilitate decentralized energy trading and new market services
AGREED WITH
James Manyika
Sunil Wadhwani
6 arguments · 166 words per minute · 1463 words · 528 seconds
Argument 1
DPI Provides Data Pipelines & Distribution Channels
EXPLANATION
Sunil explains that Digital Public Infrastructure supplies the data streams and distribution mechanisms needed for AI models to be built and deployed at scale in the public sector.
EVIDENCE
He states that DPI provides data and data pipelines essential for AI, and also offers distribution channels that allow inference models to be delivered at scale, without which usage would be costly and limited [170-174].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Policy analysis identifies digital public infrastructure as the foundational layer for data pipelines and distribution mechanisms needed for AI at scale [S21]; broader governance discussions on bridging the AI divide also cite DPI as essential [S29].
MAJOR DISCUSSION POINT
DPI Provides Data Pipelines & Distribution Channels
AGREED WITH
Moderator, James Manyika, Nandan Nilekani, Sangbu Kim
DISAGREED WITH
Nandan Nilekani
Argument 2
AI‑based TB Diagnosis & Reading Assessment Demonstrate Health & Education Impact
EXPLANATION
Sunil presents two concrete AI applications: a cough‑sound TB diagnostic that increased case detection by 25% and a 20‑second reading‑assessment tool costing 5 paise per student, both showing AI’s potential in health and education.
EVIDENCE
He describes developing a smartphone-based TB diagnosis from cough sounds that raised detection by 25% nationally, and an AI system that assesses a child’s reading ability in 20 seconds at a cost of 5 paise per student, both enabled by DPI platforms [196-204][208-214].
MAJOR DISCUSSION POINT
AI‑based TB Diagnosis & Reading Assessment Demonstrate Health & Education Impact
Argument 3
Ultra‑Low Cost Solutions (5 paise per student) Show Feasibility
EXPLANATION
Sunil highlights that delivering AI‑driven reading diagnostics at a cost of five paise per child proves that large‑scale AI interventions can be financially viable for low‑income populations.
EVIDENCE
He notes that the reading-assessment system costs only 5 paise per student, and after a successful pilot it was mandated for millions of children across several Indian states, demonstrating cost-effective scaling [211-212].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Sustainable AI workshops discuss ultra-low-cost inference solutions as a pathway to affordable large-scale impact [S10]; the compute-divide briefing further emphasizes cost-effective AI deployment models [S28].
MAJOR DISCUSSION POINT
Ultra‑Low Cost Solutions (5 paise per student) Show Feasibility
AGREED WITH
Nandan Nilekani, James Manyika
Argument 4
DPI Platforms (NIXA) Supply Critical Health Data for AI
EXPLANATION
Sunil describes the NIXA patient‑management system as a large health data platform that gave his team access to nationwide TB data, enabling AI models for diagnosis, rapid testing, and adherence prediction.
EVIDENCE
He explains that the government’s DPI called NIXA provided a comprehensive TB patient database, which his team used to develop AI models for cough-based diagnosis, same-day lab results, and adherence prediction, dramatically improving care pathways [190-196].
MAJOR DISCUSSION POINT
DPI Platforms (NIXA) Supply Critical Health Data for AI
AGREED WITH
Kiran Mazumdar‑Shaw, James Manyika
Argument 5
Global South Interest in Indian AI Platforms for Scale
EXPLANATION
Sunil notes that governments across the Global South are reaching out to India for AI solutions, indicating strong international demand for the platforms his institute has built.
EVIDENCE
He mentions a surge of interest from African and Asian governments seeking Indian AI platforms, with multiple countries expressing desire to adopt the solutions his institute developed [321-323].
MAJOR DISCUSSION POINT
Global South Interest in Indian AI Platforms for Scale
Argument 6
Multi‑sector AI platform portfolio built on DPI scales solutions across health, education, and agriculture
EXPLANATION
Sunil notes that his institute has created more than 25 AI platforms covering education, healthcare, and agriculture, all leveraging Digital Public Infrastructure to achieve large‑scale deployment.
EVIDENCE
He states, “we’ve developed over 25 AI platforms in India in education, healthcare, agriculture, which are scaling up” and links this scaling to the underlying DPI framework [320-321].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The summit’s cross-sectoral AI narrative highlights how open networks enable health, education, and agriculture platforms to scale together [S7]; specific sessions on AI in education and agriculture provide concrete examples of multi-sector deployment [S27, S25].
MAJOR DISCUSSION POINT
Broad DPI‑enabled AI platform suite drives cross‑sectoral impact at scale
Sangbu Kim
3 arguments · 126 words per minute · 598 words · 282 seconds
Argument 1
Open Standards Essential for User‑Centric Services
EXPLANATION
Sangbu argues that open standards and open networks are crucial to delivering user‑centric, affordable services, especially as AI becomes the dominant technology layer.
EVIDENCE
He states that open standards and open networks are essential for ensuring user-centric services that are efficient and affordable in the AI era [105-106].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Open-science principles stress that open standards are crucial for affordable, user-centric services in the AI era [S23]; discussions of open-source contributions such as AlphaFold illustrate the broader importance of open standards [S22].
MAJOR DISCUSSION POINT
Open Standards Essential for User‑Centric Services
AGREED WITH
Moderator, James Manyika, Nandan Nilekani, Sunil Wadhwani
DISAGREED WITH
James Manyika
Argument 2
AgriConnect Improves Farmer Efficiency & Can Extend to Other Sectors
EXPLANATION
Sangbu describes AgriConnect as a farmer‑focused, open‑stack platform that delivers coherent services and can be expanded beyond agriculture to health and education, illustrating its universal applicability.
EVIDENCE
He explains that AgriConnect provides coherent, consistent services through an open stack, improving farmer efficiency, and that the model is being prepared for expansion into health and education sectors [102-109].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
AI-driven pest-forecasting tools for Japanese farmers demonstrate how open-stack platforms improve farmer efficiency [S26]; regional pilots in Indonesia showcase similar efficiency gains in agriculture [S18]; scaling of the AgriConnect model to other countries is discussed in AI-for-good sessions [S25].
MAJOR DISCUSSION POINT
AgriConnect Improves Farmer Efficiency & Can Extend to Other Sectors
AGREED WITH
James Manyika, Nandan Nilekani
Argument 3
Replicating AgriConnect Model to Africa, Brazil, Philippines
EXPLANATION
Sangbu outlines the World Bank’s effort to replicate the Indian AgriConnect blueprint in several African countries, Brazil and the Philippines, emphasizing the need for a standardized, scalable model.
EVIDENCE
He notes that the World Bank is expanding the AgriConnect model to three African countries, Brazil and the Philippines, and is working on identifying the simplest, most replicable model for other nations [229-233].
MAJOR DISCUSSION POINT
Replicating AgriConnect Model to Africa, Brazil, Philippines
Kiran Mazumdar‑Shaw
5 arguments · 0 words per minute · 0 words · 1 second
Argument 1
Health Stack & AI Risk Profiling Enable Universal Care
EXPLANATION
Kiran proposes building a comprehensive health data stack that combines phenotypic, genomic, demographic and treatment data, enabling AI‑driven risk profiling and insurance integration to move toward universal, preventive healthcare.
EVIDENCE
She outlines India’s emerging health data stack, including phenotypic, genomic, demographic and radiological data, and argues that, using AI, this can support rapid risk profiling, insurance integration and universal care delivery [119-128].
MAJOR DISCUSSION POINT
Health Stack & AI Risk Profiling Enable Universal Care
AGREED WITH
Sunil Wadhwani, James Manyika
Argument 2
Open Health Data Stack Enables Risk Profiling & Insurance Integration
EXPLANATION
Kiran emphasizes that an open, consent‑based health data platform, similar to UPI, can be leveraged for AI‑driven risk assessment and the creation of innovative insurance products, accelerating universal healthcare.
EVIDENCE
She points to India’s open-source, consent-based health data stack and suggests applying the same principles used in UPI to health, allowing AI to quickly risk-profile populations and integrate insurance mechanisms [119-128].
MAJOR DISCUSSION POINT
Open Health Data Stack Enables Risk Profiling & Insurance Integration
AGREED WITH
James Manyika, Nandan Nilekani
Argument 3
AI Can Learn Energy‑Efficient Computation from Biology
EXPLANATION
Kiran notes that biological systems compute using only sips of energy, unlike data‑center AI models that consume gigawatts, and suggests AI can adopt these energy‑efficient principles from biology.
EVIDENCE
She states that biology works through distributed data centers using sips of energy rather than gigawatts, implying AI can learn to compute more efficiently from biological processes [271-273].
MAJOR DISCUSSION POINT
AI Can Learn Energy‑Efficient Computation from Biology
Argument 4
Virtual Cell Modeling & Reprogramming Cells as Future Frontier
EXPLANATION
Kiran envisions a future where AI helps reprogram cancer cells, create virtual cell models, and advance regenerative medicine, positioning this convergence as a transformative frontier for medicine.
EVIDENCE
She describes ambitions to reprogram cancer cells into non-malignant ones, develop virtual cell models, and explore regenerative science, framing these goals as the “holy grail” of future medicine [141-144].
MAJOR DISCUSSION POINT
Virtual Cell Modeling & Reprogramming Cells as Future Frontier
Argument 5
Empowering community health workers (ASHA) with AI to extend primary care
EXPLANATION
Kiran highlights that India’s ASHA community health workers can be equipped with AI tools, enabling them to deliver health services more effectively at the grassroots level and advancing universal healthcare.
EVIDENCE
She notes that by deploying AI for the common people, the ASHA workforce can be empowered with AI, making it even more powerful for reaching the masses [130-132].
MAJOR DISCUSSION POINT
AI‑augmented community health workers for universal care
Moderator
3 arguments · 121 words per minute · 338 words · 167 seconds
Argument 1
Population‑scale AI impact requires built‑in coordination mechanisms
EXPLANATION
The moderator asserts that AI can only achieve true population‑scale benefits if there is coordination embedded within the system that guides its deployment and use.
EVIDENCE
He opens the summit by stating that AI’s true potential lies in delivering population-scale impact, but that such impact can only be possible when there is coordination built into the system [1-3].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Opening remarks stress that coordination built into AI systems is essential for population-scale impact [S7]; Manyika’s discussion of coordination layers reinforces this point [S8]; infrastructure policy briefs underline coordination as a foundational element [S21].
MAJOR DISCUSSION POINT
Need for systemic coordination to realize AI’s population‑scale impact
Argument 2
Open networks and digital public infrastructure serve as a global coordination rail for AI
EXPLANATION
The moderator frames the purpose of the summit as exploring how open networks and digital public infrastructure can create an interoperable layer that translates human intent into concrete actions across borders.
EVIDENCE
He explains that today’s discussion will focus on how open networks and digital public infrastructure can create a global, interoperable coordination rail powered by AI to translate intent into action across borders [3].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The summit’s framing positions open networks as the interoperable coordination rail for AI across borders [S7]; Manyika’s remarks echo this vision of open networks as a global coordination layer [S8]; policy analyses on infrastructure as a bridge for the AI divide provide additional context [S21].
MAJOR DISCUSSION POINT
Open networks as the coordination layer for global AI deployment
Argument 3
Call for broader participation through the Google.org Impact Challenge
EXPLANATION
In closing, the moderator invites changemakers and researchers to apply for two Google.org Impact Challenges—one for AI for Science and another for Government Innovation—to accelerate population‑scale AI impact.
EVIDENCE
He announces the two Google.org Impact Challenges and encourages attendees to visit the exhibition booths to see real-world AI impact, thereby mobilising further participation [342-345].
MAJOR DISCUSSION POINT
Mobilising innovators via impact challenges to scale AI solutions
Kiran Mazumdar-Shaw
3 arguments · 145 words per minute · 1175 words · 485 seconds
Argument 1
AI‑driven shift to preventive, primary‑care health system
EXPLANATION
Kiran stresses that AI should enable a transition from hospital‑centric models to community‑based, preventive medicine, using predictive tools to keep people healthy before disease strikes.
EVIDENCE
She states that AI can support predictive and preventive medicine, allowing a shift from hospital-centric care to primary and community care, highlighting this as a key future direction [148-149].
MAJOR DISCUSSION POINT
AI enables transition to preventive, primary‑care health systems
Argument 2
Goal of a sustainable, high‑quality universal health‑care standard through AI
EXPLANATION
In the lightning‑round, Kiran expresses a desire to see AI produce a sustainable, high‑quality universal health‑care standard that can be delivered at scale across the population.
EVIDENCE
She says, “I definitely want to see a sustainable standard of care, high-quality universal health care coming out of this AI effort and the health stack” [311-313].
MAJOR DISCUSSION POINT
AI should deliver a sustainable, universal health‑care standard
Argument 3
AI can encode complex exclusion‑inclusion criteria for health interventions
EXPLANATION
Kiran points out that AI systems can be programmed with detailed exclusion‑inclusion rules, enabling nuanced risk profiling and tailored insurance products for health care.
EVIDENCE
She notes that “AI can be given a lot of exclusion-inclusion criteria, which it can adopt” when discussing risk profiling and insurance integration [128].
MAJOR DISCUSSION POINT
AI’s capacity to handle sophisticated eligibility criteria in health applications
Agreements
Agreement Points
Open networks and digital public infrastructure act as a coordination layer that enables AI to translate human intent into concrete actions at population scale.
Speakers: Moderator, James Manyika, Nandan Nilekani, Sunil Wadhwani, Sangbu Kim
Coordination Layer via Open Networks · Open Networks Enable Multitude of Innovators & Agents · DPI Provides Data Pipelines & Distribution Channels · Open Standards Essential for User‑Centric Services
All speakers stress that open, interoperable digital public infrastructure provides the coordination rail needed for AI to reach billions of users, allowing new models to be plugged in and many innovators to build applications on top of it [1-3][19-21][73-80][170-174][105-106][252-254].
POLICY CONTEXT (KNOWLEDGE BASE)
This view aligns with the DPI-for-Health policy emphasis on coordinated public data stacks as a foundation for large-scale AI services [S44][S45] and reflects broader calls for multi-stakeholder coordination platforms in digital public infrastructure frameworks [S50].
Multilingual/local‑language AI agents are essential to remove language barriers and achieve inclusive, massive diffusion of AI services.
Speakers: James Manyika, Nandan Nilekani, Kiran Mazumdar‑Shaw
Emphasis on Local Language Access in AI Deployments · Multilingual Agents Remove Language Barrier for Users · Open Health Data Stack Enables Risk Profiling & Insurance Integration
The panel repeatedly notes that delivering AI in users’ native languages, through multilingual agents or language-specific datasets, eliminates a key barrier to adoption and is the “holy grail” for universal inclusion [83][80-82][94-95][91-94].
Dramatically lowering AI inference costs is a prerequisite for population‑scale impact.
Speakers: Nandan Nilekani, James Manyika, Sunil Wadhwani
Need for Dramatically Lower Inference Costs for Scale · Plug‑in New Models into Networks Highlights Inference Cost Importance · Ultra‑Low Cost Solutions (5 paise per student) Show Feasibility
Speakers agree that if a single inference query remains expensive, AI cannot be deployed at scale; therefore the focus must shift from ever-larger models to cheap, efficient inference and ultra-low-cost service delivery [246-248][252-254][211-212].
POLICY CONTEXT (KNOWLEDGE BASE)
Industry leaders note that falling inference costs are critical for scaling AI, as highlighted in discussions on AI-native business transformation and OpenAI’s custom inference chip initiatives [S65][S66].
Open network architectures enable new decentralized services such as peer‑to‑peer energy trading.
Speakers: Nandan Nilekani, James Manyika
Open networks can enable peer‑to‑peer energy trading services
Both illustrate how an open network lets a farmer with rooftop solar sell excess power to another user via a simple AI-driven agent, demonstrating the broader economic potential of open digital rails [258-263][256-263].
POLICY CONTEXT (KNOWLEDGE BASE)
Policy analyses on clean-energy transitions stress the need for networked governance and programmable power grids to support peer-to-peer energy markets [S53][S54].
AI‑powered agents and open networks transform agriculture by delivering weather forecasts, pest surveillance, credit and crop‑prediction services to farmers.
Speakers: James Manyika, Nandan Nilekani, Sangbu Kim
AI‑powered Agents Transform Agriculture Services · Open Networks Enable Multitude of Innovators & Agents · AgriConnect Improves Farmer Efficiency & Can Extend to Other Sectors
The panel cites the Gemini-powered AgriConnect pilot, weather-model plug-ins, and pest-surveillance integration as examples of how open, AI-enabled services boost farmer productivity and can be replicated globally [30-33][44-45][79-81][102-108][252-254].
POLICY CONTEXT (KNOWLEDGE BASE)
The role of open protocols in agri-AI scaling has been documented in DPI-based frameworks and case studies on AI weather forecasting and agritech deployments [S48][S68][S69].
AI‑driven health solutions built on open data stacks and DPI can enable universal, preventive care and improve outcomes such as TB detection and early‑grade reading assessment.
Speakers: Kiran Mazumdar‑Shaw, Sunil Wadhwani, James Manyika
Health Stack & AI Risk Profiling Enable Universal Care · DPI Platforms (NIXA) Supply Critical Health Data for AI · AI‑enabled multilingual support for health workers
All three highlight that a consent-based health data stack (phenotypic, genomic, demographic) combined with AI risk profiling, community health-worker tools, and DPI-backed data pipelines can deliver scalable preventive health and education services, as shown in TB cough-analysis and rapid reading diagnostics [119-128][130-132][190-206][43-48].
POLICY CONTEXT (KNOWLEDGE BASE)
Multiple reports underscore DPI’s foundational value for universal health coverage, citing AI applications for TB detection and early-grade assessment within digital health strategies [S44][S45][S46][S47].
Similar Viewpoints
Both argue that open, interoperable networks provide the coordination layer that lets many innovators build AI‑driven applications and agents, thereby accelerating diffusion [19-21][73-80].
Speakers: James Manyika, Nandan Nilekani
Coordination Layer via Open Networks · Open Networks Enable Multitude of Innovators & Agents
Both emphasize that an open health data stack, analogous to UPI, supplies the data and pipelines needed for AI‑based risk profiling, insurance integration and large‑scale disease‑control programmes [119-128][190-196].
Speakers: Kiran Mazumdar‑Shaw, Sunil Wadhwani
Health Stack & AI Risk Profiling Enable Universal Care · DPI Platforms (NIXA) Supply Critical Health Data for AI
Both stress that open standards and modular network architectures allow new AI models (e.g., weather forecasts) to be quickly integrated, keeping services affordable and user‑centric [105-106][252-254].
Speakers: Sangbu Kim, James Manyika
Open Standards Essential for User‑Centric Services · Plug‑in New Models into Networks Highlights Inference Cost Importance
Both point out that without dramatically reduced inference costs, even ultra‑cheap solutions cannot be scaled; they cite the 5 paise reading assessment as a proof‑of‑concept for affordable AI at scale [211-212][246-248].
Speakers: Sunil Wadhwani, Nandan Nilekani
Ultra‑Low Cost Solutions (5 paise per student) Show Feasibility · Need for Dramatically Lower Inference Costs for Scale
Unexpected Consensus
Applying open, consent‑based data‑sharing models from payments (UPI) to health and education sectors.
Speakers: Kiran Mazumdar‑Shaw, Sunil Wadhwani, James Manyika
Health Stack & AI Risk Profiling Enable Universal Care · DPI Platforms (NIXA) Supply Critical Health Data for AI · Freely Available AlphaFold Data Illustrates Power of Open Scientific Data
While open data is common in scientific research, the panel unexpectedly converges on the idea that the same open-network, consent-driven approach used for financial transactions can be replicated in health (risk profiling, TB data) and education (reading diagnostics), signalling a cross-sectoral shift toward openness [119-128][190-196][14-16].
POLICY CONTEXT (KNOWLEDGE BASE)
Policy harmonisation discussions advocate extending consent-driven data-sharing mechanisms, originally pioneered in payment systems, to health and education data ecosystems [S50].
Overall Assessment

There is strong, cross‑speaker consensus that open digital public infrastructure, multilingual AI agents, and ultra‑low inference costs are the foundational pillars for delivering AI at population scale across agriculture, health, and education. The panel collectively envisions a coordinated, open‑network ecosystem that can be replicated globally.

High consensus – the alignment across senior leaders from Google, the private sector, the World Bank and academia suggests a shared strategic direction, which bodes well for coordinated policy action, investment in DPI, and multi‑country scaling of AI solutions.

Differences
Different Viewpoints
Primary bottleneck for scaling AI: inference cost vs data infrastructure
Speakers: Nandan Nilekani, Sunil Wadhwani
Need for Dramatically Lower Inference Costs for Scale · DPI Provides Data Pipelines & Distribution Channels
Nandan stresses that cheap AI inference is essential, warning that high per-query costs would prevent population-scale deployment and calls for a focus on reducing inference expense [246-248]. Sunil argues that the decisive factor is the availability of Digital Public Infrastructure that supplies data pipelines and distribution channels, without which AI models cannot be deployed at scale [170-174]. Both aim for large-scale impact but prioritize different technical constraints.
POLICY CONTEXT (KNOWLEDGE BASE)
Panel debates identify compute democratization and data-center coordination as the chief obstacles to AI scaling, highlighting inference cost and infrastructure gaps as key constraints [S62][S61].
Role of open standards versus open networks in delivering user‑centric AI services
Speakers: Sangbu Kim, James Manyika
Open Standards Essential for User‑Centric Services · Coordination Layer via Open Networks
Sangbu emphasizes that open standards are crucial to ensure affordable, user-centric services in the AI era [105-106]. James highlights open networks as the coordination rail that translates intent into action, without explicitly addressing the need for formal standards [19-21]. The difference reflects a subtle disagreement on which technical foundation is most important for scaling AI services.
POLICY CONTEXT (KNOWLEDGE BASE)
Analyses of social-good AI emphasize tensions between standardisation for interoperability and contextual flexibility of open networks, reflecting broader governance debates [S64][S56].
Unexpected Differences
None identified
The transcript shows a high degree of consensus among speakers on the overarching goals of open digital infrastructure and AI for population‑scale impact. No clear contradictions or surprising oppositions emerged beyond the nuanced focus differences captured above.
Overall Assessment

The discussion was largely collaborative, with participants unified around the vision of using open digital infrastructure to deliver AI at population scale. The main points of contention revolve around which technical component—low‑cost inference versus robust data pipelines, or open standards versus open networks—should be prioritized to unlock that scale.

Low to moderate disagreement; the differences are strategic rather than ideological, suggesting that coordination among stakeholders can reconcile these perspectives without major conflict, facilitating progress toward the shared objective of scalable, inclusive AI.

Partial Agreements
All participants agree that a digital coordination layer—whether framed as open networks or Digital Public Infrastructure—is essential to achieve population‑scale AI impact across sectors. However, they differ on which element (open network architecture, data pipelines, or distribution channels) should be emphasized as the primary driver [19-21][79-81][170-174][1-3].
Speakers: James Manyika, Nandan Nilekani, Sunil Wadhwani, Moderator
Coordination Layer via Open Networks · Open Networks Enable Multitude of Innovators & Agents · DPI Provides Data Pipelines & Distribution Channels · Population‑scale AI impact requires built‑in coordination mechanisms
Takeaways
Key takeaways
Open Networks and Digital Public Infrastructure (DPI) act as the essential coordination layer that enables AI to translate intent into real‑world action at population scale.
AI functions as a multiplier across agriculture, healthcare, and education when embedded in open, interoperable networks.
Multilingual AI agents are critical for removing language barriers and achieving inclusive, mass adoption.
The cost of AI inference must be dramatically reduced for large‑scale deployment; low‑cost inference enables plug‑and‑play of new models (e.g., weather, health) across networks.
Open, consent‑based data sharing (health stacks, scientific datasets like AlphaFold) fuels risk profiling, insurance models, and universal service delivery.
Indian models (UP AgriConnect, TB diagnosis, reading assessment) demonstrate a replicable blueprint that can be scaled to other countries in the Global South.
Future breakthroughs lie at the convergence of biological intelligence and artificial intelligence, informing energy‑efficient computation and cell‑level therapeutics.
Resolutions and action items
Google to continue funding open‑network initiatives through the $10 million Google.org grant to the Networks for Humanity Foundation.
Scale the multilingual AI agent platform for farmers (AgriConnect) and extend it to health and education sectors.
Deploy the TB cough‑sound diagnosis and reading‑assessment AI tools nationwide via existing DPI platforms (NIXA, Rakshak).
Target 75 million students and 2 million educators with AI‑enhanced learning experiences by end‑2027.
World Bank to work on standardising and replicating the AgriConnect model in Africa, Brazil and the Philippines.
Encourage researchers and changemakers to apply for the Google.org Impact Challenges (AI for Science and Government Innovation).
Commit to lowering AI inference costs and creating plug‑in mechanisms for updated models (e.g., weather forecasts) within open networks.
Unresolved issues
Concrete mechanisms for achieving ultra‑low inference costs at the scale required for billions of daily queries.
Global standard‑setting process for open network protocols that satisfy diverse regulatory and privacy regimes.
Strategies to overcome data‑ownership, IP, and privacy concerns that hinder broader data sharing beyond India.
Detailed roadmap for adapting Indian‑origin AI solutions to differing agricultural, health, and education contexts in other countries.
Sustainable financing models for long‑term operation of AI services once grant funding ends.
Suggested compromises
Balancing open, decentralized network architecture with responsible AI governance and privacy safeguards (as emphasized by Nandan Nilekani and Kiran Mazumdar‑Shaw).
Using government‑run DPI as distribution channels while allowing private innovators to build applications on top, ensuring both public control and private innovation.
Thought Provoking Comments
AlphaFold solved the 50‑year grand challenge of protein structure prediction, and the freely available AlphaFold protein database is now used by more than 3 million researchers in over 190 countries – with India the fourth largest adopter.
Demonstrates how open, publicly‑available AI research can create massive, global scientific impact, illustrating the power of shared resources rather than proprietary tools.
Set the opening tone that AI’s greatest value lies in open access; prompted other panelists to reference open data initiatives (e.g., language datasets, health stacks) and framed the discussion around scaling impact through shared infrastructure.
Speaker: James Manyika
AI agents on an open network are the fundamental construct for massive diffusion of technology – they remove complexity for the user, especially when the agent can interact in the user’s own language, making inclusion at massive scale possible.
Links the technical concept of AI agents with the practical challenge of language barriers, positioning open networks as the vehicle for inclusive, large‑scale adoption.
Shifted the conversation toward multilingual accessibility; led James to highlight language initiatives and spurred Kiran and Sunil to discuss how language‑aware AI can be embedded in health and education services.
Speaker: Nandan Nilekani
India is building a health stack that aggregates phenotypic, genomic, demographic, radiological, and treatment outcome data. With consent‑based, secure sharing (like UPI), AI can risk‑profile populations, integrate insurance, and empower ASHA workers, moving toward universal, preventive healthcare.
Introduces a concrete vision of a nation‑wide, interoperable health data ecosystem and shows how AI can transform preventive care and insurance models, extending the open‑network concept to health.
Expanded the discussion from agriculture to health, prompting Sunil to give concrete DPI‑enabled health examples (TB detection) and reinforcing the need for data pipelines and consent frameworks.
Speaker: Kiran Mazumdar‑Shaw
Biology operates as distributed data centers using sips of energy, with generational learning encoded in DNA. AI should learn from this to achieve energy‑efficient, multimodal intelligence; the convergence of biological and artificial intelligence will be transformational.
Provides a deep, cross‑disciplinary insight linking biological principles to AI design, suggesting a paradigm shift in how AI systems could be built.
Introduced a higher‑level conceptual layer, prompting James to reference earlier discussions on AI’s role in biology and encouraging the panel to consider long‑term research directions beyond immediate applications.
Speaker: Kiran Mazumdar‑Shaw
Digital Public Infrastructure (DPI) provides two key benefits: (1) data and data pipelines essential for training AI models, and (2) distribution channels that let inference reach billions at low cost. Without DPI, scaling AI in the social sector would be prohibitively expensive.
Articulates the foundational role of government‑backed digital infrastructure in making AI scalable and affordable, moving the conversation from technology to systemic enablers.
Reinforced earlier points about open networks, gave a concrete framework that other speakers referenced (e.g., Nandan’s inference cost, Sangbu’s cross‑sector scaling), and set up the segue into real‑world case studies.
Speaker: Sunil Wadhwani
Using a smartphone to record the sound of a cough, our AI model can diagnose TB instantly, increasing case detection by 25 % nationally; we also automate lab results and predict treatment non‑adherence, all powered by the NIXA DPI platform.
Provides a vivid, low‑cost, high‑impact example of AI in public health, illustrating how DPI enables rapid deployment and measurable outcomes.
Served as a turning point that moved the discussion from abstract ideas to tangible results, prompting applause and encouraging other panelists to envision similar deployments in education and agriculture.
Speaker: Sunil Wadhwani
The cost of AI inference must drop dramatically for population‑scale impact. Open networks let us plug in better models—like Google’s improved weather forecasts—so millions of farmers instantly benefit without each having to run expensive computations themselves.
Highlights a critical technical bottleneck (inference cost) and shows how open network architecture solves it, bridging the gap between model development and real‑world usage.
Steered the conversation toward scalability challenges, leading James to discuss weather model integration and prompting Sangbu to consider how similar plug‑and‑play approaches could be replicated across sectors and countries.
Speaker: Nandan Nilekani
AgriConnect is designed as an open‑stack, open‑network platform that can be extended beyond agriculture to health and education, creating a universal network for the AI era.
Broadens the scope of a sector‑specific initiative to a cross‑domain framework, emphasizing the versatility of open infrastructure.
Encouraged the panel to think about scalability across domains and geographies, leading to discussions about replicating models in Africa, Brazil, and the Philippines, and reinforcing the theme of universal, interoperable AI services.
Speaker: Sangbu Kim
Overall Assessment

The discussion was driven forward by a series of pivotal remarks that moved the conversation from high‑level optimism about AI to concrete mechanisms for achieving population‑scale impact. James’s opening example of AlphaFold established the value of open, shared AI resources. Nandan’s articulation of AI agents and inference cost framed the technical and inclusion challenges, while Sunil’s DPI explanation and TB case study grounded the dialogue in practical, low‑cost deployments. Kiran’s health‑stack vision and biological‑intelligence analogy expanded the scope into new domains and offered a deeper, interdisciplinary perspective. Sangbu’s emphasis on a universal network tied these strands together, showing how sector‑specific successes can be replicated globally. Collectively, these comments shifted the tone from abstract promise to actionable infrastructure, highlighted the necessity of open, interoperable platforms, and underscored the role of government‑backed digital public infrastructure in turning AI potential into real‑world, equitable outcomes.

Follow-up Questions
What global standards are needed to scale local AI solutions (e.g., AgriConnect) across different countries?
Understanding interoperable standards is essential for replicating successful pilots like AgriConnect in diverse regulatory and technical environments.
Speaker: James Manyika
How can the cost of AI inference be dramatically reduced to enable affordable, population‑scale services?
High per‑query inference costs could prevent widespread adoption; research is needed on efficient model architectures, hardware, and pricing models.
Speaker: Nandan Nilekani
What frameworks are required to apply India’s consent‑based, secure data‑sharing model (used in UPI) to the health data stack?
Extending proven data‑sharing mechanisms to health data could unlock risk profiling and insurance integration while protecting privacy.
Speaker: Kiran Mazumdar‑Shaw
Which components of the AgriConnect model are critical for replication in other regions, and how can they be adapted to local contexts?
Identifying the minimal, scalable elements will help the World Bank and partners transfer the solution to Africa, Brazil, the Philippines, etc.
Speaker: Sangbu Kim
How can the global reluctance to share data—especially outside India—be overcome to avoid siloed AI development?
Data silos limit AI impact; research into incentives, governance, and trust mechanisms is needed to promote open data sharing internationally.
Speaker: Kiran Mazumdar‑Shaw
How can AI‑driven risk profiling be integrated with insurance products to create sustainable, universal health‑care delivery models?
Combining predictive health analytics with insurance could improve coverage and outcomes, but requires new actuarial and regulatory approaches.
Speaker: Kiran Mazumdar‑Shaw
What AI techniques can be developed to reprogram cells, convert malignant cells to non‑malignant ones, and advance regenerative medicine?
Leveraging AI for cellular engineering promises breakthroughs in cancer treatment and longevity, demanding interdisciplinary research.
Speaker: Kiran Mazumdar‑Shaw
How can AI agents reliably understand and process multilingual, code‑mixed language inputs common in India?
Effective language handling is crucial for inclusive AI agents that interact with users in their native linguistic blends.
Speaker: Nandan Nilekani
What metrics and evaluation frameworks should be used to assess the impact, cost‑effectiveness, and scalability of AI‑based reading diagnostics for early‑grade students?
Quantifying educational outcomes and financial sustainability is needed to justify large‑scale rollout to millions of children.
Speaker: Sunil Wadhwani
How can open, interoperable Digital Public Infrastructure (DPI) be designed to serve as universal distribution channels for AI models in health, education, and agriculture?
Creating DPI that seamlessly deliver AI inference at scale requires standards for data pipelines, authentication, and integration with government systems.
Speaker: Sunil Wadhwani
What security and governance models ensure that open, decentralized AI networks remain trustworthy while preserving user privacy?
Balancing openness with security is vital for public adoption and for preventing misuse of AI‑enabled services.
Speaker: James Manyika
How can AI be used to enable predictive and preventive medicine at a population level, shifting from hospital‑centric to community‑centric care?
Research is needed to translate AI‑driven risk predictions into actionable community health interventions.
Speaker: Kiran Mazumdar‑Shaw
What policies and technical solutions are required to bridge the AI divide and ensure equitable access to AI benefits across socioeconomic groups?
Addressing inequities is essential for realizing AI’s potential for societal transformation and avoiding new forms of digital exclusion.
Speaker: James Manyika

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

AI 2.0 The Future of Learning in India


Session at a glance: Summary, keypoints, and speakers overview

Summary

The session opened with Speaker 1 announcing a joint Centre for Policy Research and Governance (CPRG) and Future of Society initiative that has already produced reports on AI in higher education and is now releasing a new study on AI use in school education [1-9].


Pranav Kothari presented the survey, noting that roughly half of private-school students in Delhi use generative-AI tools multiple times a week, mainly for information search and writing assistance, while usage for structured tasks such as calculations remains low [25-27][32].


Students perceive AI as helpful for both school and entrance exam preparation, yet they also report frequent hallucinations and lower accuracy for logical or numerical subjects [39-41][45-48].


Despite a strong preference for traditional resources like YouTube over AI platforms, respondents view AI as a supplementary aid rather than a replacement for human teachers [50-52][56-58].


Professor KK Aggarwal emphasized that AI adoption is outpacing earlier IT adoption and warned that AI must augment creativity without becoming a shortcut that erodes thinking skills [78-84].


A senior commentator highlighted AI as a 360-degree paradigm shift, arguing that institutions, not nations, will determine global competitiveness and that India must reimagine its education system for the long term [97-104][108-115][118-124].


Pankaj Arora described a structural shift in knowledge production, citing rapid, technology-driven curriculum revisions and asserting that AI should function as an assistant requiring human supervision and ethical oversight [156-166][170-176][178-184][190-199].


Patil outlined the massive scale-up of AI usage compared with past technologies, pointing to infrastructure gaps such as limited ICT resources in many schools and the need to train millions of teachers in AI literacy [214-224][230-236][240-250].


He also noted that AI curricula are being introduced from grade three to demystify the technology and that AI tools are already being used to translate local languages and monitor dropout rates, illustrating both potential and the risk of misuse [232-239][260-274].


Aditi Nanda from Intel described industry collaborations that provide AI-enabled translation, offline tutoring devices, and partnerships with startups to create localized, low-hallucination content for K-12 learners [285-340][342-350][354-360].


Suresh Yadav called for a shift from a consumption-driven to a creation-driven nation, proposing that universities become problem-solving hubs and that the three education tiers be interconnected through technology [386-401].


Patil and Pankaj further proposed AI-oriented regulatory frameworks where 70-80 % of assessments are automated, stressing the importance of research ethics, Indian language support, and AI-driven mentorship programs [404-412][415-422].


The panel concluded that AI will be an indispensable, but carefully governed, component of India’s future education ecosystem, requiring integrated curricula, ethical safeguards, and coordinated effort across schools, higher education, industry, and government to realize the vision of a “Viksit Bharat 2047” [210-214][463-465].


Keypoints


Major discussion points


AI usage among school-age students is already widespread but uneven.


The survey in Delhi shows that about half of private-school students use generative-AI tools multiple times a week, mainly for information search and writing assistance, and they perceive AI as helpful for exam preparation. However, students report frequent “hallucinations,” lower accuracy for logical or numerical tasks, and a strong preference for traditional human teachers over AI tutors.  [24-27][28-33][39-46][46-48][50-57]


Integrating AI into India’s education system faces massive infrastructural and equity challenges.


Panelists highlighted the digital divide across urban, rural, and tribal schools, the limited ICT resources in many institutions, and the need to up-skill millions of teachers. They cited rapid adoption curves (e.g., ChatGPT reaching 5 crore users in 40 days) contrasted with the slow rollout of computers in schools, stressing that AI-driven reforms must address access, language barriers, and ethical use. [214-222][224-236][239-247][254-262][273-276]


Re-imagining curricula and assessment: AI as a supplement, not a replacement.


Speakers argued that AI should enhance creativity, mentorship and adaptive learning while preserving human judgment. Proposals included AI-driven curriculum revision, AI-assisted assessment (70-80 % AI-based evaluation for teacher-education regulators), and the development of Indian-language, culturally-relevant AI content to avoid over-reliance on Western models. [78-84][156-170][176-190][404-410][420-424]


Strategic national vision: positioning India as a global AI leader through education.


The discussion linked AI adoption to India’s long-term economic ambitions (e.g., the $70 trillion GDP vision) and geopolitical standing, emphasizing that world-class universities and AI-centric policies are essential for “AI leadership” and for achieving the “Viksit Bharat 2047” goal. [97-104][108-115][118-124][129-136][138-144][140-144]


Overall purpose / goal


The session was convened to launch the CPRG “AI in School Education” report, present its key findings, and use the report as a springboard for a broader dialogue on how AI will reshape learning, skill development, and institutional design in India. The participants aimed to identify challenges, propose policy and curriculum reforms, and articulate a collective vision for an AI-enabled education ecosystem that can drive national competitiveness.


Tone of the discussion


– The opening remarks and Pranav’s presentation were formal and data-driven, focusing on survey results.


– As the panel progressed, the tone became analytical and cautionary, highlighting the digital divide, accuracy issues, and ethical concerns.


– Later contributions (e.g., Professor Aggarwal, Ramanan, Patil, and the Intel representative) shifted to an optimistic and visionary tone, emphasizing opportunities, national ambition, and transformative reforms.


Overall, the conversation remained constructive and collaborative, moving from factual reporting to strategic aspiration while consistently acknowledging the risks and required safeguards.


Speakers

Speaker 1 – Moderator/host (role not explicitly stated in the transcript).


Pranav Kothari – Presenter / researcher (role not explicitly stated).


Professor KK Aggarwal – President, South Asian University; former Vice-Chancellor who developed Indraprastha University; expertise in IT and higher-education development [S10][S11].


Pankaj Sir – Chairperson, National Council of Teacher Education (NCTE); former Head and Dean, University of Delhi; expertise in curriculum development and teacher education [S4].


Suresh Sir – Executive Director, Commonwealth Secretariat (as introduced in the panel).


Patil Sir – Administration Secretary, School Education (also involved in higher-education initiatives).


Speaker 2 – Senior official / moderator (specific title not mentioned).


Speaker 3 – Industry representative (speaks about Intel’s work and ecosystem collaborations).


Additional speakers:


Aditi Nanda – Director, Education and Industry, Intel (introduced as a panelist).


Dr. Namanan (Ramananji) – Addressed by Speaker 3; likely a senior official/moderator (title not specified).


Full session report: Comprehensive analysis and detailed insights

Opening remarks – Speaker 1 opened the session, welcomed participants and announced a joint initiative between the Centre for Policy Research and Governance (CPRG) and the “Future of Society” project. The partnership has already released a report on AI use in higher education and is now unveiling the AI-in-School Education report [1-9]. He noted rising public anxiety that current skill sets may become obsolete as emerging technologies reshape jobs, positioning the new report as a response to these concerns [10-14].


Survey findings – Mr Pranav Kothari presented the key results of the “AI in School Education” survey, which was conducted in Delhi by interviewing students from private schools [17-24]. Approximately half of the respondents use generative-AI tools such as ChatGPT or Gemini multiple times a week [25-27]. Their main uses are searching for academic information and obtaining writing assistance; science students employ AI more for concept learning than for structured calculations, where accuracy remains low [32]. Students view AI as helpful for both school-level and entrance-level exam preparation [39-41], but they also report frequent “hallucinations” and reduced reliability for logical or numerical tasks [45-48].


Aggarwal’s perspective – Prof KK Aggarwal was asked, “While working with President Mukherjee you have introduced a lot of technological tools…?” [78-84]. In his response he highlighted the speed of AI adoption, noting that it outpaces the earlier IT wave. He cautioned that AI should supplement creativity and must not become a shortcut that diminishes thinking skills [78-84]. He stressed that academia’s challenge is to ensure AI enhances, rather than replaces, human cognition.


Speaker 2’s 360° paradigm shift – Speaker 2 framed AI as a 360° paradigm shift that will determine whether institutions become “fossilised” or emerge as global leaders [97-100]. He linked this transformation to India’s long-term economic ambitions, arguing that world-class universities and AI-centric policies are crucial for achieving the $70 trillion GDP vision and for securing geopolitical influence [108-115][118-124]. He also highlighted AI’s capacity to dismantle language barriers, thereby expanding educational access across the country [138-144][140-144].


Arora on structural change – Prof Pankaj Arora described a structural and epistemic shift in knowledge production, citing rapid, technology-driven curriculum revisions completed without traditional meetings or large budgets [156-170]. He positioned AI as an “assistant” that requires human supervision, arguing that teachers should evolve into mentors, ethical guides, and designers of learning experiences [176-184][190-199]. Arora proposed an AI-oriented regulator for teacher education, where 70-80 % of assessments would be AI-based, and called for the development of Indian-language, culturally-relevant AI content to avoid over-reliance on Western models [404-410][420-424].


Patil on scale-up and infrastructure – Mr Andrao B. Patil highlighted the massive scale-up of AI usage, comparing ChatGPT’s 5 crore users in 40 days with the decades-long diffusion of earlier technologies [214-219]. He noted that only a fraction of Indian schools possess adequate ICT infrastructure (≈ 4 lakh of 15 lakh have computers or tablets) [221-224] and that millions of teachers lack AI literacy [221-227]. Patil announced that AI curricula are being introduced from Grade 3 to demystify the technology [232-239]. He cited pilot projects using AI for real-time language translation and dropout monitoring [254-262], and warned that treating AI as a human-like entity or misusing it could create mental-health stress for students [273-276]. He also mentioned that the Wadhwani Foundation has started an AI school in one of the IITs [215-218] and thanked Sarvam for its support [219-221]. Patil emphasized the establishment of an AI Centre of Excellence (AI-COE) at IIT Madras [214-219] and quoted a report stating that “one year of schooling yields a 24 % increase in labour output” [224-226]. Both he and Prof Arora used the exact phrase “Viksit Bharat 2047” to describe the long-term vision [156-170][210-214].


Industry-academia collaborations – Ms Aditi Nanda (Intel) described partnerships that deliver AI-enabled translation, offline tutoring devices, and collaborations with startups to create locally relevant, low-hallucination content for K-12 learners [285-340]. She showcased a device that runs AI entirely on-device, providing voice-to-voice translation without internet connectivity, and cited Dr Kamakoti’s Tamil-to-11-language translation tool at IIT Madras as a concrete example of multilingual AI deployment [332-337]. Intel’s programmes such as Unnati and “Future for Workforce” also offer AI curricula, internships, and AI-powered teaching tools that bridge industry needs and classroom practice [321-328].


Yadav’s national vision – Dr Suresh Yadav expanded the discussion to a creation-driven economy, urging universities to become problem-solving hubs rather than mere degree-granting bodies. He called for seamless integration of primary, secondary, and tertiary education through technology [386-401]. This aligns with the broader goal of “Viksit Bharat 2047,” where AI serves as the spine of the education system and a catalyst for exponential economic growth [210-214][463-465].


Consensus – Across the panel there was clear agreement that AI transformation demands a fundamental redesign of institutional structures, curricula and governance [1-13][78-84][97-100][197-199]. All speakers agreed that teachers must remain central as mentors and ethical guides while AI functions as an augmentative tool [78-84][176-184][232-239]. The need to build AI literacy among teachers and to bridge the digital-infrastructure divide was repeatedly stressed [221-227][192-195]. Participants also concurred that AI can dismantle language barriers, with both public-sector labs and private-sector devices offering multilingual translation [138-144][292-309][332-337].


Key disagreements


1. Regulatory automation vs. reliability – Arora’s proposal for an AI-driven regulator that would automate 70-80 % of assessments [404-408] conflicted with Kothari’s evidence of frequent hallucinations and low accuracy in logical/numerical tasks [45-48] and with Aggarwal’s caution that over-reliance on AI could shortcut creative thinking [78-84].


2. Pace of rollout – Patil emphasized a “quantum jump” in AI adoption and called for rapid scaling of AI-COEs [214-219][239-244], whereas Kothari warned that current tools still suffer from reliability issues [46-48] and that equity gaps remain stark [224-236].


3. Governance model – While Arora advocated a highly automated regulator [404-408], Speaker 1 and Aggarwal called for broader, human-centric, multi-stakeholder oversight to preserve creativity and ethical standards [10-13][78-84].


Key take-aways


– AI use among private-school students in Delhi is high (≈ 50 %) and focused on information search and writing assistance.


– Students find AI helpful for exam preparation but experience hallucinations and lower accuracy in logical subjects.


– AI should complement, not replace, teachers, who must act as mentors and ethical guides.


– AI represents a 360° shift that will fossilise institutions that do not adapt.


– India must aim for global AI leadership, leveraging AI to break language barriers and expand access.


– Higher-education reform must move beyond incremental tweaks to skill-based, problem-solving curricula powered by AI.


(These points are drawn from the detailed discussions above [25-27][39-41][45-48][56-58][78-84][97-100][156-170][214-219][285-340][386-401].)


Proposed action items


– Publish and disseminate the AI-in-School Education report.


– Establish an AI-oriented regulator for teacher education with a phased increase in AI-based assessment.


– Scale national programmes such as NPST and NMM that use AI for mentor-mentee matching.


– Introduce AI curricula from Grade 3 onward to build early AI literacy.


– Launch widespread teacher-training programmes on AI tools.


– Create AI Centres of Excellence and MOUs with technology firms (e.g., IIT Madras, Intel) to provide devices and internships.


– Integrate school and university ecosystems through outreach programmes (e.g., COEP’s 100-school engagement).


– Develop offline, on-device AI solutions to mitigate connectivity constraints and hallucination risks.


– Embed Indian languages and cultural content into AI models to ensure relevance and ethical use [404-410][420-424][214-219][232-239][321-328][386-401].


Unresolved issues – How to reliably mitigate AI hallucinations; how to fund and implement infrastructure upgrades in rural and tribal schools; how to design standardised adaptive-learning frameworks that outperform existing YouTube/ICT resources; how to ensure ethical oversight and data-privacy in AI-driven assessment; and how to coordinate a coherent governance model that balances automation with human oversight. Addressing these questions will be essential for realising the vision of an AI-enabled, inclusive, and globally competitive Indian education system.


Conclusion – The panel concluded that AI will be an indispensable, yet carefully governed, component of India’s future education ecosystem. Realising the “Viksit Bharat 2047” ambition will require integrated curricula, ethical safeguards, coordinated multi-stakeholder effort, and sustained investment in both technology and human capacity [210-214][463-465].


Session transcript: Complete transcript of the session
Speaker 1

Thank you everyone for joining this session. Before we start, I would like to tell you about the joint initiative of CPRG and Future of Society. The Centre of Policy Research and Governance is a policy think tank that continuously researches policy and governance issues in different fields. Two years ago, the Emerging Technology Centre was established here to study the development of technology and the relationship between technology and society. Under this centre, we developed the Future of Society project. Under Future of Society, we are continuously working across various sectors, producing reports and conducting extensive stakeholder consultations. In this light, just one year ago, we published a report on the usage of AI in higher education.

Now, we are going to release one more report, on the usage of AI in school education. Next month, we are again going to launch a report, Future of Jobs. There is a lot of fear, and this fear is not just outside; it is also in people’s minds: whether their acquired skills will survive the next 5 or 10 years as emerging technologies arrive. Along with this, there is also a fear that, with the type of tools being developed, human skills or the human mind will become irrelevant. Keeping all of this in mind, we are going to launch a report on what kind of transformation is happening, what future skills and future jobs are coming, and how they are going to be transformed.

But that is next month. The report that we are launching now is AI in School Education, and to launch it, I call all my guests and Mr. Pranav to the stage. Thank you.

Pranav Kothari

Now we have a short presentation with some salient findings from our study. AI in School Education is a survey report that we conducted late last year as part of our ongoing internal activities on mapping AI usage among students and across various sectors in India. Over the past year, CPRG has now released two reports on AI adoption in education. Last year, we released a report on AI adoption in higher education, the first ever survey-based report in India mapping everyday AI use among college students. Today, we are launching our new report on AI adoption in school education. Both studies were conducted in Delhi, where we actually went to students and interviewed them to understand what they are using AI for, how often they are using it, and what their challenges and opinions on AI usage are.

So firstly, if we just compare our broad findings, what we find is that AI use among school students remains relatively high, though marginally lower than what we found among college students in the same city, because both studies were conducted in Delhi. Yet what we find is that nearly 50 % of students (these are, of course, students from private schools in Delhi, which was our limited sample) use AI-based tools, whether generative AI platforms or other AI tools, multiple times a week. What are the patterns of AI or edtech use as per academic stream? What we are finding is that AI use, especially of generative AI platforms such as ChatGPT and Gemini, remains relatively high.

What this is also leading to is some sort of challenge to traditional methods of learning and to the edtech platforms that have become extremely prominent and widely used over the past few years. Then, what are students using AI for? Apart from asking how often students are using AI, we also tried to delve into what they are using it for, and what we find in our study is that AI use is essentially concentrated on searching for academic information while studying and on writing assistance. This of course varies across streams, because some students may be more engaged in practice and question solving. However, what we find is that among science students, for instance, while there is high AI usage for learning concepts, there is very limited usage for structured tasks like calculations or solving questions, because that is where various AI platforms still have relatively low accuracy.

Now, what is the perceived helpfulness of AI for school exams and entrance exams? There is relatively high perceived helpfulness of AI platforms for studying for both school exams and entrance exams, although students in the science stream who are preparing for entrance exams are still more dependent on offline classes or edtech platforms.

Yet the level at which we are seeing perceived AI helpfulness means that there is an emerging challenge to edtech platforms from free usage of generative AI platforms. Next, AI support in learning and performance: how do students rate AI-based platforms or tools in terms of their actual impact? What we find is that apart from learning complex topics and improving their time management, a substantial proportion of students actually attribute improvements in their academic performance to the use of AI platforms. At the same time, students report issues with accuracy and challenges in AI use. One of the major challenges is that a significant proportion of students regularly encounter AI hallucination, or are able to identify that they are getting incorrect information.

Then secondly, as I mentioned, when it comes to accuracy in logical or numerical subjects, there is relatively lower reported accuracy. Again, this is something that various platforms are still working on in terms of improving their performance and accuracy. Next, apart from understanding overall AI use, we also tried to compare the performance of AI platforms with other tools. So what we did was ask students, number one, are AI platforms better than YouTube or ICT-based learning? There we find that there is still overwhelming support for YouTube videos and ICT-based learning tools. Secondly, there is the whole question of adaptive learning and AI addressing individual needs.

Here, there is an overwhelming evaluation by students that while AI tools might be helpful, they are not necessarily providing solutions that are specific to their needs. This, of course, might be because of the nature of the AI tools that students are using, which in most cases are free models of generative AI platforms, as opposed to specific AI tools that are actually able to undertake adaptive learning. And then finally, we tried to ask about AI versus human interaction. On the idea of AI tutors or AI-based learning tools replacing in-person teaching, there again, there is overwhelming support for the idea that students still prefer

traditional, human-interaction-based learning. So what we are finding in our study is that while AI use is definitely increasing significantly among students, it is still considered a supplementary tool as opposed to a replacement or substitute for traditional teaching. So these were some of the findings; we have more detailed findings in our report. At the end, I would just like to thank our team that worked on this report. I would like to thank Nitin Mehta and Ms. Suchitra Tripathi for their guidance and oversight of this research, and I would like to thank our team members Gauri, Shreya, Anupriya, Rashi, Mika and Shugal for their active involvement and participation in the study.

Thank you so much.

Speaker 1

Thank you, Pranav ji, for the presentation. As panelists today we have Professor K. K. Aggarwal sir, President, South Asian University; Professor Pankaj Arora sir, Chairperson, National Council for Teacher Education; Suresh Yadav sir, Executive Director, Commonwealth Secretariat; Andrao B. Patil sir, Additional Secretary, Higher Education; and Aditi Nanda, Director, Education and Industry, Intel. Aggarwal sir, you have seen the transformation during the IT movement, and if I align it correctly, at that time you developed Indraprastha University, and perhaps because IT was itself still developing at that time, you were building a new institution alongside it. So when you were developing an institution then, it must have been on your mind how IT was going to challenge the traditional or conservative approach of institutions. Now again you are the President of South Asian University, one of the iconic institutions in India, and again you are facing a new challenge, this time from AI. So how do you find AI different from the earlier IT movement? In your lifetime you have seen two movements, first IT and now AI, and both times you were developing new institutions. How are you finding it?

Professor KK Aggarwal

Thank you for the question. Yes, in a way, when I was asked to develop the very first university of its kind, Delhi's Indraprastha University, it was a challenge because it was the first such university in the country, and you are quite right, the IT movement was also in the offing. It probably happened by coincidence that the vice-chancellor who was appointed at that time, which is me, belonged to the discipline of IT. This was probably never a calculation, but it happened, and it happened for the good of the country and of the university, I believe, because you could get a two-in-one kind of person to develop it. So we made sure that right from the beginning, IT was integrated. That was the time when, if you remember, I saw the students in Delhi.

Incidentally, I think this was the first university in Delhi for the students after Delhi University, which was an affiliating university. So I was seeing students go to the Delhi University colleges, feel unsatisfied about their employment prospects, and in the evening go to a tech company and do a course there. Now that was very disturbing to me: why should students feel unsatisfied at the end of formal school or formal college and then have to do that? So my first thought was, let us combine the two; our curriculum itself should integrate both. If the students have a job in the IT sector in mind, why should we not recognize this and make sure that every subject is more IT-savvy, and so on and so forth?

Now, when I am here, the challenge obviously, as you say, is AI. AI is fortunately being adopted by the youngsters even faster, which was expected; IT was also adopted by them faster than by the elders, and AI is being adopted much faster than by the elders. The only thing one has to see, as I said, is that in the whole process of using AI, let us make sure it supplements our creativity and does not give us a shortcut to creativity and thereby reduce our thinking powers. That is a challenge which we have to face in academics. Short of that, it is a good opportunity for all of us.

Speaker 1

While working with President Mukherjee you introduced a lot of technological tools and a lot of innovation, not only in the Finance Ministry but, as an advisor to the President, a lot of educational innovation as well. And I think that was ahead of its time, back in 2014 and 2015. After COVID-19, educational institutions have changed, and they are changing very fast. How would you analyse and assess this kind of change, and what would you suggest to educational institutions, and to the heads of institutions, to address the challenges posed by AI and other emerging technologies?

Speaker 2

Thank you very much, and first of all, a big congratulations on this fantastic report on AI in school education, and also on your previous reports on AI. It is very good documentation for understanding where we stand as a society, as a country, as institutions in the emerging landscape. COVID, Ramanan ji, drastically changed the way the world looks at the various ways of doing things. Going to the office was normal; now, not going to the office is normal. So there is a fundamental shift. It is very difficult to get people back to the office, and the argument is: if I can do my job better while sitting at home, why do you want me to come to the office?

So these are the fundamental shifts which we have witnessed post-COVID. And then if you look at artificial intelligence, it is a paradigm shift. It is not only a 180-degree shift, it is a 360-degree shift; we don't know which direction we are going. Any organization, any society, any institution which is not alive and kicking to this new emerging reality will be fossilized. Remember, in 1800 we were controlling almost one-third of the GDP of the world. And it was not the country which was leading; it was the institutions. It was the institutions of that time which were producing the skills that could produce the goods, services and material which could dominate the world. So it was the role of the institutions.

Of course, the government has now tried to recreate Nalanda, which is coming out very well. So the point I am trying to emphasize is that the role of educational institutions is of paramount importance. No country can dominate the world unless its institutions dominate the world. If you look today, the U.S. is dominating the world not because of military power, but because of its higher education system. If you look at China, the Chinese universities are coming out on top; the volume of research in computer science, AI, machine learning and computer vision is now dwarfing the research being done in the United States. So that is the level of the shift.

So when I am talking about your topic, reimagining the education system in India, I am not talking of today; I am talking of the India of 2050, the India of 2100. And one thing I keep saying: a lot of people say India is a $5 trillion economy, and they are very happy that we are the third largest in PPP terms, fourth largest in the other terms. But I am not happy, because for India, a country now of 1.5 billion people, if you look at the European standard of GDP per capita, we should be more than a $70 trillion economy; if you look at the American standard, we should be more than $150 trillion, more than the size of the world economy. So that is the level at which we have to think about what kind of institutions we need, what kind of infrastructure we need, what kind of history we need to make.

Is it the undergraduate degree, the master's degree, the PhD? I got all the degrees. I studied in India, at IIT and the Indian School of Business; I studied in the US, UK, Germany, Sweden, everywhere, just to educate myself on how things are different, what the fundamental differences are. So that is something we have to realize: this is not the time for doing mere reforms in the higher education system; it is a time for reimagining. You see, what we reimagined for India in terms of Digital India, we are now getting the dividend. We are a country on an entirely different level, generating billions of transactions on the digital UPI system, which was unheard of.

So similarly, we need a higher education system, a general education system, which can give an exponential bump to India's story. And that is not going to be the normal system; it is going to be something very, very different, and it is going to be based on the foundation of these technologies. We have been saying that this is the first time in the history of India, though it has been tried several times in the past, that the north and the south can be linked. Language barriers have always existed, and it is very difficult to do, but AI dismantles the barrier. I was in my village; we set up an AI lab, we set up an AI shop, and my message to the villagers was: you can speak in your Bhojpuri to the US, to Russia, to Japan. So for the first time a fundamental shift in connectivity is happening around the world. And India, being a young nation, a country of young people, with almost 44 million students in the higher education ecosystem, almost running parallel to China, has the power and potential to change, and the moment we are able to use this technology, I am sure we will realize that potential. So in terms of potential, I say India is the number one economy, not third or fourth. That is the mindset, because I have to reach my potential, and I will reach it only when I know what my potential is. So there is a huge responsibility on the Indians of the present generation, not only for themselves but for the Indians of 2050 and the Indians of 2100. And if we are not able to capitalize on this AI boom, we will be left behind.

If you see the geopolitics around the world, we say it is a new war and all, but it is a technology war; it is the AI war. Countries understand that those who dominate AI will dominate the world for the next century. So we have to love it; we have no option as a nation. And the education system, one of the biggest in the world, will have a very catalytic role in realizing that dream of the India of 2100. Thank you, and over to you, Ramanan ji.

Speaker 1

Pankaj sir, as a head and dean, you changed the curriculum of the University of Delhi; you introduced a lot of skill-based courses during your time and made them outcome-oriented. But the AI challenge is new. Now, as Chairperson of NCTE, you also see a lot of diversity among institutions, from Jhabua to Delhi; it is a multi-layer system. As Chairperson of NCTE, how will you ensure that all the institutions can respond in the same manner to the challenge of AI? Because there is a lot of diversity in India, and a lot of diversity in access to those kinds of resources.

Because AI also needs a lot of resources, not only in financial terms, but in terms of technology, electricity and other things. So how do you see it, and how will you ensure this?

Pankaj Sir

Then we must say that this structural and epistemic shift is not merely technological. It is a fundamental change in how knowledge is produced, assessed and evaluated in the day-to-day life of a student. If we look at teacher education, yes, at CIE, during my headship, we brought in new programs. We revised all the curricula of B.Ed., M.Ed. and ITEP, and during those changes our focus was to meet the expectations of young learners in the 21st century. The young learner is into technology throughout. When I was doing my college, in those days, computers came into the world, and we were very scared of computers. We were told that unemployment would increase because one computer would work in place of four or five people.

So as young students, we protested against this technology. But today, the reality is different: the computer is giving us multiple new avenues of employment in our daily life. Now, on revising curricula, two things I would like to mention here. One, the curriculum revision exercise at the University of Delhi took place in 2019, and this entire exercise was technology-based. We did it through a dashboard system, without human intervention, without formal meetings and without a budget of lakhs of rupees for meetings, food, TA, DA and everything. So not a penny was spent when 72 programs were revised under the LOCF curriculum framework. And then at CIE, when we took up this exercise, again I followed the same model: a techno-oriented, technology-supported revision took place.

In a record period of two months, we revised almost all the courses in education at the University of Delhi. Now, if we look at the role of a teacher: what type of teacher do we need for the future generation? In my family I have teachers who are dealing with class 3 students, class 7 students and senior secondary classes, as well as university teaching, and they are all saying that AI is posing a threat to the cognitive development of the learner. Yes, it is posing a threat, but at the same time we must realize that AI is not going to replace teachers. Teachers are always there, and here I say they complement each other; there is no challenge, no competition between the two. They complement, because after the use of AI-based technology, a video or some other content, the teacher is the person who can create sensitivity in the class related to the topic, as well as allow diverse opinions on the same topic. So AI can assist.

AI cannot be a master; it is an assistant. If we use it for ethical reasoning, for creativity, collaboration and adaptability, I see teachers increasingly functioning as mentors and learning designers, not learning followers, and as ethical guides and facilitators of inquiry, in the classroom as well as in writing textbooks and developing curricula. AI-based output demands AI supervision. By AI supervision I mean that AI cannot be left free to design any curriculum; we need to supervise it. We all know the difference between governance and leadership. Governance, I would say, means compliance management: whatever comes to you, you implement it, whether the organization is a college, a university or any other organization.

And if you are an academic leader, then you bring change to that compliance. Compliance will take place, because governance is essential, but at the same time you bring change according to the needs of your institution, the needs of your students, the needs of your financial resources, and so on. Similarly, in education, we must not become AI followers; we should become AI leaders of our time. Yesterday, the Honourable Prime Minister said we have tremendous potential to become AI leaders for the world. Along those lines, as NCTE Chairman, I have brought in two new programs: NPST, the National Professional Standards for Teachers, and NMM, the National Mentoring Mission. Both are designed on a digital platform, for a digital world, and AI is helping us analyse people's queries, their questions and their anxieties, and helping them identify the right mentor for them.

And the mentor-mentee relationship is always a guru-shishya context, which is very meaningful and useful. I will close this remark by saying that we are now moving away from treating technology as a one-off workshop; rather, we should shift towards a multi-semester AI spine. AI is the spine of the entire education system nowadays, and our new program, ITEP, has multiple contexts of AI-based technology. We must transition from product-only evaluation to process-rich evidence of learning; that is more meaningful. In 2012, CBSE brought in continuous comprehensive evaluation; now AI is helping us go for process-rich evidence of learning. The risk landscape is there: bias and hallucinations are there, and uneven access to technology is also a challenge that should be taken into consideration.

My last closing remark is that AI plus education can take us towards Viksit Bharat 2047. AI is not a choice; it is a part of our life, and it is providing us multiple new methods of research and of industrial internship. But education, which provides culture, language and a humanistic approach, and AI need to work hand in hand for a better future, for Viksit Bharat 2047. Thank you.

Speaker 1

Patil sir, as Administration Secretary, School Education, you embedded technology, and through technology, not only Nipun but other platforms were transformed so much that the government's focus on learning outcomes improved a lot. Now you are in higher education, and higher education is a very diverse sector; at the same time, in contrast to school education, where the government has greater controlling power, higher education is much less directly controlled. So what is your vision now to transform those higher education institutions in the age of AI? AI is a challenge that keeps coming, not only for the students but for administrators as well. What planning are you doing, and how will you address those issues?

Patil Sir

Thank you, sir, thank you so much for giving me the opportunity. I would like to ask a few of you — I am seeing a lot of students here — can somebody tell me how much time the telephone took to reach five crore subscribers or users? Any guesses? 30 years, good guess. Anybody else, quickly? 50 years, okay, good. Some more? Yes, somebody sitting to the right of the table: 75 years. Yes, the telephone took 75 years to reach five crore people. Radio took 38 years to reach five crore people. And ChatGPT, any guesses? Gemini took around 60 days to reach five crore people, whereas ChatGPT took 40 days. So this is, I think, a quantum jump, or whatever you call it, a huge jump, and with it comes a big challenge for educationists, in both school and higher education.

I can just read out some figures for your benefit. In the world there are around 749 crore mobile users, of whom 120 crore are in India. 600 crore people use the internet worldwide; in India it is 100 crore. 580 crore people use Google worldwide, whereas in India it is 80 crore. And ChatGPT has 80 crore users worldwide — this is last month's data, not this month's. Around 7 crore people use ChatGPT in India, and 1 crore use Gemini, so by this time maybe 10 crore people are using ChatGPT and Gemini here. Now, the challenges that are coming up — I will come to those; I am not pessimistic at all — but look at the education ecosystem, as Suresh sir and the other speakers have also described.

It is very important to see this cohort: around 25 crore children are in school education and 4.6 crore are in higher education, so around 30 crore in all. Now, there are 15 lakh schools in India, and if you look at the infrastructure, only around 4 lakh schools have computers, ICT labs, tablets and other things. So it is a huge challenge to take the AI revolution to the last mile. We are aware of this — as I told you, I worked in school education and am now in higher education, so we have an integrated approach and we are working on it, but we need your help. Second, in school education there are around 1 crore teachers right now, and most of them are women, which is a really good change that is happening. But how many are AI-savvy or AI-literate? We are working on that.

NCTE Chairman Pankaj sir has already spoken on that. Now, coming to the digital divide: compare Delhi schools with schools in remote areas, tribal areas or rural areas. Madam here is from Bangalore; I went there last week, and there is huge development. The way the cities are catching up is humongous progress, but in rural areas and other places it is a big challenge. Central schools like KVS and NVS are doing really well in catching up with AI and using AI technologies; even CBSE is coming out with an AI curriculum. And as I have seen in the report, states like Andhra, Assam, Tamil Nadu and a few others are using AI curricula and AI tools in their education systems.

Other states have yet to catch up, so there is a bit of a divide, and it will take time for India to close it. But yes, all of us now agree that AI is not going anywhere; AI has to be used, and AI is useful. At the same time, AI alone is not enough, and we should treat AI as a machine, not as a human being, which is very, very important. If you start treating AI as a human being, it will be a problem; it will put huge mental stress on students and other users. We are aware of this, and that is why school education has taken the very wise decision to introduce an AI curriculum from the third grade. It is not to teach AI as such; it is to teach what AI is, what its uses are, and whether it is good or bad, so that children know about it. That is very, very important. The coming generation, the young generation, must learn about AI, because it is very, very useful.

Yesterday, as Pankaj sir said, the Prime Minister said that India has to become a hub of AI. Yesterday evening, in fact the whole day, we had meetings with Spanish universities, and today again we are meeting Spanish universities; a lot of such meetings are going on, and MoUs are happening. You may know that IIT Madras has developed a tool with which Dr. Kamakoti spoke in Tamil and it was translated into 11 languages of India — as Suresh sir was also saying, when you speak in Bhojpuri, it can be translated into others. So there is huge potential. Shiksha Lokam has shown me that in Bihar, the villagers, the women, talk about dropouts — why did I drop out, why is my daughter dropping out, what are the issues — in their local language, and AI summarizes it in English and other languages.

So they just talk; there is no typing, nothing else, and it gets summarized and classified, and as administrators we can take decisions. So AI is a boon if we use it properly, and AI will become a bane if it is misused or used unethically. Sir, you asked me about the challenges with AI. Yes, there are many challenges. What we are doing right now is updating the curriculum and working on educational governance. Many IITs have brought AI schools onto their campuses; they have MoUs with Google, Microsoft and various other companies, and the Wadhwani Foundation has also started an AI school in one of the IITs.

A lot of investment is going on. We have already started an AI Centre of Excellence in education, which IIT Madras is hosting, and a lot of work is going on. Sarvam is also helping us in those initiatives. But yes, there is disparity, and we need to sort out those issues. And AI is not only for STEM; we have understood that and are implementing it that way — everybody has to understand what AI is and how we can take it forward. As Suresh sir said about the economy — I think we both worked together previously in the Ministry of Education and the Ministry of Finance, and I got his guidance there — we are now talking about reimagining education, and whatever you imagine, whatever your vision is, that is what you are going to achieve. So we should not limit our vision. With a population of 140 crore and growing, a really big vision is required, but at the same time the necessary skills are required. One report suggests that with one additional year of schooling

there is a 24% increase in labour output. And in India we have certain issues here: if you compare the output the labour force gives in the US, in South Africa and in India, we really need to think about it. So years of schooling are very, very important. We also have the challenge of dropouts. Luckily, we are using Vidya Samiksha Kendra and other tools to trace dropouts and bring them back into the mainstream. Around 5 crore children have dropped out, and various state governments are working to bring that number down. Some individual countries of the European Union have a total population of about 5 crore.

So the challenges in India are many, much more. But as Madam was also asking me, what will be the impact of AI — I think it will be huge for us; in the next two years we can see the way India is going to change. Let me give one last example. When I was working in banking, people in the department said there was something called payment through mobiles, and when I discussed it with the CMDs of the banks (those were CMDs then; now they are MDs), they told me no, it is not going to work here. Yet South Africa started it there — Airtel itself started it there — and in 2016, when demonetisation came, we could see the huge impact.

And now with UPI we can see the way it is happening: around 50% of the world's digital transactions are happening in India. There is huge change, and I think in another two years we will see huge change in AI adoption and use. But one caution: AI has to be used as a tool, it has to be used ethically, and it has to be used for the work, for humanity. That is what I can say. Thank you so much. And we are getting prepared for that, sir: the IITs are far better, the IIMs are far better, whereas the central universities are catching up with AI, and we are trying to help them, sir. Thank you.

Thank you, sir.

Speaker 3

Thank you, Dr. Ramanan, and thank you for having me here. It has been very interesting, and it has been a pleasure to listen to all the other panelists; I got to learn quite a lot. And congratulations on the report. You raised a very interesting and very pertinent point, that industry also needs to work with different players, not just with the government but also with academia, and create change. I have a very interesting job: I work with the ecosystem and industry, and in that I get to work with different startups, get to know different ISVs and really see the innovation that is happening. Some of these innovations are interesting to see because they are cutting edge.

They come from India, for India, and then they go to the world. Like you just mentioned, Patil sir was just talking about digital payments, and I think you were mentioning M-Pesa in that context. We have taken UPI and other things to the world. It is a very proud moment, but it starts with an idea, and it starts with something that needs to be nurtured by everyone. And that is what the AI Summit is: a great moment for all of us. We have put ourselves on the world map; we have shown the world that we can do great things, and this is where the technology innovation is happening.

And from an Intel perspective, we work very closely not just with higher ed but also with K-12, and of late we have been working with startups to come up with solutions which impact students at large. I was talking to somebody the other day — and I think one of the speakers on stage was talking about Bhojpuri getting translated — and I asked: why are learning outcomes in Indian Tier 2, Tier 3 and rural areas not as great? The response was: the problem is not that the child doesn't understand maths or physics; the problem is that the child doesn't understand English, because our medium of teaching is not in the child's language.

And what we are doing today in terms of making sure that content reaches everybody in the language they understand, I think that is going to be a game changer. That is coming from AI, and AI is coming from a combination of people — folks like all of us in this room coming together and saying, okay, let's make something that will have an impact on the population at large. And I was talking to someone just before this who said: in India, it is not that people don't want to buy technology, and they are not afraid of technology. But the problem is — and how many of us as parents always say this — don't give the child a laptop, the child will get spoiled.

But why are we not seeing the value? Why are we not seeing that a laptop is a creation device, something more than a consumption device? Where is the value creation in that? Can we have AI courses starting from class 3 onwards, going up to higher ed? We have in fact worked on this: a colleague of mine has worked very closely with CBSE to create a curriculum which has gone into schools, and Intel has worked with them to help put that together. We have a program called Unnati for higher ed, and under that umbrella we are now bringing in AI-for-future-workforce courses, with courses like AI in Manufacturing, which we have put out at Gujarat Technological University. Recently we had somebody come in from there.

This girl was the first in her family, the first generation, to go to college. She went through this program, and in this program we also had an internship. So she had interned with a startup, with an industry in Surat that was doing textile manufacturing, and she created a project on defect detection using AI. So a kid from a rural area, going to college for the first time as the first generation to go to college, being so confident about what she had created, because it was being used in an industry and she could see the impact. I mean, those are the stories and those are the things that make you feel like you want to work in this. The rewards are huge.

I think that is what is needed, and Intel has obviously done a great job of bringing these things together through all the programs that we have, whether it's Unnati, whether it's AI for Future Workforce, whether it's the stuff that we do in the K-12 space. We've got an ISV startup that we work with which is helping teachers become AI-enabled. And it's all running locally; the content doesn't even need to go into the cloud. We have solutions running on the AI PC, which is what Intel is now bringing to the market. And I would invite you all to please come visit our booth at, of course, the AI Summit, because that's what has brought us all here.

And we'll show you some of the really cool use cases and demos where voice-to-voice gets translated on the device. So you don't even need to connect to the Internet. You don't even need to connect to the cloud. Everything is happening on the device. The content is there. And I think I heard hallucination mentioned as one problem, which is also what you identified in the report. What if the content sits locally on the device itself? So you're only looking at class 9 science. So when a child asks a question, maybe they just want to know how to get into NEET and JEE, the answer is coming from there, and it's coming in a language that the child understands.

So what if that happens? And that exists today; we've worked on it. So think of it as a 24/7 tutor. And one more thing, I don't know how many of you will relate to this, but at least I used to. When the teacher was teaching, everything was clear. But when you went home and read the same concept, what happened? How did it disappear? So when this happens, and if you're an introverted child, who do you go and ask? And how do you create that safe space for asking? You can have tuition teachers, you can have personalizers, but what if there is a bot that is not judging this child and is saying, hey, come here, I'll teach you in the language you understand.

Ask me. And you know as a parent that this is all happening on the PC. It is all safeguarded, or at least there is a lower chance of hallucination. That is what we are working towards. And I'll finish, because these are all esteemed panelists, I think I should finish with a quote. Arthur C. Clarke said, and I'm paraphrasing, that technology done right is like magic. And if we bring that magic of technology plus AI to all kids in India, I think we've done our job. That's what we are doing.
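[Editor's note] The hallucination-limiting idea described above, a tutor that answers only from syllabus content stored locally on the device, can be illustrated with a minimal sketch. This is not Intel's actual product: the tiny corpus, the keyword-overlap similarity, the threshold value and the refusal message are all invented for illustration.

```python
# Toy sketch of an on-device tutor that answers ONLY from a local corpus.
# If no stored passage is similar enough to the question, it refuses
# rather than inventing an answer. All content and thresholds are made up.
import math
from collections import Counter

LOCAL_CORPUS = {
    "photosynthesis": "Photosynthesis is the process by which green plants "
                      "use sunlight to make food from carbon dioxide and water.",
    "states of matter": "Matter exists in three main states: solid, liquid "
                        "and gas; particles move fastest in a gas.",
}

def _vector(text):
    # Bag-of-words term counts.
    return Counter(text.lower().split())

def _cosine(a, b):
    # Cosine similarity between two word-count vectors.
    num = sum(a[w] * b[w] for w in set(a) & set(b))
    den = (math.sqrt(sum(v * v for v in a.values()))
           * math.sqrt(sum(v * v for v in b.values())))
    return num / den if den else 0.0

def answer(question, threshold=0.2):
    """Return the best-matching local passage, or a refusal if nothing matches."""
    q = _vector(question)
    best_topic, best_score = None, 0.0
    for topic, passage in LOCAL_CORPUS.items():
        score = _cosine(q, _vector(topic + " " + passage))
        if score > best_score:
            best_topic, best_score = topic, score
    if best_score < threshold:
        return "I don't have that in the class 9 material. Ask your teacher."
    return LOCAL_CORPUS[best_topic]
```

A real on-device tutor would use a small language model with embedding-based retrieval, but the design choice is the same: the refusal path is what keeps answers grounded in the local syllabus.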

Speaker 1

Thank you, Aditi. I think we have a few more minutes, so we can have just, you know, a quick round of interventions, on the issue of reimagining institutions. What are the two things that we want to see in the future of higher education? And, sir, if I may ask you first, what do you want to see in the future of higher education? What do you want it to be?

Professor KK Aggarwal

Finally a girl raised her hand. I said, okay, at least somebody; yes, come on, we'll work it out together. She said, sir, everything is fine, but first tell us what a tile is. See, in that African area tiles were never used. They had round rooms with round floors, and square or rectangular tiles were simply not in their dictionary. And on that basis we declared that whole class failed in mathematics. That is what we are doing today with the help of simple tests. So we have to find out what the ground-level situation is and then go ahead on that basis to test the ingenuity of the student. Lastly, we are not there to teach the subjects.

We have to teach the students. And therefore, for each student, what can we do? Again I say AI is an opportunity, a great opportunity. We are talking about reimagining higher education in this summit, and my request, with all the persuasion I can muster, is: let the youth assert themselves, saying we need these subjects to be taught for our degree. And technology enables us to do that. We will have to do that. That's my call on this.

Speaker 1

Thank you, sir. Suresh sir, in the same manner, when you reimagine institutions, and you are part of a global body, what are the two or three features you want to see in a futuristic educational institution?

Suresh Sir

Thank you very much. Quickly, three points I would like to make. First, if you look back 10 years, when social media came to India, there was a debate about whether we want to be a download nation or an upload nation. So there was a lot of emphasis on creating content and uploading it to the internet and the media, so that creativity flourished. Now the conversation has moved to whether we want to be, again, a consumption nation or a producing nation, a creative nation. This time the opportunity is phenomenal, so we need a system where people create, not consume. That's the fundamental shift we need. The second thing I will say concerns university degrees, masters, PhDs, undergrad, the qualifications for jobs. In some countries, only a high school diploma is needed to get a job in the government or the private sector. Do we want students only to study, get marks, get distinctions, or do we want students to be a problem-solving young society? So I think we have to shift from degree-awarding institutions to problem-solving institutions.

India has millions of problems in each and every corner. You pick up one problem, solve it, you get your degree and go. You don't need to pass all the examinations. So that's the fundamental shift India needs, if we want to go back to what I said in the beginning, that we want to be a nation where skill and capability drive the economy, not the other way around. So that's the second point. The third one: the school education system, the higher education system and the primary education system work in silos. We have to interconnect the entire system, and technology allows us to do it. In the U.S., the higher education and high school systems are very well connected as part of one ecosystem.

The moment we do that, we will have a thriving higher education system, a thriving education system, pushing India onto a very high growth trajectory, and we will also realize the dream I talked about, of being a number one nation, not by 2050 or 2070, but very soon. Thank you.

Speaker 1

Thank you, sir. Pankaj sir, as chairperson of NCTE, when you reimagine a teacher education institution, or think about how a teacher education institution will look in the future, what are the two or three features that come to mind that you think should define the future of a teacher education center?

Pankaj Sir

Yes, as a regulator for teacher education: now Viksit Bharat Adhishthan is coming, where it has been proposed to go with an AI-oriented regulator. That regulator is not supposed to have a lot of humans working for it; 70 to 80 percent of assessment will be done through AI. So it is a very good thing. AI is going to play an important role, not only as a regulator, but also as a developer of norms and standards for the nation, for academic programs and for teachers. I think the responsibility to promote research ethics among young people is very, very critical at the moment. Somebody is writing a letter to his wife and asking AI to write the letter.

So this is ridiculous. It cannot put emotion into that, a personalized flavor into that. So, research ethics matters when you are doing any research at any class level. Then we need to think of assessment devices; evaluation and assessment are lagging behind. We are developing content through AI, but we are not doing assessment through AI. This year, CBSE is trying to assess class 12 answer scripts through technology, but those will only be scanned documents, which teachers will check from their own remote places. Still, that is the beginning of bringing technology into assessment. And my last point would be Indian knowledge and Indian languages. We must start working very, very hard on this, because if we actually want to pass on Indian tradition to the next generation, AI can become an important tool for that.

If we take AI beyond Western knowledge, if we promote AI in Indian knowledge, the Indian context and Indian languages, then we will really serve the next generation. And as the Prime Minister said, we have two AIs: Aspirational India and Artificial Intelligence. So we must put both of them to optimum use. Thank you.

Speaker 1

Thank you, sir. Patil sir, from the ministry perspective, how do you visualize future universities, and what kind of change do you want to bring to the higher education institutions which we want to build for the future?

Patil Sir

Again, the same thing that Sir has said: it should be integrated, school and higher education. I would like to say that a few universities have agreed to reach out to 100 schools. In Pune, there is one university called COEP. They say that every day one school will come and visit, see their libraries, see their laboratories, meet their teachers; and the teachers will go to the schools and interact. Because many of them do not know what the present-day school is like. Between the school of my time and today's school, there is a huge change, really a huge change. So that has to be seen, and it should be integrated. One more point: the NEP says there is innate talent among students.

So students should understand that, work on their skills, and meaningfully contribute to the economy, which is very, very important. Once India's population of 140 crore starts contributing to the economy, meaning above the income tax level, I am saying a minimum of 5 or 6 lakhs, it is going to be a huge change. The third point is that brick-and-mortar schools and universities are going; we are already seeing this huge change. But at the same time, teachers cannot actually be removed; the teachers, mentors and facilitators have to be there. And we have even requested companies, including Intel at our last meeting, to be mentors. You should also tell kids, enough is enough.

You have been playing games or using these things for an hour, so stop it there; that is really required. So ethical use is very, very important. Yes, we need to create a platform where all of these people can come together. That is what the AI COE in education, happening with IIT Madras, is doing, where schools, higher education institutions and private players are all coming together. I recently saw one startup at IIT Delhi. Just as some hospitality companies don't own any hotel rooms at all, this startup doesn't have any classrooms, doesn't have any infrastructure at all. But they teach medical education, with permission from the regulator; basically paramedics are working through it.

There are youngsters here, lots of youngsters, friends. Their annual turnover reached 200 crore in just the last two years, and they say in another year it will reach 400 crore. So I think there is a huge opportunity for all of us. We should work on it. Thank you so much.

Speaker 1

Thank you, sir. Aditi, your comment on the future of institutions.

Speaker 3

I think everybody has done a great job of articulating that.

Speaker 1

Thank you everyone for joining us, and thank you to our eminent panel for shedding light on reimagining institutions. I think that when we start thinking about how future institutions should be, they will start to take shape. Thank you everyone. Thank you.

External Sources (123)
S1
Building the Workforce_ AI for Viksit Bharat 2047 — -Speaker 1- Role/Title: Not specified, Area of expertise: Not specified -Speaker 3- Role/Title: Not specified, Area of …
S2
S3
Advancing Scientific AI with Safety Ethics and Responsibility — – Speaker 1- Speaker 2- Speaker 3 – Speaker 1- Speaker 3- Moderator
S4
AI 2.0 The Future of Learning in India — -Pankaj Sir: Chairperson of National Council of Teacher Education (NCTE), former head and dean at University of Delhi, e…
S5
AI Impact Summit 2026: Global Ministerial Discussions on Inclusive AI Development — -Speaker 1- Role/title not specified (appears to be a moderator/participant) -Speaker 2- Role/title not specified (appe…
S6
Policy Network on Artificial Intelligence | IGF 2023 — Moderator 2, Affiliation 2 Speaker 1, Affiliation 1 Speaker 2, Affiliation 2
S7
S8
AI 2.0 The Future of Learning in India — – Patil Sir- Pankaj Sir- Professor KK Aggarwal – Patil Sir- Pankaj Sir- Speaker 1 – Patil Sir- Suresh Sir- Speaker 1 …
S9
https://dig.watch/event/india-ai-impact-summit-2026/ai-2-0-reimagining-indian-education-system — Thank you Pranavji for the presentation. Today as a panelist now we have Professor KK Agarwal sir, President South Asian…
S10
AI 2.0 The Future of Learning in India — -Professor KK Aggarwal: President of South Asian University, former Vice-Chancellor who developed Indraprastha Universit…
S11
AI 2.0 Reimagining Indian education system — -Professor K. K. Aggarwal- President of South Asian University, former developer of Indraprastha University, expertise i…
S12
Keynote-Martin Schroeter — -Speaker 1: Role/Title: Not specified, Area of expertise: Not specified (appears to be an event moderator or host introd…
S13
Responsible AI for Children Safe Playful and Empowering Learning — -Speaker 1: Role/title not specified – appears to be a student or child participant in educational videos/demonstrations…
S14
Building Trusted AI at Scale Cities Startups &amp; Digital Sovereignty – Keynote Vijay Shekar Sharma Paytm — -Speaker 1: Role/Title: Not mentioned, Area of expertise: Not mentioned (appears to be an event host or moderator introd…
S15
AI 2.0 The Future of Learning in India — – Pranav Kothari- Patil Sir – Pranav Kothari- Professor KK Aggarwal – Pranav Kothari- Speaker 2
S16
AI 2.0 The Future of Learning in India — – Speaker 2- Patil Sir- Suresh Sir
S17
AI 2.0 Reimagining Indian education system — Thank you Pranavji for the presentation. Today as a panelist now we have Professor KK Agarwal sir, President South Asian…
S18
AI Governance: Ensuring equity and accountability in the digital economy (UNCTAD) — Clear principles and regulations need to be set Overall, AI governance requires collaboration, inclusivity, transparenc…
S19
Closing Ceremony — Multiple speakers addressed the transformative challenges posed by artificial intelligence and the need for new approach…
S20
AI Governance Dialogue: Steering the future of AI — Doreen Bogdan Martin: Thank you. And we now have a chance together to reflect on AI governance with someone who has a un…
S21
Building Trustworthy AI Foundations and Practical Pathways — The current AI revolution represents an even more fundamental shift: the emergence of general software. Unlike tradition…
S22
Building Trusted AI at Scale Cities Startups &amp; Digital Sovereignty – Keynote Vivek Raghavan Sarvam AI — And these vision models are actually very good for document digitization. They’re very good at language layout understan…
S23
Powering AI Global Leaders Session AI Impact Summit India — Must work to close the capability gap through improved access, literacy, and agency initiatives
S24
Leaders TalkX: ICT Applications Unlocking the Full Potential of Digital – Part II — The inadequacy of established models and strategies, often adopted by the most affluent economies, was criticised for ne…
S25
WS #466 AI at a Crossroads Between Sovereignty and Sustainability — This concept of ‘digital solidarity’ became a recurring theme throughout the discussion. Pedro Ivo later referenced it d…
S26
High Level Session 3: AI &amp; the Future of Work — Chris Yiu: Good morning, everyone. Real pleasure to be here with you today to talk about such an important topic around …
S27
Building Trusted AI at Scale Cities Startups &amp; Digital Sovereignty – Keynote Kiran Mazumdar-Shaw — Transformation requires a triple helix of government, academia, and industry working together with specific roles for ea…
S28
High-Level Session 3: Exploring Transparency and Explainability in AI: An Ethical Imperative — Doreen Bogdan-Martin: Thank you, and good morning again, ladies and gentlemen. I guess, Latifa, picking up as you were a…
S29
Artificial Intelligence &amp; Emerging Tech — language translation tools and health diagnostic apps can function without continuous online access
S30
Gemini Robotics On-Device: Google’s AI model for offline robotic tasks — On Tuesday, 24 June, Google’s DeepMind division announced the release of a new large language model namedGemini Robotics…
S31
https://dig.watch/event/india-ai-impact-summit-2026/ai-2-0-the-future-of-learning-in-india — And we’ll show you some of the really cool use cases and demos where voice -to -voice gets translated on the device. So …
S32
AI Innovation in India — So that is one of the other programs that we have. Post that, there’s a litmus test that we do and that we help out with…
S33
Education meets AI — Artificial intelligence has the potential to revolutionize education by offering personalized learning experiences to ev…
S34
Bottom-up AI and the right to be humanly imperfect | IGF 2023 — Jovan Kurbalija:Thank you. The AI charge EPT won’t reply in this way, you know, therefore it is really smart. Thank you….
S35
AI &amp; Child Rights: Implementing UNICEF Policy Guidance | IGF 2023 WS #469 — A significant aspect of the study is the inclusion of a diverse group of children. The researchers aimed to have a large…
S36
WS #376 Elevating Childrens Voices in AI Design — National survey of around 800 children between ages 8-12, their parents and carers, and 1000 teachers across the UK Res…
S37
Sovereign AI for India – Building Indigenous Capabilities for National and Global Impact — As emphasized throughout the discussion, India possesses the fundamental ingredients for AI leadership. The challenge li…
S38
Empowering India &amp; the Global South Through AI Literacy — The discussion acknowledged several ongoing challenges. The scale required to reach India’s vast educational system pres…
S39
AI and Global Power Dynamics: A Comprehensive Analysis of Economic Transformation and Geopolitical Implications — 150 are replaced by AI. And in research analysts, there we are actually just able to do much more high-quality research….
S40
AI (and) education: Convergences between Chinese and European pedagogical practices — ### Transformation Rather Than Replacement Jovan Kurbalija: Definitely, just building on one comment, let’s say, thinki…
S41
AI-generated Jesuses spark concern over faith and bias — AI chatbots modelled on Jesusare becomingincreasingly popular over Christmas, offering companionship or faith guidance t…
S42
The Global Power Shift India’s Rise in AI &amp; Semiconductors — Joining us is Professor Vivek Kumar Singh, Senior… advisor on science and technology at NITI IO. Professor Singh plays…
S43
From India to the Global South_ Advancing Social Impact with AI — Because have you ever seen skill books anytime? For a plumber or a painter. Most of the books has images without a descr…
S44
Bridging the Digital Divide: Inclusive ICT Policies for Sustainable Development — The discussion maintained a formal, academic tone throughout, characteristic of a research presentation or conference se…
S45
Meeting REPORT — In conclusion, the discussions depicted an organisational mindset pivoting towards greater inclusivity, judicious resour…
S46
Cooperation in a Divided World / DAVOS 2025 — The tone was primarily informative and analytical, with speakers presenting data and insights in a professional manner. …
S47
Keynote-Jeet Adani — The tone was consistently aspirational, patriotic, and strategic throughout. Jeet Adani maintained a confident, visionar…
S48
Scaling Trusted AI_ How France and India Are Building Industrial &amp; Innovation Bridges — Perhaps most significantly, the summit revealed that AI for science represents not just an acceleration of existing rese…
S49
How to make AI governance fit for purpose? — Legal and regulatory | Development The AI revolution is fundamentally challenging the governance structures as we know …
S50
From Technical Safety to Societal Impact Rethinking AI Governanc — The session opened with Virginia Dignum’s foundational argument that fundamentally reframed the AI safety debate. Rather…
S51
AI 2.0 The Future of Learning in India — The path forward requires simultaneous attention to immediate practical challenges—infrastructure development, teacher t…
S52
How to ensure cultural and linguistic diversity in the digital and AI worlds? — Xianhong Hu:Thank you very much Mr. Ambassador. Good morning everyone. First of all please allow me, I’d like to be able…
S53
Artificial intelligence — Multilingualism
S54
Open Forum #64 Local AI Policy Pathways for Sustainable Digital Economies — Preserving multilingual societies is essential because different language structures enable different ways of thinking a…
S55
AI as a tech ally in saving endangered languages — Small states and indigenous nations can leverage these tools to increase participation in global negotiations. When lang…
S56
What policy levers can bridge the AI divide? — ## Infrastructure as Foundation Lacina Kone: Before talking about the bridging of AI, bridging the gap of the AI, there…
S57
Open Forum #33 Building an International AI Cooperation Ecosystem — Ricardo Pelayo: Hi, good afternoon. It’s an honor to share with you this reflection on building an ecosystem of innovati…
S58
Inclusive AI For A Better World, Through Cross-Cultural And Multi-Generational Dialogue — Demands on policy exist without the building blocks to support its implementation Lack of infrastructure, skills, compu…
S59
Empowering India &amp; the Global South Through AI Literacy — The discussion acknowledged several ongoing challenges. The scale required to reach India’s vast educational system pres…
S60
Why science metters in global AI governance — But now I don’t know what is the causal factor there. I don’t know if the causal factor is whether they are using AI mor…
S61
WSIS Action Line Facilitators Meeting: 20-Year Progress Report — Development | Human rights | Online education UNESCO is providing policy guidance on AI in education, focusing on frame…
S62
New Colours of Knowledge — – MEASURE 3.2.3. Establish and implement mechanisms for the overall assessment of the four segments of tasks of teaching…
S63
Strategy outline — – 5.1 Upgrade the role of the MoI, develop its texts and capacities, and render it a reliable source of information, par…
S64
Part 2.5: AI reinforcement learning vs human governance — Governance structures are designed to maintain order, protect rights, and promote welfare, often requiring consensus and…
S65
What is it about AI that we need to regulate? — The Role of International Institutions in Setting Norms for Advanced TechnologiesThe discussions across IGF 2025 session…
S66
Comprehensive Report: European Approaches to AI Regulation and Governance — Governance structure – centralized vs. distributed
S67
AI 2.0 Reimagining Indian education system — Here, there is an overwhelming evaluation by students that while AI tools might be helpful, they are not necessarily pro…
S68
The National Education Association approves AI policy to guide educators — The US National Education Association (NEA) Representative Assembly (RA) delegates haveapprovedthe NEA’s first policy st…
S69
AI reshapes university language classrooms — Universities are increasinglyintegratingAI into foreign language teaching as lecturers search for more flexible and pers…
S70
Hard power of AI — In conclusion, the analysis provides insights into the dynamic relationship between technology, politics, and AI. It hig…
S71
WS #288 An AI Policy Research Roadmap for Evidence-Based AI Policy — Virginia Dignam: Thank you very much, Isadora. No pressure, I see. You want me to say all kinds of things. I hope that i…
S72
Powering AI _ Global Leaders Session _ AI Impact Summit India Part 2 — The disagreement level is moderate but significant for policy implications. While there’s consensus on the core challeng…
S73
Driving Enterprise Impact Through Scalable AI Adoption — The panellists agreed that rather than viewing AI as a replacement for human teachers, the future likely involves an aug…
S74
AI (and) education: Convergences between Chinese and European pedagogical practices — 1. **Universities and teachers remain essential** but must transform from knowledge transmitters to coaches and facilita…
S75
Can AI replace the transmission of wisdom? — However, in all these cases, we must keep the role of AI as a supportive tool, not as a teacher. This is because technol…
S76
WS #376 Elevating Childrens Voices in AI Design — National survey of around 800 children between ages 8-12, their parents and carers, and 1000 teachers across the UK Res…
S77
AI for Social Good Using Technology to Create Real-World Impact — But all of that was enabled by this DPI called NICSHA, this database, which enables all of this. One other quick example…
S78
Responsible AI for Children Safe Playful and Empowering Learning — Absolutely. We need to generate a fair amount of evidence before we rush to scale with something like this. Although we …
S79
Three in ten US teens now use AI chatbots every day, survey finds — According to new data from the Pew Research Center, roughly 64% ofUSteens (aged 13–17) say they haveusedan AI chatbot; a…
S80
Global AI adoption rises quickly but benefits remain unequal — Microsoft’s AI Economy Institute hasreleased its 2025 AI Diffusion Report, detailing global AI adoption, innovation hubs…
S81
AI 2.0 Reimagining Indian education system — However, achieving global leadership requires addressing substantial infrastructure and equity challenges. The success o…
S82
Empowering India &amp; the Global South Through AI Literacy — The discussion acknowledged several ongoing challenges. The scale required to reach India’s vast educational system pres…
S83
Sovereign AI for India – Building Indigenous Capabilities for National and Global Impact — India possesses many essential ingredients for AI success: a robust software services industry, thriving startup ecosyst…
S84
AI 2.0 The Future of Learning in India — All speakers acknowledge the significant challenge of unequal access to technology and infrastructure across different r…
S85
Research shows AI complements, not replaces, human work — AI headlines often flip between hype and fear, but the truth is more nuanced. Much research is misrepresented, with task…
S86
AI and Global Power Dynamics: A Comprehensive Analysis of Economic Transformation and Geopolitical Implications — 150 are replaced by AI. And in research analysts, there we are actually just able to do much more high-quality research….
S87
Turbocharging Digital Transformation in Emerging Markets: Unleashing the Power of AI in Agritech (ITC) — Moreover, while AI and new technologies have significant potential in agriculture, it is crucial to understand that they…
S88
Workshop 1: AI &amp; non-discrimination in digital spaces: from prevention to redress — Robin Aïsha Pocornie: I also think that it is important to note that intersectional discrimination as it’s defined right…
S89
Powering AI Global Leaders Session AI Impact Summit India — Education as a lever to close the gap
S90
Skilling and Education in AI — Strategic actions for India by 2030
S91
Leaders’ Plenary | Global Vision for AI Impact and Governance- Afternoon Session — And Prime Minister, we believe that nations should always build the strongest intelligence infrastructure and cross -bor…
S92
Cooperation in a Divided World / DAVOS 2025 — The tone was primarily informative and analytical, with speakers presenting data and insights in a professional manner. …
S93
Opening — The overall tone was formal yet optimistic. Speakers acknowledged the serious challenges posed by rapid technological ch…
S94
Fourth meeting of the UN CSTD multi-stakeholder working group on data governance at all levels — The programme begins on Tuesday morning with opening remarks and the formal adoption of the agenda. The UNCTAD secretari…
S95
AI and Data Driving India’s Energy Transformation for Climate Solutions — The tone was collaborative and solution-oriented throughout, with speakers building on each other’s insights rather than…
S96
World in Numbers: Risks / DAVOS 2025 — The tone was primarily analytical and academic, with the speakers providing objective overviews of the report’s findings…
S97
WS #219 Generative AI Llms in Content Moderation Rights Risks — The discussion maintained a consistently serious and concerned tone throughout, with speakers demonstrating deep experti…
S98
Open Forum #29 Advancing Digital Inclusion Through Segmented Monitoring — The discussion maintained a collaborative and constructive tone throughout, with panelists building on each other’s insi…
S99
Engineering Accountable AI Agents in a Global Arms Race: A Panel Discussion Report — The discussion maintained a thoughtful but somewhat cautious tone throughout, with speakers acknowledging both opportuni…
S100
Scaling AI for Billions_ Building Digital Public Infrastructure — The discussion maintained a balanced but cautionary tone throughout. While panelists acknowledged the tremendous opportu…
S101
AI and Digital Developments Forecast for 2026 — The tone begins as analytical and educational but becomes increasingly cautionary and urgent throughout the conversation…
S102
Powering the Technology Revolution / Davos 2025 — The tone was generally optimistic and forward-looking, with panelists highlighting opportunities for innovation and prog…
S103
Governments, Rewired / Davos 2025 — The overall tone was optimistic and forward-looking, with speakers highlighting the transformative potential of technolo…
S104
India’s Roadmap to an AGI-Enabled Future — The discussion maintained an optimistic and ambitious tone throughout, with speakers expressing confidence in India’s ab…
S105
Smart Regulation Rightsizing Governance for the AI Revolution — The discussion began with a notably realistic and somewhat pessimistic assessment of global cooperation challenges, but …
S106
Opening remarks — The assembled minds, representing a synergy of technology, governance, and civil society, exemplify the event’s global i…
S107
Creating Eco-friendly Policy System for Emerging Technology — Speaker 1:…amongst you today to share some thoughts and ideas on the importance of eco-friendly emerging technologies …
S108
Mary Meeker examines AI and higher education — Mary Meeker, renowned for her annual ‘Internet Trends’ reports, has released her first study in over four years, focusing …
S109
OpenAI partners with Arizona State University to bring advanced AI model to higher education — OpenAI has announced a partnership with Arizona State University (ASU), making it their first collaboration with a highe…
S110
Town Hall: How to Trust Technology — Additionally, Thompson raises concerns about potential job losses, particularly in the digital space, due to emerging te…
S111
The future of work: preparing for automation and the gig economy — The report suggests several measures to ‘help people adjust to the new technologies’: education and (re)training, suppor…
S112
Keynote-Sam Altman — Regarding employment disruption, he acknowledged that existing jobs will face displacement, noting “it’ll be very hard t…
S113
Is AI a catalyst for development? — The Economist argues that AI has the potential to revolutionise developing countries by transforming their economies and…
S114
AI tools influence modern personal finance practices — Personal finance assistants powered by AI tools are increasingly helping users manage budgets, analyse spending, and organ…
S115
How to keep your data safe while using generative AI tools — Generative AI tools have become a regular part of everyday life, both professionally and personally. Despite their usefu…
S116
Rethinking learning: Hope, solutions, and wisdom with AI in the classroom — Dismissing AI’s potential role in education is both futile and misguided. The technology exists, students are using it, …
S117
AI cheating scandal at University sparks concern — Hannah, a university student, admits to using AI to complete an essay when overwhelmed by deadlines and personal illness. …
S118
AI shows promise in scientific research tasks — FrontierScience, a new benchmark from OpenAI, evaluates AI capabilities for expert-level scientific reasoning across physi…
S119
When language models fabricate truth: AI hallucinations and the limits of trust — AI has come far from rule-based systems and chatbots with preset answers. Large language models (LLMs), powered by vast a…
S120
Is AI making us mentally lazy — Growing use of AI chatbots for tasks such as writing, analysing data, and problem-solving has sparked concerns that relyin…
S121
AI in schools: The reality is messier than the solutions — Cognitive scientists call this ‘desirable difficulty’. Learning that comes too easily often doesn’t stick. The brain buil…
S122
Protecting Democracy against Bots and Plots — Additionally, Agrawal emphasizes the potential of technology in identifying and addressing disinformation, envisioning a…
S123
Pioneering Responsible Global Governance for Quantum Technologies — In sum, the panel shed light on the challenge of regulating private sector influence in governance and mitigating the in…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Speaker 1
1 argument · 68 words per minute · 1284 words · 1122 seconds
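The per-speaker statistics throughout this section pair a word count with a speaking duration. As a quick sanity check, the quoted speaking rates are roughly words ÷ seconds × 60; this is a minimal sketch of that arithmetic, and the report's exact rounding convention is an assumption, since some quoted figures appear floored rather than rounded:

```python
def words_per_minute(words: int, seconds: int) -> float:
    """Speaking rate in words per minute, from a word count and a duration in seconds."""
    return words / seconds * 60

# Speaker 1: 1284 words over 1122 seconds -> roughly the "68 words per minute" quoted above.
print(f"{words_per_minute(1284, 1122):.1f} wpm")  # → 68.7 wpm
```

For example, Speaker 2's 1035 words over 382 seconds works out to about 162.6 wpm, matching the quoted 162.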
Argument 1
Emphasised that AI transformation demands rethinking institutional structures, curricula and governance
EXPLANATION
The opening remarks highlighted that the rapid emergence of AI creates fear about skill relevance and calls for a fundamental redesign of institutions, curricula and governance mechanisms to prepare for future jobs. The speaker positioned AI as a catalyst that forces a strategic rethink of how education systems operate.
EVIDENCE
The speaker noted the growing fear that existing skills may become irrelevant and stressed the need to understand the transformation, future skills and jobs, and to launch reports that address these issues [10-13].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The call for new governance structures aligns with AI governance principles that stress equity, accountability and multi-stakeholder collaboration [S18]; the need for a 360° paradigm shift in education is highlighted in the AI 2.0 sessions [S11][S4]; and the recommendation for triple-helix collaboration mirrors the trusted AI framework discussion [S27].
MAJOR DISCUSSION POINT
Framing the AI transformation discussion
AGREED WITH
Pankaj Sir
DISAGREED WITH
Pankaj Sir, Professor KK Aggarwal
Speaker 2
3 arguments · 162 words per minute · 1035 words · 382 seconds
Argument 1
AI represents a 360° paradigm shift; institutions that do not adapt will be fossilised
EXPLANATION
AI is described as a comprehensive, 360‑degree shift that will reshape societies, and any institution that fails to engage with this new reality risks becoming obsolete. The speaker warns that staying static will lead to fossilisation.
EVIDENCE
The speaker called AI a “paradigm shift… a 360 degree shift” and said that organizations not adapting will be fossilized [97-100].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The 360° paradigm shift of AI is described in the AI 2.0 Reimagining Indian education system and Future of Learning sessions [S11][S4]; the warning that organisations must adapt to avoid obsolescence is echoed in the trusted AI triple-helix model [S27].
MAJOR DISCUSSION POINT
Reimagining higher education and institutions for the AI era
AGREED WITH
Speaker 1, Professor KK Aggarwal, Pankaj Sir, Patil Sir
DISAGREED WITH
Patil Sir, Pranav Kothari
Argument 2
India must become a global AI leader; AI can dismantle language barriers and expand access
EXPLANATION
The speaker argues that AI can break linguistic barriers, allowing communication across languages, and that India must seize this opportunity to become a world leader in AI, leveraging its large population and youthful demographic.
EVIDENCE
He described AI dismantling language barriers, giving examples of translation from Bhojpuri to multiple languages, and emphasized India’s potential to lead globally in AI [138-144].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The need for India to close the capability gap and assume a global AI leadership role is emphasized in the AI Impact Summit session [S23]; language-translation tools that work offline are documented in the emerging-tech overview and device-level demonstrations [S29][S31]; on-device AI models enabling offline operation are described in Gemini Robotics On-Device [S30].
MAJOR DISCUSSION POINT
Reimagining higher education and institutions for the AI era
AGREED WITH
Speaker 3, Patil Sir
Argument 3
Shift from a consumption‑driven model to a creation‑driven, problem‑solving nation
EXPLANATION
The speaker calls for a transition from a society that mainly consumes digital content to one that creates, solves problems, and drives innovation, especially through AI‑enabled education and skill development.
EVIDENCE
He outlined the need for a system where people create rather than consume, and advocated moving from degree-awarding to problem-solving institutions [387-401].
MAJOR DISCUSSION POINT
Future vision for education outcomes and societal impact
AGREED WITH
Suresh Sir, Patil Sir, Pankaj Sir
Speaker 3
3 arguments · 168 words per minute · 1256 words · 447 seconds
Argument 1
Collaboration among industry, academia and startups creates localized AI solutions (e.g., language translation)
EXPLANATION
The speaker highlights that partnerships between Intel, startups, and academic institutions are producing AI tools tailored to local languages and contexts, such as real‑time translation, which can improve learning outcomes.
EVIDENCE
He described working with startups to develop AI that translates Bhojpuri and other regional languages, noting the impact on rural learners and the importance of ecosystem collaboration [292-298][304-309].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The triple-helix model of government, academia and industry is highlighted as essential for AI transformation [S27]; collaborations with research institutions to build adaptive AI learning systems are described in Education meets AI [S33]; support for startup incubation and revenue models is discussed in the AI Innovation in India briefing [S32].
MAJOR DISCUSSION POINT
Industry and private‑sector contributions to AI‑enabled education
AGREED WITH
Speaker 2, Patil Sir
Argument 2
AI devices can operate offline, delivering real‑time translation and 24/7 tutoring without cloud dependence
EXPLANATION
The speaker points out that Intel’s AI‑enabled hardware can run locally, providing translation and tutoring services without needing internet connectivity, thereby expanding reach to underserved areas.
EVIDENCE
He mentioned AI PCs that run locally, offering voice-to-voice translation and tutoring without cloud or internet connections [332-337].
MAJOR DISCUSSION POINT
Industry and private‑sector contributions to AI‑enabled education
Argument 3
Intel’s programmes (Unnati, Future for Workforce) provide AI curricula, internships and AI‑enabled teaching tools
EXPLANATION
Intel has launched educational programmes that embed AI into curricula from K‑12 to higher education, offering internships and industry‑relevant projects that demonstrate AI’s practical applications.
EVIDENCE
He cited the Unnati programme, the Future for Workforce initiative, and a case where a rural student created an AI-based defect-detection project during an internship [321-326].
MAJOR DISCUSSION POINT
Industry and private‑sector contributions to AI‑enabled education
Professor KK Aggarwal
3 arguments · 150 words per minute · 573 words · 227 seconds
Argument 1
AI should augment creativity, not shortcut thinking
EXPLANATION
The professor stresses that AI must be used to enhance creative processes rather than replace them, warning that over‑reliance could diminish critical thinking skills among learners.
EVIDENCE
He said AI should “supplements our creativity it does not give us a shortcut to creativity and thereby reduce our thinking powers” [82-84].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The distinction between AI as a tool versus a replacement is explored in the Bottom-up AI discussion on human imperfection [S34]; ethical considerations about preserving human creativity are raised in the transparency and explainability session [S28].
MAJOR DISCUSSION POINT
AI as supplement versus replacement in teaching and learning
AGREED WITH
Patil Sir
DISAGREED WITH
Pankaj Sir, Speaker 1
Argument 2
Vision beyond incremental reforms: focus on skill‑based, problem‑solving curricula powered by AI
EXPLANATION
The professor argues that merely tweaking existing curricula is insufficient; instead, education should be reimagined to prioritize skill development and problem‑solving, with AI as an enabling tool.
EVIDENCE
He noted that “this is not the time for doing the reforms in the higher education system. It’s like reimagining” and emphasized a skill-based, problem-solving focus [78-84].
MAJOR DISCUSSION POINT
Reimagining higher education and institutions for the AI era
AGREED WITH
Speaker 1, Speaker 2, Pankaj Sir, Patil Sir
Argument 3
Empower students to shape curricula and demand AI‑enhanced subjects for their degrees
EXPLANATION
The professor calls for student agency in curriculum design, urging that learners should be able to request AI‑related subjects and influence degree structures to stay relevant in the AI era.
EVIDENCE
He urged “let the youth assert themselves that we need these subjects to be taught for our degree” and emphasized teaching students rather than subjects [381-384].
MAJOR DISCUSSION POINT
Future vision for education outcomes and societal impact
Pankaj Sir
4 arguments · 132 words per minute · 1189 words · 536 seconds
Argument 1
Teachers remain essential as mentors, ethical guides and facilitators of inquiry
EXPLANATION
Pankaj stresses that teachers cannot be replaced by AI; instead, they will evolve into mentors and designers of learning experiences, providing ethical guidance and fostering inquiry.
EVIDENCE
He described teachers as “mentors and learning designers, not learning followers” and emphasized AI as an assistant, not a master [176-180].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The ethical imperative to keep humans central in AI-augmented learning is highlighted in the transparency and explainability session [S28]; the view of AI as a tool supporting, not replacing, educators is reinforced in the Bottom-up AI discussion [S34].
MAJOR DISCUSSION POINT
AI as supplement versus replacement in teaching and learning
AGREED WITH
Professor KK Aggarwal
Argument 2
Establish AI‑driven regulator for teacher education; 70‑80 % of assessment to be AI‑based
EXPLANATION
Pankaj proposes creating a regulator that leverages AI to conduct the majority of assessments, reducing human workload and increasing efficiency in teacher education evaluation.
EVIDENCE
He outlined a new AI-oriented regulator where “70 to 80 percent assessment will be done through AI” [404-408].
MAJOR DISCUSSION POINT
Policy, governance and regulatory frameworks for AI integration
AGREED WITH
Speaker 1
DISAGREED WITH
Speaker 1, Professor KK Aggarwal
Argument 3
National programs (NPST, NMM) use AI to match mentors with mentees and support teacher development
EXPLANATION
The speaker highlights two national initiatives that employ AI to analyse queries, identify suitable mentors, and facilitate teacher‑student matching, thereby enhancing professional development.
EVIDENCE
He explained that NPST and NMM are “designed on a digital platform… AI is helping us analysing people’s queries… identifying the right mentor” [192-195].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Efforts to close the AI capability gap through improved access, literacy and agency are emphasized in the AI Impact Summit session [S23]; AI-based mentor-matching platforms are consistent with the broader push for AI-enabled capacity building.
MAJOR DISCUSSION POINT
Policy, governance and regulatory frameworks for AI integration
AGREED WITH
Patil Sir
Argument 4
Promote Indian knowledge systems and languages within AI to preserve cultural heritage
EXPLANATION
Pankaj argues that AI should be trained on Indian knowledge and languages rather than Western datasets, ensuring cultural preservation and relevance for Indian learners.
EVIDENCE
He called for “working very, very hard… if we take AI out of Western knowledge, if we promote AI in Indian knowledge, Indian languages” [419-424].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The concept of digital solidarity and the importance of local knowledge in AI development are discussed in the Crossroads between Sovereignty and Sustainability briefing [S25]; capacity-building initiatives for indigenous AI datasets are highlighted in the AI Impact Summit [S23].
MAJOR DISCUSSION POINT
Future vision for education outcomes and societal impact
Patil Sir
5 arguments · 151 words per minute · 2136 words · 847 seconds
Argument 1
Integration of school and higher‑education ecosystems; AI becomes the “spine” of the education system
EXPLANATION
Patil emphasizes that AI should link school and higher‑education sectors, acting as the central backbone that supports learning processes across the entire education continuum.
EVIDENCE
He stated “AI is spine of entire education system nowadays” and described the need for integrated ecosystems [197-199].
MAJOR DISCUSSION POINT
Reimagining higher education and institutions for the AI era
AGREED WITH
Speaker 1, Professor KK Aggarwal, Speaker 2, Pankaj Sir
Argument 2
Urgent need to build AI literacy among teachers and bridge the digital‑infrastructure divide
EXPLANATION
Patil points out the massive gap in AI‑savvy teachers and the lack of infrastructure in many schools, calling for urgent capacity‑building and investment to ensure equitable AI adoption.
EVIDENCE
He noted that only 4-5 lakh schools have ICT labs, that 1 crore teachers are largely not AI-literate, and highlighted rural-urban disparities [221-227].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Closing the capability gap through AI literacy and equitable infrastructure is a key recommendation of the AI Impact Summit session [S23]; equity and accountability in AI deployment are stressed in the AI Governance principles [S18].
MAJOR DISCUSSION POINT
Policy, governance and regulatory frameworks for AI integration
AGREED WITH
Pankaj Sir
Argument 3
Ethical stance: treat AI as a tool, not a human, to avoid misuse and mental‑health risks
EXPLANATION
Patil warns against anthropomorphising AI, urging that it be regarded as a machine to prevent over‑reliance, mental stress, and ethical pitfalls.
EVIDENCE
He said “AI is a machine… if we start taking it as a human being then it will be a problem… AI is a boon if used properly, a bane if misused” [234-239].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The Bottom-up AI discussion emphasizes AI as a tool rather than a human replacement [S34]; ethical considerations around transparency and explainability are explored in the high-level session on AI ethics [S28].
MAJOR DISCUSSION POINT
Policy, governance and regulatory frameworks for AI integration
AGREED WITH
Professor KK Aggarwal
Argument 4
Significant investment in AI Centres of Excellence, MOUs with tech firms, and AI schools on IIT campuses
EXPLANATION
Patil outlines ongoing large‑scale investments, partnerships with global tech companies, and the establishment of AI Centres of Excellence to accelerate AI integration in education.
EVIDENCE
He listed “Lot of investment… AI COE in education… IIT Madras hosting that… MOUs with Google, Microsoft… AI schools on IIT campuses” [239-244].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Programmes supporting AI startups, revenue models and industry partnerships are outlined in the AI Innovation in India briefing [S32]; the triple-helix collaboration model underscores the role of MOUs with global tech firms [S27].
MAJOR DISCUSSION POINT
Policy, governance and regulatory frameworks for AI integration
DISAGREED WITH
Pranav Kothari, Speaker 2
Argument 5
Aim for “Viksit Bharat 2047”: AI as a catalyst for exponential economic growth and higher‑skill employment
EXPLANATION
Patil envisions AI driving India toward a prosperous future by 2047, delivering massive economic gains, skill development, and positioning the country as a global AI leader.
EVIDENCE
He declared “AI plus education can take us towards Viksit Bharat 2047… AI is not a choice… providing multiple new methods of research, new methods of industrial internship” [207-210].
MAJOR DISCUSSION POINT
Future vision for education outcomes and societal impact
AGREED WITH
Speaker 2, Suresh Sir, Pankaj Sir
Pranav Kothari
6 arguments · 160 words per minute · 1084 words · 404 seconds
Argument 1
High usage among private‑school students (≈50%)
EXPLANATION
The survey found that roughly half of the private‑school students sampled in Delhi reported using AI‑based tools regularly, indicating a substantial penetration of AI in school settings.
EVIDENCE
He reported “almost 50 % of them use AI based tools… multiple times a week” based on the Delhi private-school sample [25-26].
MAJOR DISCUSSION POINT
Current AI adoption in school education (survey findings)
Argument 2
Primary purposes: searching academic information and writing assistance
EXPLANATION
Students mainly employ AI to look up academic content and to obtain help with writing tasks, rather than for advanced problem solving or calculations.
EVIDENCE
He explained that AI use is “concentrated for generally searching for new academic information while studying or writing assistance” [32].
MAJOR DISCUSSION POINT
Current AI adoption in school education (survey findings)
Argument 3
Students perceive AI as helpful for school‑exam and entrance‑exam preparation
EXPLANATION
Learners view AI tools as beneficial for preparing both school examinations and competitive entrance tests, suggesting perceived value in exam‑related study.
EVIDENCE
He noted “relatively high perceived helpfulness of AI platforms for both studying for school exams and entrance exams” [39].
MAJOR DISCUSSION POINT
Current AI adoption in school education (survey findings)
Argument 4
Major challenges: frequent hallucinations and low accuracy in logical/numerical tasks
EXPLANATION
A significant proportion of students encounter incorrect or fabricated AI outputs (hallucinations) and report that AI performs poorly on logical or numerical problems, limiting its reliability.
EVIDENCE
He reported that “students regularly encounter AI hallucination” and that “accuracy for logical or numerical subjects is relatively lower” [46-48].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Issues of AI hallucination and the need for explainability are highlighted in the transparency and explainability high-level session [S28].
MAJOR DISCUSSION POINT
Current AI adoption in school education (survey findings)
DISAGREED WITH
Patil Sir, Speaker 2
Argument 5
Learners overwhelmingly prefer human interaction over AI tutors
EXPLANATION
The data show a strong preference among students for traditional, in‑person teaching rather than AI‑based tutoring, positioning AI as a supplementary aid rather than a replacement.
EVIDENCE
He stated there is “overwhelming support for the idea that students still prefer traditional human interaction based learning” and that AI is “supplementary” [56-57].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The Bottom-up AI discussion stresses the importance of human mentorship and cautions against over-reliance on AI, supporting the preference for human interaction [S34].
MAJOR DISCUSSION POINT
AI as supplement versus replacement in teaching and learning
AGREED WITH
Professor KK Aggarwal, Pankaj Sir
Argument 6
Existing AI tools are less effective than YouTube/ICT for adaptive, personalized learning
EXPLANATION
When compared with established platforms like YouTube and ICT‑based resources, AI tools received lower ratings for providing adaptive, individualized learning experiences.
EVIDENCE
He reported that “there’s still overwhelming support for YouTube video or ICT based learning tools” and that AI tools are not yet delivering personalized solutions [50-53].
MAJOR DISCUSSION POINT
AI as supplement versus replacement in teaching and learning
Suresh Sir
3 arguments · 154 words per minute · 395 words · 153 seconds
Argument 1
Advocates a shift from a consumption‑driven digital culture to a creation‑driven one, urging the development of systems that enable people to produce content and exercise creativity.
EXPLANATION
Suresh argues that India must move beyond merely consuming digital media and instead foster an environment where citizens create and upload content, leveraging AI to unlock creative potential.
EVIDENCE
He recalls the earlier debate about being a “download nation” versus an “upload nation”. The conversation, he says, has now moved to whether India will remain a consumption nation or become a creative, production-focused one, and he calls for a system that enables people to create rather than just consume [388-390].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The Future of Learning session calls for a transition to a “creative nation” where people create rather than consume [S4]; the Leaders TalkX analysis critiques consumption-centric models and calls for a paradigm shift toward creation [S24].
MAJOR DISCUSSION POINT
Future vision for education outcomes and societal impact
Argument 2
Calls for aligning higher‑education qualifications with labour‑market needs, emphasizing skill‑based, problem‑solving degrees rather than traditional credentialism.
EXPLANATION
He proposes that university degrees should be directly linked to job requirements, allowing students to earn credentials by solving real‑world problems, thereby making skill and capability the driver of the economy.
EVIDENCE
He notes that in some countries a high-school diploma suffices for many jobs, and suggests that in India students should be able to pick a problem, solve it, and receive a degree, emphasizing that skill and capability should drive the economy rather than degrees dictating skill development [390-394].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Closing the capability gap by aligning education with labour-market demands is a focus of the AI Impact Summit session [S23]; the triple-helix approach to redesigning higher-education curricula is emphasized in the trusted AI framework [S27].
MAJOR DISCUSSION POINT
Future vision for education outcomes and societal impact
Argument 3
Urges breaking the silos between primary, secondary and higher education by using technology to interconnect the entire education system.
EXPLANATION
Suresh points out that the current education system operates in isolated layers and argues that technology should be leveraged to create seamless pathways across all levels, citing the integrated U.S. model as an example.
EVIDENCE
He observes that the 12th-grade, higher-education and primary education systems work in silos and stresses that technology must be employed to interconnect them, referencing how the United States has well-connected high-school and higher-education ecosystems [395-398].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The trusted AI framework advocates for integrated ecosystems linking school and higher-education sectors [S27]; technology-enabled personalized learning across education levels is described in Education meets AI [S33].
MAJOR DISCUSSION POINT
Reimagining higher education and institutions for the AI era
Agreements
Agreement Points
AI transformation requires rethinking institutional structures, curricula and governance
Speakers: Speaker 1, Professor KK Aggarwal, Speaker 2, Pankaj Sir, Patil Sir
Emphasised that AI transformation demands rethinking institutional structures, curricula and governance · Vision beyond incremental reforms: focus on skill‑based, problem‑solving curricula powered by AI · AI represents a 360° paradigm shift; institutions that do not adapt will be fossilised · Integration of school and higher‑education ecosystems; AI becomes the “spine” of the education system
All five speakers highlighted that the rapid emergence of AI forces a fundamental redesign of educational institutions, curricula and governance mechanisms to prepare for future jobs and skills [10-13][78-84][97-100][197-199].
POLICY CONTEXT (KNOWLEDGE BASE)
This consensus mirrors calls for new institutional structures and governance models to accommodate AI-driven change, as highlighted in the France-India AI summit and recent AI governance literature [S48][S49][S50].
Teachers remain essential as mentors and ethical guides; AI should be an assistant, not a replacement
Speakers: Professor KK Aggarwal, Pankaj Sir
AI should augment creativity, not shortcut thinking · Teachers remain essential as mentors, ethical guides and facilitators of inquiry
Both speakers stressed that AI must support, not supplant, human educators – teachers should evolve into mentors and designers of learning while AI serves as a supplemental tool [82-84][176-180].
POLICY CONTEXT (KNOWLEDGE BASE)
UNESCO’s AI-in-education guidance and the US NEA policy both stress the teacher’s central, mentorship role while positioning AI as a supportive tool rather than a substitute [S61][S68][S73][S74][S75].
Urgent need to build AI literacy among teachers and bridge digital‑infrastructure gaps
Speakers: Patil Sir, Pankaj Sir
Urgent need to build AI literacy among teachers and bridge the digital‑infrastructure divide · National programs (NPST, NMM) use AI to match mentors with mentees and support teacher development
Patil highlighted the scarcity of AI-savvy teachers and inadequate ICT infrastructure, while Pankaj described AI-driven mentor-matching programmes that aim to raise teacher capacity [221-227][192-195].
POLICY CONTEXT (KNOWLEDGE BASE)
Policy briefs on bridging the AI divide emphasize teacher training and broadband infrastructure as prerequisites for effective AI integration in schools [S56][S58][S59][S51][S61].
AI can dismantle language barriers and enable localized, multilingual solutions
Speakers: Speaker 2, Speaker 3, Patil Sir
India must become a global AI leader; AI can dismantle language barriers and expand access · Collaboration among industry, academia and startups creates localized AI solutions (e.g., language translation) · AI lab … translate Bhojpuri to multiple languages
All three participants pointed to AI-driven translation tools that break linguistic obstacles, from national-level AI labs to industry-academia startups delivering real-time multilingual support [138-144][292-309][214-218].
POLICY CONTEXT (KNOWLEDGE BASE)
Multiple reports underline AI’s potential for multilingualism and language preservation, including offline translation tools for inclusive societies [S53][S54][S55][S69].
Shift from a consumption‑driven model to a creation‑driven, problem‑solving nation, with AI as an economic catalyst
Speakers: Speaker 2, Suresh Sir, Patil Sir, Pankaj Sir
Shift from a consumption‑driven model to a creation‑driven, problem‑solving nation · Advocates a shift … creation … · Aim for “Viksit Bharat 2047”: AI as a catalyst for exponential economic growth and higher‑skill employment · AI plus education can take us towards Viksit Bharat 2047
The speakers converged on the vision that India must move beyond passive consumption toward AI-enabled creation and problem-solving, positioning AI as a driver of rapid economic growth and a high-skill future [387-401][388-390][207-210].
POLICY CONTEXT (KNOWLEDGE BASE)
Strategic documents from emerging economies describe AI as a catalyst for moving toward creation-oriented economies and problem-solving capabilities [S48][S51][S70].
Students prefer human interaction over AI tutors; AI should be used as a supplementary aid
Speakers: Pranav Kothari, Professor KK Aggarwal, Pankaj Sir
Learners overwhelmingly prefer human interaction over AI tutors · AI should augment creativity, not shortcut thinking · AI is assistant not master
Survey data showed a strong preference for in-person teaching, and both academic speakers reinforced that AI is best positioned as a complementary resource rather than a full replacement [56-57][82-84][176-180].
POLICY CONTEXT (KNOWLEDGE BASE)
Empirical studies from Indian higher-education contexts report student preference for human contact and view AI as a supplemental resource [S67][S73].
Treat AI as a tool, not a human, to avoid ethical pitfalls and mental‑health risks
Speakers: Patil Sir, Professor KK Aggarwal
Ethical stance: treat AI as a tool, not a human, to avoid misuse and mental‑health risks · AI should augment creativity, not shortcut thinking
Both speakers warned against anthropomorphising AI, emphasizing that it must remain a machine-based aid to safeguard ethical standards and user well-being [234-239][82-84].
POLICY CONTEXT (KNOWLEDGE BASE)
Policy frameworks caution against anthropomorphising AI, highlighting ethical and mental-health considerations in educational settings [S75][S73][S68].
Establish AI‑driven regulatory and assessment mechanisms for education
Speakers: Pankaj Sir, Speaker 1
Establish AI‑driven regulator for teacher education; 70‑80 % of assessment to be AI‑based · Emphasised that AI transformation demands rethinking institutional structures, curricula and governance
Pankaj proposed a new AI-centric regulator handling the bulk of assessments, while Speaker 1 called for broader governance reforms to accommodate AI’s impact [404-408][10-13].
POLICY CONTEXT (KNOWLEDGE BASE)
UNESCO and European policy initiatives call for dedicated AI assessment and regulatory mechanisms within education systems [S61][S62][S66][S68].
Similar Viewpoints
Both highlighted the need to move beyond incremental curriculum tweaks toward a comprehensive, skill‑oriented redesign driven by AI [10-13][78-84].
Speakers: Speaker 1, Professor KK Aggarwal
Emphasised that AI transformation demands rethinking institutional structures, curricula and governance. Vision beyond incremental reforms: focus on skill‑based, problem‑solving curricula powered by AI.
Both framed AI as a transformative, 360° shift that can break linguistic barriers and position India as a world leader [138-144][97-100].
Speakers: Patil Sir, Speaker 2
India must become a global AI leader; AI can dismantle language barriers and expand access. AI represents a 360° paradigm shift; institutions that do not adapt will be fossilised.
Both stressed that teachers must stay central to education while rapidly up‑skilling them in AI and digital tools [176-180][221-227].
Speakers: Pankaj Sir, Patil Sir
Teachers remain essential as mentors, ethical guides and facilitators of inquiry. Urgent need to build AI literacy among teachers and bridge the digital‑infrastructure divide.
Unexpected Consensus
Both government and industry see offline, on‑device AI translation as a key solution for language inclusion
Speakers: Patil Sir, Speaker 3
AI lab … translate Bhojpuri to multiple languages. AI PCs run locally, offering voice‑to‑voice translation without cloud or internet.
While Patil discussed a government-run AI lab providing multilingual translation, Speaker 3 highlighted Intel’s on-device AI PCs that deliver the same capability offline, showing an unexpected alignment between public and private sectors on offline AI solutions for linguistic inclusion [214-218][332-337].
POLICY CONTEXT (KNOWLEDGE BASE)
Policy discussions on preserving linguistic diversity stress the importance of on-device, offline translation to ensure equitable access [S54][S55][S53].
Overall Assessment

There is strong, cross‑sectoral consensus that AI is a transformative force demanding systemic redesign of education, preservation of the teacher’s mentorship role, massive capacity‑building to bridge digital divides, and leveraging AI for language inclusion and economic growth.

High consensus across government, academia and industry, indicating a solid foundation for coordinated policy actions and collaborative initiatives on AI‑enabled education.

Differences
Different Viewpoints
Extent of AI use in high‑stakes assessment and certification
Speakers: Pankaj Sir, Pranav Kothari, Professor KK Aggarwal
Establish AI‑driven regulator for teacher education; 70‑80% of assessment to be AI‑based. Major challenges: frequent hallucinations and low accuracy in logical/numerical tasks. AI should augment creativity, not shortcut thinking.
Pankaj proposes that the future teacher-education regulator should rely on AI for the majority of assessment (70-80%) [404-408]. Pranav, citing his survey, warns that students regularly encounter AI hallucinations and that accuracy on logical or numerical tasks is low, casting doubt on the reliability of AI for high-stakes evaluation [46-48]. Professor Aggarwal adds a cautionary note that AI must not become a shortcut that erodes creative thinking, implying that heavy reliance on AI for assessment could be counter-productive [82-84]. The three positions therefore clash over how much AI should be trusted for formal assessment.
POLICY CONTEXT (KNOWLEDGE BASE)
Debates around AI-enabled high-stakes testing reference emerging assessment standards and the need for safeguards, as outlined in UNESCO’s assessment guidelines [S61][S62].
Pace of AI rollout versus concerns about reliability and equity
Speakers: Patil Sir, Pranav Kothari, Speaker 2
Significant investment in AI Centres of Excellence, MOUs with tech firms, and AI schools on IIT campuses. Major challenges: frequent hallucinations and low accuracy in logical/numerical tasks. AI represents a 360° paradigm shift; institutions that do not adapt will be fossilised.
Patil emphasises a ‘quantum jump’ in AI adoption, pointing to massive investments, AI centres of excellence and rapid user uptake (e.g., 5 crore users of ChatGPT in 40 days) [214-219][239-244]. Pranav, however, highlights persistent technical shortcomings, namely hallucinations and poor accuracy, that limit AI’s usefulness in education [46-48]. Speaker 2 frames AI as a 360° paradigm shift that will fossilise any institution that does not move quickly, thereby pressuring rapid adoption [97-100]. The tension lies between Patil’s optimism for swift, large-scale deployment and Pranav’s caution about current tool reliability and equity of access.
POLICY CONTEXT (KNOWLEDGE BASE)
Analyses of the “hard power” of AI highlight a mismatch between rapid technology deployment and slower policy/equity safeguards, echoing concerns about digital divides [S70][S56][S58].
Governance model for AI‑enabled education: centralized AI regulator vs multi‑stakeholder, human‑centric governance
Speakers: Pankaj Sir, Speaker 1, Professor KK Aggarwal
Establish AI‑driven regulator for teacher education; 70‑80% of assessment to be AI‑based. Emphasised that AI transformation demands rethinking institutional structures, curricula and governance. AI should augment creativity, not shortcut thinking.
Pankaj advocates creating a new AI-oriented regulator that will automate most assessment functions [404-408]. Speaker 1, in the opening remarks, calls for a broader redesign of institutional structures, curricula and governance to address AI-driven transformation, implying a more distributed, policy-driven approach [10-13]. Professor Aggarwal stresses that AI must be used to supplement human creativity rather than replace it, suggesting governance that keeps humans central to decision-making [82-84]. The disagreement concerns whether AI governance should be highly automated and centralized or retain strong human oversight and multi-stakeholder input.
POLICY CONTEXT (KNOWLEDGE BASE)
International and European AI governance literature contrasts centralized regulatory approaches with distributed, multi-stakeholder models for education AI oversight [S49][S64][S65][S66].
Unexpected Differences
Optimism about AI’s ability to bridge language barriers versus caution about AI’s hallucinations and ethical risks
Speakers: Speaker 3, Pranav Kothari, Patil Sir
Collaboration among industry, academia and startups creates localized AI solutions (e.g., language translation). Major challenges: frequent hallucinations and low accuracy in logical/numerical tasks. AI is a tool, not a human; misuse can cause mental‑health risks.
Speaker 3 (Aditi) celebrates industry-academia collaborations that have already produced on-device, real-time translation tools, presenting AI as a near-ready solution for multilingual education [304-309][332-337]. In contrast, Pranav’s survey data highlight frequent hallucinations and low accuracy, especially for logical or numerical tasks, suggesting that AI outputs cannot yet be trusted for critical learning [46-48]. Patil adds an ethical warning that treating AI as a human can cause mental-health problems [234-239]. The unexpected tension is between a highly optimistic view of AI’s immediate linguistic benefits and a grounded concern about its current technical and ethical shortcomings.
POLICY CONTEXT (KNOWLEDGE BASE)
While AI promises multilingual inclusion, scholars warn of hallucinations and ethical pitfalls, urging responsible deployment [S53][S54][S55][S70].
Overall Assessment

The panel broadly concurs that AI will reshape Indian education, but the debate centres on how quickly and how deeply AI should be embedded. The most salient disagreements involve (1) the proportion of assessment that should be automated, (2) the speed of large‑scale AI deployment versus concerns about reliability, hallucinations and equity, and (3) the governance model—centralised AI‑driven regulator versus human‑centric, multi‑stakeholder oversight. These divergences reflect a tension between visionary, technology‑first strategies and cautionary, evidence‑based approaches.

Moderate to high. While there is a shared vision of AI’s strategic importance, the conflicting positions on assessment automation, rollout pace, and governance create substantive policy friction. If unresolved, these disagreements could lead to fragmented implementation—some institutions may push for rapid AI‑driven assessment while others retain traditional safeguards—potentially undermining the coherence of national AI‑education strategies.

Partial Agreements
All participants agree that AI must be incorporated into education and that it offers transformative potential. However, they diverge on the primary mechanism: Pankaj stresses teacher‑centred mentorship with AI as an assistant; Patil calls for systemic integration of AI across school and higher education; Aggarwal warns that AI should only augment creativity; Suresh pushes for a broader societal shift toward creation rather than consumption; Speaker 3 highlights industry‑academia‑startup collaborations to produce localized tools. The shared goal is AI‑enabled education, but the pathways—curriculum redesign, teacher‑mentor models, systemic integration, cultural shift, and private‑sector partnerships—are contested.
Speakers: Speaker 1, Pankaj Sir, Patil Sir, Professor KK Aggarwal, Suresh Sir, Speaker 3
Emphasised that AI transformation demands rethinking institutional structures, curricula and governance. Teachers remain essential as mentors, ethical guides and facilitators of inquiry. Integration of school and higher‑education ecosystems; AI becomes the “spine” of the education system. AI should augment creativity, not shortcut thinking. Shift from a consumption‑driven digital culture to a creation‑driven, problem‑solving nation. Collaboration among industry, academia and startups creates localized AI solutions (e.g., language translation).
Both the survey presenter (Pranav) and the policy makers (Patil, Speaker 2) recognise that AI usage is already high and that India must act quickly. Pranav provides empirical evidence of 50 % adoption in private schools [25-26]; Patil and Speaker 2 argue for massive investment and rapid policy response [214-219][97-100]. The disagreement lies in the emphasis: Pranav focuses on documenting current usage and its limitations, while Patil and Speaker 2 stress large‑scale rollout and strategic positioning, with less attention to the quality concerns raised by the survey.
Speakers: Pranav Kothari, Patil Sir, Speaker 2
High usage among private‑school students (~50%). Significant investment in AI Centres of Excellence, MOUs with tech firms, and AI schools on IIT campuses. AI represents a 360° paradigm shift; institutions that do not adapt will be fossilised.
Takeaways
Key takeaways
AI adoption among private‑school students in Delhi is high (≈50%) and is used mainly for information search and writing assistance.
Students view AI as helpful for exam preparation, but report frequent hallucinations and low accuracy in logical/numerical tasks.
Across the panel, AI is seen as a supplement to, not a replacement for, human teachers; teachers should act as mentors, ethical guides and facilitators of inquiry.
AI represents a 360° paradigm shift; institutions that fail to adapt risk becoming obsolete.
India must aim to become a global AI leader, leveraging AI to break language barriers and expand educational access.
Reimagining higher education requires moving beyond incremental reforms toward skill‑based, problem‑solving curricula powered by AI.
Policy and governance need AI‑driven regulatory mechanisms (e.g., AI‑based assessment for teacher education) and national programmes such as NPST and NMM that use AI for mentor‑mentee matching.
Building AI literacy among teachers and closing the digital‑infrastructure divide are critical prerequisites.
Industry‑academia‑startup collaborations (e.g., Intel’s Unnati, AI‑enabled translation devices, AI Centres of Excellence) are essential for creating localized, offline AI solutions.
A long‑term vision (Viksit Bharat 2047) envisions AI as the spine of the education system, driving economic growth, preserving Indian knowledge systems, and fostering a creation‑driven society.
Resolutions and action items
Launch and disseminate the “AI in School Education” report (completed during the session).
Develop an AI‑driven regulator for teacher education, targeting 70‑80% AI‑based assessment (proposed by Pankaj Sir).
Scale national programmes (NPST, NMM) that use AI to match mentors with teachers and students (Pankaj Sir).
Introduce AI curriculum from Grade 3 onward to teach AI concepts and responsible use (Patil Sir).
Create AI literacy and training programmes for teachers across schools and higher‑education institutions (Patil Sir, Pankaj Sir).
Establish AI Centres of Excellence and MOUs with technology firms (e.g., IIT Madras, Intel) to provide tools, internships and localized solutions (Patil Sir, Speaker 3).
Integrate school and higher‑education ecosystems through university‑school outreach programmes (e.g., COEP’s 100‑school engagement) (Patil Sir).
Promote development of AI tools that operate offline/on‑device to reduce reliance on cloud and mitigate hallucination risks (Speaker 3).
Encourage research ethics education and AI‑assisted content creation while ensuring human oversight (Pankaj Sir).
Unresolved issues
How to reliably address AI hallucinations and improve accuracy for logical/numerical tasks in school settings.
Concrete strategies and funding mechanisms to bridge the digital‑infrastructure gap in rural and tribal schools.
Standardised frameworks for AI‑based adaptive and personalized learning that outperform existing YouTube/ICT resources.
Mechanisms for assessing AI‑generated content and ensuring ethical use without over‑reliance on AI.
Ways to embed Indian knowledge systems and regional languages into AI models at scale.
Long‑term governance model for AI regulation in education, including accountability and data privacy.
Specific timelines and responsible agencies for implementing AI‑driven assessment and curriculum reforms.
Suggested compromises
Position AI as an augmentative tool that supports creativity rather than a shortcut that replaces thinking (Professor KK Aggarwal).
Maintain human teachers as mentors and ethical guides while leveraging AI for routine tasks and content delivery (Pankaj Sir, Patil Sir).
Combine AI‑based tools with traditional resources such as YouTube and ICT, acknowledging that current AI is less effective for personalized learning (Pranav Kothari).
Adopt a “process‑rich evidence” approach to learning assessment rather than solely product‑based metrics (Pankaj Sir).
Treat AI as a machine, not a human entity, to mitigate mental‑health risks and prevent over‑dependence (Patil Sir).
Integrate AI across the education continuum (school to higher education) while preserving the distinct roles of each level (Patil Sir).
Thought Provoking Comments
Students report frequent AI hallucinations and incorrect information, and despite this, they still perceive AI as helpful for studying and exams, but overwhelmingly prefer traditional human interaction over AI tutors.
Highlights the paradox of high AI adoption alongside critical concerns about accuracy and the irreplaceable value of human teachers, grounding the discussion in real student experiences.
Shifted the conversation from abstract policy to concrete challenges; prompted panelists to address how AI should supplement rather than replace teaching and sparked subsequent remarks on AI as an assistant.
Speaker: Pranav Kothari
AI must supplement our creativity, not become a shortcut that reduces our thinking powers.
Introduces a nuanced caution that AI’s role should enhance, not diminish, human intellectual effort, framing the ethical dimension of AI integration.
Reoriented the debate toward preserving critical thinking; influenced later speakers (e.g., Pankaj and Patil) to emphasize mentorship, ethical use, and the need for AI governance.
Speaker: Professor KK Aggarwal
AI is a 360‑degree paradigm shift that will determine whether institutions become fossilized or become global leaders; it dismantles language barriers, enabling anyone to communicate in any language, and India must reimagine its education system for 2050‑2100 to become an AI leader.
Broadens the scope from immediate educational concerns to national strategic vision, linking AI adoption with economic destiny and geopolitical power.
Created a turning point that moved the panel from discussing current usage to long‑term systemic transformation; other panelists referenced this vision when outlining future institutional models and policy priorities.
Speaker: Speaker 2 (Ramanan)
AI will not replace teachers; teachers will become mentors, learning designers, and ethical guides, while AI serves as an assistant that requires supervision and governance rather than mere compliance.
Reframes the teacher’s role in the AI era, distinguishing between governance (compliance) and leadership (innovation), and stresses the need for AI‑assisted assessment and ethical standards.
Deepened the analysis of AI’s operational role in education, leading to discussions on AI‑driven assessment, regulator AI tools, and the importance of mentorship, influencing Patil’s and Aggarwal’s later points.
Speaker: Pankaj Sir
Comparing adoption timelines: telephone took 75 years to reach 5 crore users, whereas ChatGPT reached the same in 40 days—a quantum jump that creates massive challenges for infrastructure and equitable access.
Provides a striking quantitative illustration of AI’s rapid diffusion, emphasizing urgency and the digital divide, which reframes the conversation around scalability and policy response.
Prompted panelists to address infrastructure gaps, rural‑urban disparities, and the need for coordinated government‑industry action; it also reinforced the earlier point about AI’s transformative speed.
Speaker: Patil Sir
Intel is developing AI that runs locally on devices, offering voice‑to‑voice translation without internet, creating a 24‑hour, language‑personalized tutor that mitigates hallucination risks.
Introduces a concrete technological solution that directly tackles earlier concerns about hallucinations and language barriers, linking industry innovation to educational needs.
Shifted the dialogue toward practical implementations; other speakers referenced local AI tools as examples of how to safely integrate AI, and it reinforced the theme of AI as an enabling supplement.
Speaker: Aditi Nanda (Intel)
We must move from a consumption‑nation to a creation‑nation, shifting higher education from degree‑awarding to problem‑solving institutions, and interconnect primary, secondary, and tertiary systems.
Calls for a fundamental reorientation of the education ecosystem toward creativity, problem‑solving, and systemic integration, expanding the conversation beyond technology to pedagogy and societal outcomes.
Inspired subsequent remarks about interdisciplinary curricula, AI‑driven mentorship, and the need for seamless pathways across education levels; it reinforced the long‑term vision introduced earlier.
Speaker: Suresh Yadav
Overall Assessment

The discussion evolved from presenting survey data to a strategic, forward‑looking dialogue about AI’s role in India’s education system. Key comments acted as catalysts: Pranav’s data grounded the debate, Aggarwal’s caution about creativity, Ramanan’s grand vision of AI as a national paradigm shift, Pankaj’s redefinition of the teacher’s role, Patil’s stark adoption‑speed analogy, Aditi’s concrete local‑AI solution, and Suresh’s call for a creation‑focused, integrated ecosystem. Each insight redirected the conversation, introduced new dimensions (ethical, infrastructural, pedagogical, geopolitical), and prompted other participants to expand on or respond to these ideas, collectively shaping a nuanced consensus that AI should be a supervised, creative‑enhancing tool embedded within a reimagined, inclusive educational framework.

Follow-up Questions
How does AI differ from the earlier IT transformation in terms of challenges for higher education institutions?
Understanding the unique implications of AI compared to the past IT wave is essential for designing appropriate institutional strategies.
Speaker: Speaker 1 (to Prof. KK Aggarwal)
How should educational institutions assess and address the rapid changes brought by AI and other emerging technologies?
A systematic assessment framework is needed to guide policy and practice in the face of AI-driven disruption.
Speaker: Speaker 1 (to Speaker 2)
How can the wide diversity of Indian institutions ensure a uniform and effective response to AI challenges despite disparities in resources, infrastructure, and expertise?
Equitable AI adoption requires strategies that work across varied contexts, from elite universities to remote schools.
Speaker: Speaker 1 (to Prof. Pankaj Arora)
What are the two (or three) key features or changes that should define the future of higher education in India?
Clarifying the vision for higher education will shape reforms, curricula, and governance for the AI era.
Speaker: Speaker 1 (to Prof. KK Aggarwal, also addressed to other panelists)
What are the two (or three) essential characteristics that a future teacher‑education institution should embody?
Identifying core attributes will help redesign teacher‑training to align with AI‑augmented pedagogy.
Speaker: Speaker 1 (to Prof. Pankaj Arora)
How should the Ministry envision and drive transformation of future universities and higher‑education institutions in the age of AI?
Policy direction from the ministry is critical for scaling AI integration across the higher‑education ecosystem.
Speaker: Speaker 1 (to Patil Sir)
What is the prevalence of AI hallucinations among students, and what mitigation strategies can reduce their impact on learning?
Hallucinations compromise information accuracy; research is needed to quantify and address them.
Speaker: Pranav Kothari
Why is the accuracy of current AI tools lower for logical and numerical subjects, and how can performance be improved?
Students rely on AI for problem solving; enhancing accuracy in these domains is vital for trustworthy support.
Speaker: Pranav Kothari
How do AI platforms compare with traditional resources such as YouTube or ICT‑based learning in terms of learning effectiveness?
Understanding comparative efficacy will inform decisions about resource allocation and pedagogical design.
Speaker: Pranav Kothari
To what extent can free generative‑AI models provide adaptive, personalized learning compared with specialized AI tools?
Assessing personalization capabilities will guide investment in appropriate AI solutions for diverse learners.
Speaker: Pranav Kothari
What are students’ preferences for human‑teacher interaction versus AI tutors, and how does this affect learning outcomes?
Balancing AI assistance with human mentorship is crucial for effective pedagogy.
Speaker: Pranav Kothari
How can AI be leveraged to break language barriers in education, e.g., real‑time translation of regional languages?
Language accessibility is a major equity issue; AI‑driven translation could democratize content delivery.
Speaker: Speaker 2 (Patil Sir) and Aditi Nanda
What is the effectiveness of AI‑driven tools for detecting school‑student dropouts and supporting re‑engagement?
Early identification of at‑risk students can improve retention; empirical evidence is needed.
Speaker: Patil Sir
How can AI be deployed to support rural and tribal education where infrastructure and connectivity are limited?
Ensuring AI benefits reach underserved areas requires research on low‑resource implementations.
Speaker: Patil Sir and Aditi Nanda
What ethical guidelines and safeguards are needed to prevent bias, misuse, and over‑reliance on AI in education?
Ethical frameworks are essential to protect learners and maintain trust in AI systems.
Speaker: Multiple panelists (Patil Sir, Pankaj Arora, Prof. KK Aggarwal)
What impact does introducing an AI curriculum at early grades (e.g., third grade) have on students’ understanding, attitudes, and future skill development?
Early exposure may shape AI literacy; systematic study is required to gauge outcomes.
Speaker: Patil Sir
Can offline, device‑based AI solutions (e.g., AI PC) effectively deliver educational content in low‑connectivity settings, and how do they compare with cloud‑based models?
Local AI processing could reduce hallucinations and connectivity barriers; evaluation is needed.
Speaker: Aditi Nanda
How can AI be used to supplement creativity rather than become a shortcut that diminishes critical thinking?
Ensuring AI enhances, not replaces, creative cognition is vital for long‑term educational quality.
Speaker: Prof. KK Aggarwal
What is the projected contribution of AI to India’s economic growth by 2050/2100, and what institutional changes are required to realize this potential?
Linking AI adoption to macro‑economic outcomes informs national‑level policy and investment.
Speaker: Speaker 2 (Suresh Yadav)
How can AI be integrated into assessment and evaluation processes for teacher‑education and student learning, and what are the challenges?
AI‑based assessment could increase efficiency, but reliability and fairness need investigation.
Speaker: Prof. Pankaj Arora
What role should AI play in curriculum development, and how can appropriate human supervision be ensured?
Curriculum design must balance AI automation with expert oversight to maintain relevance and quality.
Speaker: Prof. Pankaj Arora
How can AI‑driven skill development programs be aligned with future job market demands to reduce skill obsolescence?
Mapping AI‑enhanced competencies to emerging occupations is necessary to future‑proof the workforce.
Speaker: Multiple panelists (Prof. KK Aggarwal, Suresh Yadav, Patil Sir)

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

AI Innovation in India

Session at a glanceSummary, keypoints, and speakers overview

Summary

The AI Impact Summit opened with host Tarunima Prabhakar introducing three young innovation champions who would share their entrepreneurial journeys [1-4]. Adhiraj Chauhan, an 11th-grade student, presented Delta AI Revolution, an AI-driven mental-health support platform that addresses the shortage of psychiatrists in India by offering therapy techniques for over 100 disorders and has already partnered with clinics and the Delhi Psychiatrist Association while shifting toward a B2C model [5-23]. He credited the Atal Innovation Mission’s Tinkering Lab, Intel’s mentorship, and government funding for enabling his prototype and MVP development [9-13][23-24]. Shreenidhi Baliga described “Charades,” a glove that converts sign language to speech and speech to Braille to assist the deaf-blind community, built using deep-learning models trained on thousands of images and supported by the Tinkerpreneur Challenge and Intel mentorship programs [27-34]. Jaiwardhan Tyagi, who recently appeared on Shark Tank India, outlined his Neuropex technology that combines multimodal vision-language models for radiology and dermatology to handle distribution shifts and generate clinical reports, emphasizing a framework that reasons across modalities rather than single-task classifiers [37-55][56-58]. He also highlighted the need for systems that can adapt to new imaging contrasts and avoid hallucinations, positioning his work as a solution to these challenges [44-46][47-50]. Atal Innovation Mission director Deepak Bagla celebrated the mission’s 10-year anniversary, describing AI as a “delta multiplier” that will empower India’s growing population and stressing the urgency of reskilling the workforce for the next decade [65-84]. Intel Vice-President Sarah Kemp praised the young technologists, affirmed India’s people as the nation’s superpower, and urged responsible AI development that puts humanity first, while thanking the Indian government and partners for their support [112-130]. 
Ojaswi Babbar then presented the mission’s evaluation framework for AI innovations, which includes rapid validation, controlled corporate pilots, revenue-model optimisation, and strategic investment to scale promising solutions [148-172]. He emphasized that successful Indian AI ventures must combine domain depth, proprietary data, and access to national infrastructure to achieve global impact [173-174]. Gaurav Dagaonkar introduced Hooper, India’s first native music-licensing platform that uses multimodal AI to tag songs by mood and match them with brand needs, facilitating legal and ethical licensing for creators and brands alike [207-236]. He noted that Hooper’s AI layer processes audio, creates metadata, and connects major labels and influencers with over 220 brands, positioning the platform within the Atal Innovation Mission ecosystem [221-227]. The summit concluded with the unveiling of the Tinkerpreneur Compendium and the recognition of the top 50 AI tinkerpreneurs selected from 3,500 applicants, a process supported by Intel and the Atal Innovation Mission [131-138][242-250].


Keypoints

Major discussion points


Showcase of youth-led AI innovations – Three young innovators presented their projects:


• Adhiraj Chauhan described “Delta AI Revolution,” an AI-driven mental-health support platform addressing the psychiatrist-to-population gap [14-16][5-24].


• Shreenidhi Baliga demonstrated a glove that converts sign-language to speech and speech to Braille for the deaf-blind, built with machine-learning models from the Tinkerpreneur boot-camps [27-34].


• Jaiwardhan Tyagi explained his “Neuropex EIS” system for radiology and dermatology, highlighting challenges of distribution-shift in medical AI and a multimodal reasoning framework [42-48][51-63].


Atal Innovation Mission (AIM) ecosystem and partnership support – The summit emphasized AIM’s 10-year milestone, its role as India’s largest grassroots innovation mission, and the collaborative backing from Intel, the Ministry of Electronics & IT, and other partners that provide mentorship, funding, and validation for young entrepreneurs [65-70][73-84][112-130][148-174].


Broader AI challenges and opportunities for India – Speakers framed AI as a “delta multiplier” for national growth, citing the mental-health workforce shortage, the need for AI systems that remain robust under distribution shifts, and the responsibility of Indian technologists to drive ethical, people-first AI [14-16][42-45][76-84][118-125].


Introduction of a commercial AI-driven music-licensing platform – Gaurav Dagaonkar presented “Hooper,” India’s first native music-licensing marketplace that uses multimodal AI to tag songs, match them with brand needs, and ensure legal, ethical royalty distribution [187-236][239-241].


Ceremonial recognition of top “tinkerpreneurs” – The event concluded with the unveiling of the Tinkerpreneur Compendium, awarding certificates to the 50 selected students from ~3,500 applicants, and a group photograph, underscoring community celebration and future commitment [131-146][255-280].


Overall purpose / goal


The summit aimed to celebrate the 10-year anniversary of the Atal Innovation Mission by highlighting and rewarding youth-driven AI solutions, showcasing the supportive ecosystem (AIM, Intel, government), and inspiring a broader conversation about responsible, impact-focused AI development in India.


Overall tone and its evolution


– The opening was enthusiastic and celebratory, with Tarunima’s warm welcome and applause for the young champions [1-4].


– During the innovators’ presentations the tone shifted to informative and technical, focusing on problem statements, prototype details, and future roadmaps [5-63].


– Deepak’s and Sarah’s remarks introduced a visionary and motivational tone, emphasizing AI’s societal impact, India’s growth potential, and the responsibility of technologists [65-84][112-130].


– Ojaswi’s segment added a pragmatic, evaluative tone, outlining concrete frameworks for validation, pilots, and scaling [148-174].


– The closing ceremony returned to a festive and appreciative tone, celebrating achievements and reinforcing community spirit [131-146][255-280].


Overall, the discussion moved from celebration → technical showcase → strategic vision → practical evaluation → ceremonial acknowledgment, maintaining an upbeat and forward-looking atmosphere throughout.


Speakers

Tarunima Prabhakar


– Area of Expertise: Event moderation, AI & innovation advocacy


– Role: Event moderator/host


– Title:


Adhiraj Chauhan


– Area of Expertise: AI-driven mental health support, entrepreneurship


– Role: Founder & CEO of Delta AI Revolution


– Title: Founder & CEO, Delta AI Revolution


Shreenidhi Baliga


– Area of Expertise: Assistive technology for deaf-blind (sign-language glove)


– Role: Student innovator


– Title:


Jaiwardhan Tyagi


– Area of Expertise: AI in healthcare (radiology & dermatology vision-language models)


– Role: Founder / entrepreneur (Neuropex)


– Title:


Deepak Bagla


– Area of Expertise: Innovation ecosystem leadership, policy


– Role: Mission Director, Atal Innovation Mission


– Title: Mission Director, Atal Innovation Mission


Sarah Kemp


– Area of Expertise: Government-industry relations, AI policy, corporate leadership


– Role: Vice President International Government Affairs, Intel


– Title: Vice President International Government Affairs, Intel


Ojaswi Babbar


– Area of Expertise: Startup evaluation, incubation & acceleration frameworks


– Role: Speaker / evaluator (Atal Innovation Mission)


– Title:


Gaurav Dagaonkar


– Area of Expertise: Music licensing, AI-enabled audio tagging and recommendation


– Role: Co-founder & CEO of Hooper AI


– Title: Co-founder & CEO, Hooper AI


Shubham Tribedi


– Area of Expertise: Event coordination, certificate distribution


– Role: Event coordinator


– Title:


Additional speakers:


(None – all speakers in the transcript are accounted for in the list above.)


Full session report
Comprehensive analysis and detailed insights

The AI Impact Summit opened with host Tarunima Prabhakar inviting three young innovators, describing them as “very special young innovation champions,” and asking them to share their journeys [1-4].


Adhiraj Chauhan – an 11th-grade student and founder-CEO of Delta AI Revolution – explained that India’s mental-health system suffers from a severe psychiatrist shortage (≈ 1 psychiatrist per 100,000 people) [14-16]. He named his platform “Delta” to signify change [5-8] and described it as an AI-driven system that delivers a range of therapy techniques for more than 100 mental-health disorders [17-18]. The startup already supplies clinics such as Dr Mora Psychiatric Clinic, is in talks with the Delhi Psychiatrist Association, has reached roughly 20 clients, and is shifting from a B2B to a B2C model [19-23]. He thanked the Atal Innovation Mission and Intel for the opportunity, as well as his school, the Ministry of Electronics & IT, and other supporters [9-13][23-24].


Shreenidhi Baliga – a student from BG’s National Public School, Bangalore – highlighted her project Charades, a glove that converts sign-language gestures into speech and speech into Braille to aid the deaf-blind community [31-33]. The glove’s deep-learning models were trained on thousands of images, a development enabled by the Tinkerpreneur Challenge boot-camps, mentorship from the Atal Innovation Mission and Intel, and additional guidance from the summit’s mentoring sessions [33-34]. She expressed gratitude to all partners who facilitated the project [35].


Jaiwardhan Tyagi (also referred to as Jaywardhan) – recently featured on Shark Tank India where he secured funding from Sir Aman Gupta and a founder fellowship from Sir Ritesh Agarwal [37-38] – framed his work within the evolution of AI in healthcare, comparing early radiology AI to a metal detector and today’s systems to a full airport security suite [38-40]. He warned that “distribution shift” causes vision-language models to hallucinate when faced with new imaging contrasts, attributing the issue to an “obsession with scaling” rather than model architecture [42-46]. His solution, the Neuropex EIS technology, comprises separate radiology and dermatology pipelines that combine dynamic MRI sequencing, CLIP-style retrieval-augmented models, and multimodal reasoning to generate real-time clinical reports [51-55][58-60]; a related pipeline, DeepDom, uses a visual-language model trained on histopathology data to answer clarifying questions and produce reports, and is live for sign-up on the Neuropexia site [61-63]. He positioned his work as aligned with India’s goal of leveraging technology for outcome-driven impact [62-63].
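The retrieval-augmented, distribution-shift-aware behaviour Tyagi describes can be illustrated with a toy sketch. Nothing below comes from Neuropex itself; the embeddings, labels, threshold and function names are all hypothetical. The idea is simply that a query image embedding is compared against a bank of labelled reference embeddings, and when even the closest reference is a poor match (a crude out-of-distribution signal), the system defers to a human rather than hallucinating a finding.

```python
import numpy as np

def retrieve_nearest(query_emb, reference_embs, reference_labels, k=3):
    """Return the k reference labels most similar to the query
    embedding under cosine similarity, with their scores."""
    q = query_emb / np.linalg.norm(query_emb)
    refs = reference_embs / np.linalg.norm(reference_embs, axis=1, keepdims=True)
    sims = refs @ q
    top = np.argsort(sims)[::-1][:k]
    return [reference_labels[i] for i in top], sims[top]

def grounded_prediction(query_emb, reference_embs, reference_labels,
                        k=3, min_sim=0.5):
    """Majority-vote over retrieved neighbours; abstain when even the
    best match is weak -- a crude guard against distribution shift."""
    labels, sims = retrieve_nearest(query_emb, reference_embs, reference_labels, k)
    if sims[0] < min_sim:
        return "out-of-distribution: defer to a human reader"
    return max(set(labels), key=labels.count)
```

A production system would use learned image encoders and calibrated thresholds, but the abstention step captures the talk’s point: a model grounded in retrieved evidence can notice when its evidence is thin.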


Mission Director Deepak Bagla celebrated the 10-year anniversary of the Atal Innovation Mission (AIM), describing it as “the world’s largest grassroots innovation mission” that has nurtured over a crore (10 million) young entrepreneurs through 10 000 tinkering labs [66-68]. He noted that the mission is currently seeking to create about one million jobs per month and that 12-, 13- and 14-year-olds are already being prepared to take on emerging tasks [66-68]. Bagla warned that the next decade will demand massive reskilling, as mental-health challenges and rapid technological disruption will require a workforce capable of continual learning [65-68][73-84]. He described AI as the “biggest delta multiplier” for India [76-80] and reiterated that AIM, together with partners such as Intel, will continue to provide the ecosystem needed for young innovators to thrive [70-71].


Intel Vice-President International Government Affairs Sarah Kemp thanked the audience for the rare chance to “make a difference,” praised the ten-year journey of the summit, and invited all “future technologists” to stand [112-116]. She highlighted India’s “superpower” as its people, lauded the government’s supportive AI framing [118-121], and stressed that AI must be “people-first,” urging the next generation to wield talent responsibly for societal good [122-125]. Kemp expressed optimism about the partnership between Intel, AIM, and the innovators and looked forward to another decade of collaboration [126-130].


Following Kemp, Ojaswi Babbar presented the AIM evaluation framework for AI innovations. He outlined four pillars: rapid validation (including stress-testing feasibility) [155-158], controlled corporate pilots, optimisation of revenue models, and access to strategic capital [164-170]. He emphasized the philosophy “fail fast, but we need to fail forward” [162-164] and argued that successful Indian AI ventures must possess deep domain expertise, proprietary data that creates barriers to entry, and the ability to leverage national infrastructure for distribution [173-174]. He positioned AIM and Intel as key strategic investors that can help startups scale from “0 to 11” [171-172].


Gaurav Dagaonkar, co-founder and CEO of Hooper, introduced India’s first native music-licensing marketplace. Hooper uses a multimodal AI stack to process raw audio, generate tags such as mood, and match songs with brand requirements, ensuring legal and ethical royalty distribution [228-236]. The platform hosts major labels (e.g., Yash Raj Films, Universal Music) and artists (including A.R. Rahman), serves over 220 brands and 300 000 influencers [222-227], and is already used by prominent creators such as Ranveer Brar, Ashish Vidyarthi, Sadhguru, and the YouTube channel of Maharashtra’s Chief Minister Devendra Fadnavis, with plans to soundtrack the Prime Minister’s social-media content [221-227]. Dagaonkar illustrated how AI-generated metadata enables brands like Baskin-Robbins or Himalaya to discover suitable tracks and invited creators to build derivative works on top of Hooper’s licensed catalogue [233-241]. He highlighted Hooper’s integration within the AIM ecosystem and its role in fostering a responsible creative economy [221-227].
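The tagging-and-matching flow described here (songs tagged by a multimodal model, brands fingerprinted by an LLM, then matched) can be sketched as a simple set-overlap ranker. This is a hypothetical illustration, not Hooper’s implementation: the `Song`/`Brand` types and Jaccard scoring are invented for clarity, and a real system would likely match learned embeddings rather than literal tag sets.

```python
from dataclasses import dataclass

@dataclass
class Song:
    title: str
    tags: set  # mood/genre tags produced upstream by a tagging model

@dataclass
class Brand:
    name: str
    fingerprint: set  # desired attributes, e.g. from an LLM brand profile

def match_score(song, brand):
    """Jaccard overlap between a song's tags and a brand's fingerprint."""
    if not song.tags or not brand.fingerprint:
        return 0.0
    return len(song.tags & brand.fingerprint) / len(song.tags | brand.fingerprint)

def recommend(songs, brand, top_n=3):
    """Rank the catalogue for one brand by descending match score."""
    return sorted(songs, key=lambda s: match_score(s, brand), reverse=True)[:top_n]
```

For example, a brand fingerprinted as {happy, playful, pop} would rank an upbeat pop track above an acoustic ballad, which is the behaviour the talk attributes to the matching layer.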


The ceremony segment saw Tarunima announce the unveiling of the Tinkerpreneur Compendium, inviting dignitaries and the three young champions to the stage [131-138]. Deepak Bagla and Sarah Kemp jointly felicitated the awardees, acknowledging Intel’s support in training, mentoring, and selecting the top 50 AI tinkerpreneurs from roughly 3 500 applicants [242-250][131-138]. Shubham Tribedi coordinated the certificate distribution, calling students and mentors from numerous schools (e.g., DAV Centenary, Infant Jesus, Vidyashil, Radiant International, KVIISC, Silver Oaks) to the front for a group photograph [255-280].


Overall, the summit celebrated AIM’s decade of fostering grassroots innovation, showcased youth-led AI solutions across mental health, accessibility, medical diagnostics, and music licensing, and outlined a clear pathway, from rapid validation to scaling, supported by government, corporate (Intel), and mentorship partners, reinforcing AI’s potential as a “delta multiplier” for India’s socio-economic development [76-80][155-170][242-250].


Session transcript
Complete transcript of the session
Tarunima Prabhakar

For our next segment, I would like to call upon three very special young innovation champions to the stage to share their experience. We have with us Shreenidhi Baliga, Jaiwardhan and Adhiraj. Please come on the stage and share your journey. Thank you.

Adhiraj Chauhan

Hello, my name is Adhiraj Chauhan, and I’m a high school student of 11th grade. And I’m the founder and CEO of Delta AI Revolution, Delta standing for change. The reason my company is called Delta AI Revolution is because I’m a very, very passionate entrepreneur who believes in the intersection of solving societal issues with modern-day technology. So I’d firstly like to extend my heartiest thanks to the Atal Innovation Mission. It is in their Atal Tinkering Lab that I started my project and created my first MVP. Also to Intel for providing support and important mentorship, and to my very own school, who’ve provided support and been there with me every step. So my journey started when I realized that amongst the youth in our country, mental health is an epidemic.

And despite a lot of efforts, because of a large population, the ratio of psychiatrists to people is one psychiatrist for 100,000 people. So my startup is a mental health support platform. It is an AI-driven platform trained in different therapy techniques, ready to cater to more than 100 disorders. We provide our platform to different psychiatrist firms such as Dr. Mora Psychiatric Clinic. And we are also in talks with the Delhi Psychiatrist Association. We provide our platform to them, which they can provide to their clients. We’ve touched over almost 20 clients right now. We’re shifting to a B2C model. I’d also like to thank the Ministry of Electronics and IT, who has provided me funding. And again, I’d like to thank the Atal Innovation Mission and Intel for providing me this opportunity as a young innovation leader and a young entrepreneur.

Thank you so much.

Tarunima Prabhakar

Thank you. Shreenidhi please come on stage and share your experience.

Shreenidhi Baliga

Hello everyone, myself Shreenidhi from BG’s National Public School, Bangalore. I’m very grateful to everyone who’s been part of organizing the summit for giving us this wonderful opportunity of being here and presenting our project. It gives us confidence to build something new, and it gives us confidence that people believe in the youth today, and that innovation doesn’t depend on age, it depends on intent. So my project is basically Charades, named after a game which most of us might be knowing, dumb charades, where the players are supposed to explain a movie or a song name without using speech, only hands. I decided to name my project Charades because the game is similar to what we try to help with.

It is a glove that converts sign language to speech and speech to Braille, trying to help the deaf-blind community. Right now we have developed our models over thousands of images using machine learning and deep learning, and all of this was possible only because of the boot camps from the Tinkerpreneur Challenge, the mentorship programs from Atal Tinkerpreneur and Intel, NITI Aayog, and the mentoring sessions held by the summit organizers, and we’re really thankful to everyone who has been part of this summit. Yeah, that is everything I would like to say right now. Thank you.

Tarunima Prabhakar

We have our next innovator, and I don’t want to introduce him, he’ll introduce himself. His journey is very surprising, so let me call him on stage.

Jaiwardhan Tyagi

Thank you, ma’am, and hello everyone. I am Jaywardhan Tyagi. So, I just recently appeared on Shark Tank India, where I secured funding from Sir Aman Gupta, founder of boAt Lifestyle, and a founder fellowship from Sir Ritesh Agarwal, who is the founder of OYO Rooms. So yeah, to start with, let me describe myself on a broader spectrum: I am an engineer, I’m a student, and I am a reader. So, broadly, AI in healthcare has evolved structurally over the recent decade. Like, if I had to describe radiology AI in 2016, it would be like a metal detector at an airport. But today it’s like a full airport security system with a CT scanner, with behavioral analytics and security cameras and, you know, all.

So, we have seen amazing benchmarks, especially from University of Florida recently, this year and at the previous year’s end. And we have seen great progress in medical vision-language models. But the question that matters isn’t how well these models perform on these curated benchmarks. It is: will they maintain this performance when a distribution shift is introduced? So, a distribution shift is like some edge cases which are not so substantial. Like, if we talk about radiology, an input from a newly installed MRI with a different contrast can be considered a distribution shift. Actually, vision-language models today are very poor at handling those distribution shifts; they hallucinate a lot. So basically, the problem isn’t the architecture itself but the thinking that a single model has the power to understand every part of every dynamic of human health, which is of course not possible. This is less a technical necessity and more like, you know, an obsession with scaling. So basically, what we have derived, it’s almost like taking a transcription model which doesn’t take audio as an input but takes video frames and just tries to determine what the person is saying from those videos. It’s possible, but inefficient. So what’s the solution there? The solution is a system or a framework that reasons across modalities and refers to previous conclusions, contradicts them, and finally describes them all in an understandable manner rather than a clinical report.

So, yeah, it turns out I’m working on the same thing. So, before I describe the Neuropex EIS technology as it appeared on Shark Tank, it’s good to first clarify what it is not. It’s not a classifier for, you know, narrow disease-prediction tasks. It’s not a standalone VLM with a reporting layer attached, and it’s not an orchestration on top of a GPT. So now let’s discuss what it really is. We have two pipelines: one for radiology, and one for dermatology. The radiology pipeline has DINO plus CLIP plus retrieval-augmented vision-language models, which are able to understand multiple sequences of MRIs, can read X-rays as well, and can describe them in real time using clinical language.

It’s still in active development when it comes to structuring those findings, but yeah, it’s still in the game. So the older radiology pipeline, from around the Shark Tank time, was a segmentation model which took in 3D MRI files and segmented the three tissues in the brain: CSF, gray matter, and white matter. What happens is, when you have those tissue segmentations and those proportions, you can actually assess risk for a wide variety of neurological disorders. That was all of the radiology pipeline. I planned to actually show the demo as well, but we have a time constraint. Yeah. So, let’s talk about DeepDom then. DeepDom has a visual-language model that’s trained on dermoscopy, clinical, and histopathology datasets. So you first describe your problem vocally, then you answer a clarification question, and then it just generates a report. And it’s live out there, you can just sign up on the Neuropexia site.

So let’s wrap up. It seems no less than a mission, and this mission aligns with India’s goals of leveraging technology for outcome-driven impact. And yeah, it turns out we’ll be working on it. So yeah, thank you.

Tarunima Prabhakar

Thank you so much. I would now like to invite our Mission Director, Atal Innovation Mission, to have a few words and address the audience.

Deepak Bagla

Thank you. Thank you. Thanks, Tarunima. Such a pleasure seeing you all here, so many partners. You know, it’s amazing. Were you guys listening to what they were saying, these kids? It’s unbelievable. You know, I just finished a session, this was on the future of work, and as I was coming, there were four of us, and I was telling them what the biggest challenge for us will be. First, I asked them to raise hands on how many people have been laid off. There was only one, and I told them, he’s the only person ready for the next 10 years. It’s very important. And you know, the problem we are trying to solve on mental health, that is going to be the biggest challenge going forward. The disruption is so immense that the ability to re-skill and re-do ourselves is going to be so high, and it’s going to be generational. So I think people who have just gone into the workforce, at least for the next 10-odd years, are the ones who are going to face the brunt of it.

And that’s where things like this are going to be critical. But what I was saying there is, and which is going to happen here: in the next 96 hours, Sarah, you and I will celebrate our 10th year of the journey. But more importantly, we will also celebrate the 10th birthday of the Atal Innovation Mission. And just imagine, it is a 10-year-old which is today the world’s largest grassroots innovation mission. It’s unbelievable. And this is where you’re seeing what is happening. See the results. These are the ones which are now just going to take on that new India. And that is what I was saying there, that the big challenge is not going to be creating jobs, because just now, we are looking for 1 million jobs a month, right?

So far. Now we will have 12- and 13- and 14-year-olds ready to take on tasks. We are fossilized completely. And the point here remains, that is where I make two points. The biggest delta multiplier of AI, the benefactor of this, is India. The biggest benefactor of AI as a delta multiplier is India. I’ll tell you why. 1.4 billion will be 1.6 by 2060. 1.6 billion people completely empowered, and starting from a low income to shoot up to be one of the biggest economies of the planet. You see the delta? We finally have a delta. We have a tool which is going to make that happen. The thing is, for some of us, ma’am, we might actually see it happen in our own lifetime.

It is going to be so fast. It is so rapid. And the biggest benefit there comes from two things about India, which are our biggest strengths. Think about it. The first is the ability to work in an unstructured environment without a playbook. You showed it. Take the worst example in human history, the biggest calamity, which was COVID, the pandemic. There was no playbook. You did not know what to do with it. You emerged as the strongest economy within COVID. You did it. It was unstructured. No one in the world had a playbook. The biggest strength of all of you, and we look up to you as the future which you are, and you’re going to be creating

the superpower of the world. The biggest strength of India is getting a job done regardless of the resources available. Ask an Indian, he will get the job done. And Sarah, that is the strength of this Jagannath. Guys, today it is about you. Really fantastic. And you know, we are so lucky we have our old partners with us, who started with us right away. Thank you, ma’am. Thank you, right from the beginning. You, Sarah. Thank you for walking this journey with us. It’s a long way to go. We have a lot to do. And we are all with you and behind you. And actually looking forward and looking up to all of you.

So thank you for making us proud. Very well done. And your presentations? Remarkable. Thank you. Thank you very much.

Tarunima Prabhakar

Thank you so much, sir. Now we have with us a very special guest, Mrs. Sarah Kemp. She is the Vice President International Government Affairs, Intel. I would request,

Sarah Kemp

Good afternoon. It’s not very often in your life that you get an opportunity to make such a difference. And so I want to start by saying thank you, because this journey of 10 years has been life -changing to all of us. And I want to start also by asking all of our technologists, to start… our future technologists, to stand up so that we can properly thank you. So all of the future technologists in the audience, I see you all with your – Stand up. Thank you. You are inspirational, and you are what gives me hope for the future. When I read the headlines and I get a little pressed, I take out my Changemaker brochure that has all of your projects in it, and I think, wow, there is hope for the future.

And I would also echo, I think India’s superpower is absolutely its people, and it is what’s going to make a difference. And I also want to say that I am so grateful to the Indian government for their support. For how they have – teed up and how they are framing AI. At this summit, not only is this summit making history because it is the first summit in the global south and it’s going to lead the global south and India is going to lead that, but what I’m really excited about is the heart and the human that India has put at the center of AI and making sure that the AI is to help people first and foremost.

And so to our future technologists, we put on you a great responsibility, because with great talent comes great responsibility. You are looking at the future, and you are looking at leading us forward. You have the ability to make the society you want, to make us a better version of ourselves, by using AI for good. And I just want to say I am very excited, because I have great ambitions for all of you. But with that, I do want to just thank all the partners and look forward to another 10 years. And before we know it, we’ll be there. And I just want to say again, thank you. On behalf of Intel, it has been an incredible honor to be able to be a small player in this.

So, thank you.

Tarunima Prabhakar

Thank you so much, ma’am. That was really inspiring. I would also like to mention that, you know, we have the top 50 students present, you know, AI tinkerpreneurs present with us. And they were shortlisted by Intel and the Atal Innovation Mission through rigorous evaluation, and they were trained, and, you know, mentor sessions were done. So, I would request the dignitaries on the stage to unveil the Tinkerpreneur Compendium. Ma’am, sir, can you please unveil the Tinkerpreneur Compendium? Yes. Can we also have the three young innovation champions, Jaywardhan, Shreenidhi, Adhiraj, come up? Can we also have Hufeza Salim? Yes. On the count of three, you can open the ribbon. Three! I see very less energy, you know. Thank you.

Thank you so much. We actually have… So, you know, as our Mission Director just said, this is the 10th year of the Atal Innovation Mission. I mean, everyone here should be very excited about it, because something that you’re seeing right now is being seen only by you. Nobody else has witnessed the logo of 10 years of the Atal Innovation Mission. What they are holding in their hands is the logo of 10 years of the Atal Innovation Mission. Can we have a huge round of applause from the crowd? We are also going to play a video. Thank you so much, sir, for joining us. Okay, let’s move on to our next session.

We have a very special address by Mr. Ojaswi Babbar. Can you please come on stage and

Ojaswi Babbar

identify whether each one of the AI innovations which are happening all across are actually worth backing or not. Otherwise, it’s all noise, all hype, and we try to stay distant from them as such. But having said that, this is the framework for our evaluation. What exactly do we do? Once somebody passes on this, off with this captive network framework, how exactly do we help? How exactly does one incubate, accelerate, and invest? And what kind of value addition do we bring in while we have spoken about them bringing in that kind of value? The first one is rapid validation, if you move on to the next slide. The incubator, the accelerator, and as an investor, we help in rapid validation of these ideas.

The earlier slide, though, but we can probably lock on to this slide as well. So we help you stress-test that particular feasibility. We help you stress-test whether your particular solution would actually work in the real world or not, by bringing in the right corporate client, by bringing in the right pilot partners as such, and making sure that the rap… So at the incubator and at the accelerator, we have a philosophy. We say we need to fail fast, but we need to fail forward. We need to learn quickly, iterate quickly, and move fast. The second one is, of course, the controlled pilots that we bring in through our corporate partners. We have a corporate adoption program which we utilize, wherein a lot of corporate partners plug into the incubator to give in problem statements which are solved by different entrepreneurs at each one of the different levels.

So that is one of the other programs that we have. Post that, there’s a litmus test that we do, and we help out by making sure that there is the right revenue model associated with each one of the startups that actually present and that are actually incubated as such. And here, in terms of these revenue models, we help them optimize the inference cost. I think we’re short of time, so the essence is to ensure that there’s enough revenue coming in, the revenue model is right and tight, and that it can move forward and get to a global-scale level as such. And of course, the last one being making sure that once you’re growing, you would be in need of capital, and that capital comes in with the right partners, the right strategic investors, and the other stakeholders as such.

Stakeholders like the Atal Innovation Mission and Intel would probably play a very important role when you’re scaling up from 0 to 11. So moving forward, that’s the last slide that we have; that is actually the gist of our entire AI thesis as such. We believe that any AI innovations which would actually thrive in the Indian ecosystem would have domain depth, they would have the right proprietary data which they would utilize to create a moat, a barrier to entry as such, giving multiple and relevant returns as such. And of course, having infrastructure railroads like the ones we have in the country, if they can utilize the distribution access that we have, I think we have a winning equation right in our hands for all AI innovations all across.

In the interest of time I’ll just stop up there. Thank you.

Tarunima Prabhakar

Thank you so much, sir. I would now request Ms. Sarah to please felicitate Mr. Ojaswi. Can you please come on stage, sir? Can we have a round of applause? Can we also have Adhiraj and Jaywardhan come on stage? We would like to honor you with something for being such good innovation champions. Sarah ma’am, if you could do the honors. Thank you so much. We now have our next speaker. He is the co-founder and CEO of Hooper AI, Mr. Gaurav Dagaonkar. Can we have a huge round

Gaurav Dagaonkar

Since I know we’re pressed for time, I’ll get going right away. I must say I was extremely happy today to come here to get a chance to talk about Hooper. But I think what’s made me really happy is sitting right in between Jaiwardhan and Shreenidhi. I don’t think I’ve felt this energized in a long, long time. Since we are a music technology company, let’s do this a little differently. How many of you recognize this tune? You get it, right? Thank you. Had to. This song released more than 50 years ago, composed by R.D. Burman, written by Anand Bakshi, sung by the great Kishore Kumar. In 2016, and the reason I had to bring this up is, I happened to make a cover version of this song that became really popular.

A few years later, a mint brand launched in India using this cover version as their audio campaign. And as I checked last week, over 100 startups have used this in the last three months to promote their product or their brand. Now the question is, have they got a license? Did Anand Bakshi get paid? Did R.D. Burman get paid? A little selfishly, did I get paid? A lot of youngsters here who will make covers or who will make originals in the future need to ask this question. And that’s what we do. I’m Gaurav Dagaonkar. I’m the co-founder and CEO of Hooper. And I’ve made my passion my profession. I graduated from IIM Ahmedabad and became a music director.

So for a long time, I made music for films. I’ve had the fortune of having folks like Arijit Singh, Sonu Nigam, Shreya Ghoshal sing my songs. But after 10 years in the music industry, what I felt was, one, India loves its music. Whether it’s our films, whether it’s TV, whether it’s ads, the 6 million reels we consume daily, they all run on music. And yet, when it comes to music rights and music licensing, there seems to be no knowledge. It’s an opaque space. I bet if there’s any entrepreneur in this room who is using a Bollywood song, do you know how many licenses you need? The better question is, did you even know you needed a license in order to use it, right?

And that’s what we’re solving. Before we built Hooper, India did not have a single platform, even one, that could actually license music. It gives me great pride to say that Hooper is India’s first native, homegrown music-licensing platform. And of course, we are a part of the Atal Innovation Mission ecosystem, so that makes me extremely happy. In a nutshell, we are a marketplace where, on one side, the largest labels and the largest artists come and list their songs. So you have folks like Yash Raj Films, Universal Music, even people like A.R. Rahman, and next week we’ll get Hanumankind listing their songs. And on the other side, it’s basically brands who come and license the music.

Over the last couple of years, we now have over 3 lakh of India’s biggest influencers and 220 brands that are licensing music from us. And it works in a very, very simple manner where the song gets uploaded on the platform, a brand discovers it, licenses it, and the royalty or the revenue goes to the artist. Beneath all of this is our AI infrastructure layer. And it works in a really cool manner. First, when a song comes in, we process that raw audio. We use a multimodal AI there to create different tags such as mood. Is this a song? Is the song a happy song, a sad song? Will it go for a fashion brand or a sports brand?

We also use LLMs to understand brands and try to create some kind of a fingerprint for every brand. And then we try and match the two. What music would work, say, for Baskin-Robbins? What music would work for Dairy Day? What music would work for Myntra? And so on. That’s essentially what we have done, and now it gets exciting, because we’ve legally licensed music from authors and composers, so we can build on top of it. What if, say, for the Mahindra Thar, I want to create a hip-hop remix, and I want to do it legally and ethically so that the artist gets paid? I think that is where I would love to invite many of you who have music as a passion and would want to build on top of the Hooper stack. That’s a bit on our AI layer. I love my job because on one side we’ve got the largest creators using the platform, be it folks like Ranveer Brar, Ashish Vidyarthi, Sadhguru, and the Chief Minister of Maharashtra, Mr.

Devendra Fadnavis, whose YouTube channel uses Hooper, and I hope that this year we also get a chance to soundtrack our Honourable Prime Minister’s social media content and videos. Apart from that, we also have large brands like Himalaya, Myntra and Marico, as well as startups, that use the platform. I’ll just take half a minute to play you a short audio-visual that will give you a glimpse of what Hooper has done in the Indian soundtracking ecosystem. [An audio-visual was played.] Thank you, and thank you to the AIM ecosystem for trying to ensure that India tells better stories, and tells them legally, ethically and responsibly.
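The pipeline Gaurav describes (multimodal tags for each song, an LLM-derived "fingerprint" of tags for each brand, then a match between the two) can be sketched as a simple tag-overlap ranking. This is an illustrative sketch only, not Hooper's actual implementation: the tags, the brand fingerprint, and the Jaccard scoring below are all invented for the example.

```python
# Illustrative song-to-brand matching via tag overlap (hypothetical, not Hooper's code).
# Upstream, a tagging model would assign mood/genre tags to each song, and an LLM
# would distil each brand into a set of preferred tags; matching then ranks songs
# by how much their tags overlap with the brand's fingerprint.

def jaccard(a: set, b: set) -> float:
    """Overlap between two tag sets: 0.0 (disjoint) to 1.0 (identical)."""
    return len(a & b) / len(a | b) if a | b else 0.0

def best_matches(songs: dict, brand_tags: set, top_k: int = 2) -> list:
    """Rank songs by tag overlap with a brand fingerprint, dropping zero-overlap songs."""
    scored = [(jaccard(tags, brand_tags), title) for title, tags in songs.items()]
    return [title for score, title in sorted(scored, reverse=True)[:top_k] if score > 0]

songs = {
    "Song A": {"happy", "upbeat", "pop"},
    "Song B": {"sad", "acoustic"},
    "Song C": {"upbeat", "sports", "electronic"},
}
sports_brand = {"upbeat", "sports", "energetic"}  # hypothetical brand fingerprint
print(best_matches(songs, sports_brand))  # → ['Song C', 'Song A']
```

In practice such a system would more likely compare learned audio and text embeddings (e.g. cosine similarity) rather than hard tag sets, but the ranking structure is the same.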

Thank you so much.

Tarunima Prabhakar

Thank you so much, sir. We would like to felicitate you, if Saramam could do the honours again. He played some music; at least we can clap. Today we have with us the top 50 AI Tinkerpreneurs. These are the people who got selected from about 3,500 applications, and they are here today from every corner of the country, representing at the AI Impact Summit. Before I call them to the stage to give them certificates, I would like to request Ms. Dipali Upadhyaya, our program lead, Ms. Sufeza Salim, and Mr. Sumit, our admin and finance head, to please felicitate Ms. Sarah Kemp, who is a huge partner. Intel has been supporting us in the training, mentoring, and selection of these top Tinkerpreneurs.

Thank you so much, ma’am. Thank you. Everybody, please give a cheer. These are our star students, and Shubham is here to felicitate them.

Shubham Tribedi

Yeah, so do we have DAV Centenary Schools? Please come forward, and then from Infant Jesus School and ML Khanna: the mentors and teachers as well as the students. Come forward please for a quick photograph. Just hold your certificates and take a picture. Come in, come in. Then we have Vidyashil Pagadmi, Radiant International School, Lakeford School and KVIISC. Please come forward quickly. Silver Oaks and JSS Matriculation, you can also come forward please.

Join them, please. Go ahead. The next lot can come: Somalwar School, Father Agnel and Morarji Desai, please come forward quickly. We can move to the next lot: Morarji Desai and Silver Oak, please come forward. After this, all the schools who are left can come forward, the students as well as the mentors. That will be the last camera shot for the day, and there is a session scheduled after this, so whoever is left, please come forward.

The students and the mentors, please settle down quickly. The last lot is here. Thank you, ma’am. Thank you, everyone.

Related Resources
Knowledge base sources related to the discussion topics (36)
Factual Notes
Claims verified against the Diplo knowledge base (3)
Confirmed (high)

“Tarunima Prabhakar served as the host/moderator of the AI Impact Summit opening session.”

The knowledge base lists Tarunima Prabhakar as the event moderator/host, confirming her role in the opening session [S1].

Confirmed (high)

“India’s mental‑health system has a severe psychiatrist shortage of roughly one psychiatrist per 100 000 people.”

The source explicitly states the ratio of psychiatrists to the population is one per 100 000, matching the claim [S10].

Confirmed (medium)

“The Delta AI platform is an AI‑driven system that delivers therapy techniques for more than 100 mental‑health disorders.”

The knowledge base describes the platform as AI-driven and ready to cater to up to more than 100 disorders, confirming the claim [S10].

External Sources (104)
S1
AI Innovation in India — -Shubham Tribedi- Role: Event coordinator for certificate distribution
S2
The reality of science fiction: Behind the scenes of race and technology — ‘Every desire is an end and every end is a desire, then the end of the world is a desire of the world. What type of end do you de…
S3
AI Innovation in India — -Deepak Bagla- Role: Mission Director; Title: Atal Innovation Mission We have a very special address by Mr. Ojasthi Bab…
S4
AI Innovation in India — -Sarah Kemp- Role: Vice President International Government Affairs; Title: Intel Kemp’s emphasis on India’s “superpower…
S5
AI Innovation in India — -Tarunima Prabhakar- Role: Event moderator/host
S6
Driving Social Good with AI_ Evaluation and Open Source at Scale — -Tarunima Prabhakar: Works at TATL (organization that has been looking at online harms for over six years), focuses on b…
S7
AI Innovation in India — – Adhiraj Chauhan- Shreenidhi Baliga- Jaiwardhan Tyagi- Deepak Bagla- Sarah Kemp – Adhiraj Chauhan- Shreenidhi Baliga- …
S8
AI Innovation in India — – Adhiraj Chauhan- Shreenidhi Baliga- Jaiwardhan Tyagi
S9
AI Innovation in India — -Deepak Bagla- Role: Mission Director; Title: Atal Innovation Mission A few years later, a mint brand launched in India…
S10
https://dig.watch/event/india-ai-impact-summit-2026/ai-innovation-in-india — A few years later, a mint brand launched in India using this cover version as their audio campaign. And as I checked las…
S11
AI Innovation in India — -Deepak Bagla- Role: Mission Director; Title: Atal Innovation Mission The celebration of the Atal Innovation Mission’s …
S12
From India to the Global South_ Advancing Social Impact with AI — -Deepak Bagla- Mission Director for Atal Innovation Mission
S13
https://dig.watch/event/india-ai-impact-summit-2026/ai-innovation-in-india — And despite a lot of efforts because of a large population, the ratio of psychiatrists to people is one psychiatrist for…
S14
AI Innovation in India — Hello, my name is Adhiraj Chauhan. And I’m a high school student of 11th grade. And I’m the founder and CEO of Delta AI …
S15
India to boost innovation and digital services — India haslaunchedseveral transformative initiatives to strengthen its digital infrastructure and innovation ecosystem, f…
S16
Promoting age-friendly digital technologies collaboration and innovation for an inclusive information society — Wei Su:Okay, I will share the screen. So, let me, oh, sorry, wait a minute. Sorry, wait a minute, I need to close up. Ok…
S17
Barriers to Inclusion: Strategies for People with disability | IGF 2023 — Difficulties faced by people with disabilities that are not easily visible Involving people with disabilities in proble…
S18
https://dig.watch/event/india-ai-impact-summit-2026/leaders-plenary-global-vision-for-ai-impact-and-governance-morning-session-part-1 — Thank you for inviting me to this important summit. It is an honor to be here in India at this pivotal moment for global…
S19
AI-assisted diagnostics expand across Europe — AI-powered diagnostics arebeing implemented across Europe, with France, Portugal, Hungary, Sweden and the Netherlands le…
S20
Leaders’ Plenary | Global Vision for AI Impact and Governance- Afternoon Session — And we also want to make sure that AI can be safe and secure for the use by every citizen in India and beyond. So it’s a…
S21
The Role of Government and Innovators in Citizen-Centric AI — The discussion maintained an optimistic and collaborative tone throughout, with speakers expressing enthusiasm about AI’…
S22
AI music faces legal challenges — AI-generated musicfacesstrong opposition from musicians and major record labels over concerns about copyright infringeme…
S23
Open Forum #26 High-level review of AI governance from Inter-governmental P — 4. Youth: Should be involved in policy-making and allowed to innovate while addressing potential risks. Leydon Shantsek…
S24
YouthLead: Inclusive digital future for all — Clara Brown:Thank you so much. So, I’d first like to start by saying that my motivation to become a voice for youth in t…
S25
Youth-Driven Tech: Empowering Next-Gen Innovators | IGF 2023 WS #417 — Atanas Pahizire:Please, let’s begin with Adenis. Thank you, Denise. The youth is ready to participate. The youth is read…
S26
Empowering Inclusive and Sustainable Trade in Asia-Pacific: Perspectives on the WTO E-commerce Moratorium — The Goods and Services Tax (GST) has played a significant role in normalizing tax components such as sales tax and value…
S27
Powering AI _ Global Leaders Session _ AI Impact Summit India Part 2 — -Policy and Regulatory Framework Challenges: Speakers identified the need for better coordination between central and st…
S28
Driving Indias AI Future Growth Innovation and Impact — The discussion maintained an optimistic and forward-looking tone throughout, characterized by enthusiasm for India’s AI …
S29
Transforming Rural Governance Through AI: India’s Journey Towards Inclusive Digital Democracy — The discussion concluded with optimistic assessments of AI’s potential to strengthen participatory governance. Both spea…
S30
For the record: AI, creativity, and the future of music — ## Streaming Platform Realities Michael Nash: All right, brother. Good evening. And you know it’s been a long day at a …
S31
Leaders TalkX: When policy meets progress: paving the way for a fit for future digital world — The conversation reinforced that effective digital regulation requires balanced leadership anchored in trust, inclusion,…
S32
De-briefing and Next steps — There’s a process in place for issuing certificates from the workshop.
S33
Scaling Innovation Building a Robust AI Startup Ecosystem — And before I conclude, I sincerely appreciate my organizing team and every colleague who worked diligently behind the sc…
S34
AI for Good Impact Initiative — Desire to ensure a positive future for younger generations through technology Equipping young people with the resources…
S35
AI for Good Impact Awards — All speakers emphasize the importance of engaging youth in technology development and innovation, recognizing their pote…
S36
Prosperity Through Data Infrastructure — The importance of innovation in driving technological advancements is highlighted. The potential of man-machine symbiosi…
S37
Youth-Driven Tech: Empowering Next-Gen Innovators | IGF 2023 WS #417 — In summary, the discussion underscores the importance of empowering youth and fostering innovation. This includes digita…
S38
Keynote by Sangita Reddy Joint Managing Director Apollo Hospitals India AI Impact Summit — “We’re getting MDSAP approval on almost 19 of them, FDA approval for nine, and we’re looking for partnership to build be…
S39
AI Innovation in India — Ojaswi Babbar outlined a comprehensive investment framework for AI startups, emphasising domain depth, proprietary data …
S40
How AI Drives Innovation and Economic Growth — And I’m seeing two patterns. One is about trust in technology, and the second part is about the reality of the policy wo…
S41
Leaders’ Plenary | Global Vision for AI Impact and Governance- Afternoon Session — “Our program, AlphaFold, that solved the 50‑year grand challenge of protein folding, I think is just the first example o…
S42
How AI Drives Innovation and Economic Growth — Kremer argues that while there are forces that may widen gaps, AI has significant potential to narrow development dispar…
S43
Comprehensive Report: China’s AI Plus Economy Initiative – A Strategic Discussion on Artificial Intelligence Development and Implementation — We’re now at a pivotal moment. Artificial intelligence is rapidly transitioning from a technological frontier to a core …
S44
Bridging the AI innovation gap — The speaker stressed that all stakeholders—government, industry, academia, and civil society—have important roles in sha…
S45
AI/Gen AI for the Global Goals — Henry Kipponen: Well, what I see is like, I look at it from the perspective of innovation. It’s something that’s like…
S46
How nonprofits are using AI-based innovations to scale their impact — It’s, I think it’s somewhere between the pilot and the rollout. So we, around 15 teachers I think have had 57 or 75, 57 …
S47
Understanding the language of modern AI — Always ask for sources and verify them independently through reliable databases or official websites. Request that the A…
S48
Beyond answers: How AI is redefining web communication for International Geneva — Imagine a user asking an AI chatbot:’How should my country regulate AI?’The chatbot might provide a confident, neatly ph…
S49
Part 2.5: AI reinforcement learning vs human governance — AI agents operate differently from humans, particularly as they do not haveinherent natural boundaries, such as common s…
S50
Digital Health at the crossroads of human rights, AI governance, and e-trade (SouthCentre) — Addressing these challenges, the need for a rights-based national policy was stressed. This policy would ensure the prot…
S51
Advancing Scientific AI with Safety Ethics and Responsibility — Thank you, Shyam. I think this is a very important question. And it’s also a topic that I’m really passionate about as w…
S52
Searching for Standards: The Global Competition to Govern AI | IGF 2023 — Courtney Radsch:Yeah, I think one of the problems, to definitely agree with Milton on the risk-based approach, you just …
S53
INTERNET — The period from 2016 to 2025 was not simply one of rapid technological change; it was the era in which the global digita…
S54
Open Forum: A Primer on AI — Another concern is the potential impact of AI on the job market. As AI capabilities advance, certain professions may bec…
S55
Press Conference: Closing the AI Access Gap — Adopting AI and other emerging technologies can also provide advantages to developing countries. By embracing these tech…
S56
Inclusive AI For A Better World, Through Cross-Cultural And Multi-Generational Dialogue — Key to this trajectory are collaborative and inclusive policy governance, culturally attuned ethical frameworks, and bro…
S57
Open Forum #67 Open-source AI as a Catalyst for Africa’s Digital Economy — All speakers emphasize the importance of using AI to solve real-world problems in sectors like healthcare, agriculture, …
S58
MedTech and AI Innovations in Public Health Systems — The discussion maintained a collaborative and constructive tone throughout, with participants openly sharing both succes…
S59
The Foundation of AI Democratizing Compute Data Infrastructure — Given the volume of funds available, I would focus a lot more on capability development of people to be able, their abil…
S60
Do we really need frontier AI for everyday work? — Default to smaller, specialised modelsfor routine tasks, especially where privacy, latency, and cost matter. Use fronti…
S61
AI Innovation in India — So thank you for making us proud. Very well done. And your presentation? remarkable. Thank you. Thank you very much. Th…
S62
YouthLead: Inclusive digital future for all — Clara Brown:Thank you so much. So, I’d first like to start by saying that my motivation to become a voice for youth in t…
S63
AI for Good Impact Awards — ## Robotics for Good Youth Challenge Bilel Jamoussi: Thank you, LJ, and good afternoon. It’s really my honor to present…
S64
Open Forum #26 High-level review of AI governance from Inter-governmental P — Leydon Shantseko: The first one is not to be used in most of the conversation, especially when it comes to governance. …
S65
Youth-Driven Tech: Empowering Next-Gen Innovators | IGF 2023 WS #417 — Atanas Pahizire:Please, let’s begin with Adenis. Thank you, Denise. The youth is ready to participate. The youth is read…
S66
From India to the Global South_ Advancing Social Impact with AI — Atal Innovation Mission’s grassroots approach has produced 1.1 crore young entrepreneurs through 10,000 tinkering labs, …
S67
Science AI &amp; Innovation_ India–Japan Collaboration Showcase — Himanshu from Atal Innovation Mission highlighted the significant disparity between different regions of India in terms …
S68
Driving Indias AI Future Growth Innovation and Impact — The discussion maintained an optimistic and forward-looking tone throughout, characterized by enthusiasm for India’s AI …
S69
Harnessing Collective AI for India’s Social and Economic Development — <strong>Moderator:</strong> sci -fi movies that we grew up watching and what it primarily also reminds me of is in speci…
S70
Transforming Rural Governance Through AI: India’s Journey Towards Inclusive Digital Democracy — The discussion concluded with optimistic assessments of AI’s potential to strengthen participatory governance. Both spea…
S71
A licensed AI music platform emerges from UMG and Udio — UMG and Udio havestruck an industry-first dealto license AI music, settle litigation, and launch a 2026 platform that bl…
S72
For the record: AI, creativity, and the future of music — ## Streaming Platform Realities ## Universal Music Group’s Strategic Approach Michael Nash: All right, brother. Good e…
S73
Meta launches AudioCraft: a suite of generative AI models for audio and music creation — Meta recently launched a new AI tool that transforms the landscape of audio and music production.AudioCraft comprises a …
S74
WAIGF Opening Ceremony &amp; Keynote — The session concluded with an announcement for a group photograph and lunch break.
S75
Leaders TalkX: When policy meets progress: paving the way for a fit for future digital world — The conversation reinforced that effective digital regulation requires balanced leadership anchored in trust, inclusion,…
S76
De-briefing and Next steps — There’s a process in place for issuing certificates from the workshop.
S77
Scaling Innovation Building a Robust AI Startup Ecosystem — And before I conclude, I sincerely appreciate my organizing team and every colleague who worked diligently behind the sc…
S78
Opening Remarks (50th IFDT) — The overall tone was formal yet warm and celebratory. Speakers expressed pride in the IFDT’s accomplishments and gratitu…
S80
https://dig.watch/event/india-ai-impact-summit-2026/ai-innovation-in-india — Thank you so much, ma ‘am. That was really inspiring. I would also like to mention that, you know, we have top 50 studen…
S81
Closing remarks — Doreen Bogdan Martin: Thank you. Thank you, LJ. And you see I’m wearing the t-shirt because it’s Friday. It’s Friday eve…
S82
World in Numbers: Jobs and Tasks / DAVOS 2025 — The overall tone was informative and analytical, with the speakers presenting data and insights in a professional manner…
S83
WS #283 Breaking the Internet Monopoly through Interoperability — The tone was primarily informative and analytical, with the speaker presenting research and concepts in an academic styl…
S84
Cooperation in a Divided World / DAVOS 2025 — The tone was primarily informative and analytical, with speakers presenting data and insights in a professional manner. …
S85
Session — The tone was primarily analytical and forward-looking, with the speaker presenting evidence-based predictions while ackn…
S86
WS #198 Advancing IoT Security, Quantum Encryption &amp; RPKI — The tone was primarily informative and forward-looking, with speakers providing technical explanations as well as policy…
S87
The Global Power Shift India’s Rise in AI &amp; Semiconductors — The discussion maintained an optimistic and forward-looking tone throughout, with speakers expressing confidence in Indi…
S88
AI 2.0 The Future of Learning in India — The tone was consistently optimistic and forward-looking throughout the conversation. Speakers maintained an enthusiasti…
S89
Sovereign AI for India – Building Indigenous Capabilities for National and Global Impact — The discussion maintained an optimistic and collaborative tone throughout, characterized by constructive problem-solving…
S90
WS #302 Upgrading Digital Governance at the Local Level — The discussion maintained a consistently professional and collaborative tone throughout. It began with formal introducti…
S91
Advancing Scientific AI with Safety Ethics and Responsibility — The discussion maintained a collaborative and constructive tone throughout, characterized by technical expertise and pol…
S92
AI and Data Driving India’s Energy Transformation for Climate Solutions — The tone was collaborative and solution-oriented throughout, with speakers building on each other’s insights rather than…
S93
Scaling AI Beyond Pilots: A World Economic Forum Panel Discussion — The tone was consistently optimistic and pragmatic throughout. The panelists shared concrete examples and measurable res…
S94
Open Mic &amp; Closing Ceremony — The overall tone was formal yet appreciative. There was a sense of accomplishment and gratitude expressed throughout, wi…
S95
WSIS Prizes 2025 Winner’s Ceremony — The tone throughout the ceremony was consistently celebratory, formal, and appreciative. It maintained a positive and co…
S96
Launch / Award Event #159 Book Launch Netmundial+10 Statement in the 6 UN Languages — The tone was consistently celebratory, appreciative, and forward-looking throughout the session. Participants expressed …
S97
Abstract — The use of artificial intelligence (AI) presents healthcare workers with a whole set of opportunities which motivate a r…
S98
AI chatbot shows promise in mental health assistance — Dartmouth College researchershave trialledan AI chatbot, Therabot, designed to assist with mental health care. In a grou…
S99
AI for Bharat’s Health_ Addressing a Billion Clinical Realities — But I think today it’s affecting our tasks. It’s affecting tasks of efficiency. You know, we’ve already started doing pr…
S100
Acknowledgements — The team is grateful to the many ITU colleagues and interns that provided support to this report.
S101
The Dawn of Artificial General Intelligence? / DAVOS 2025 — Jonathan Ross highlighted the significance of open source models like DeepSeek, predicting that they would be consequent…
S102
Opening of the session — Expressed appreciation for the work done by Madam Chair and her team
S103
Presentation of outcomes to the plenary — The speaker notes that while governmental corruption frequently captures the collective gaze, private sector corruption …
S104
Any other business /Adoption of the report/ Closure of the session — Expressed thanks to Madam Chair In conclusion, the delegate reiterated his gratitude, acknowledging the extensive labou…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Adhiraj Chauhan
2 arguments · 193 words per minute · 285 words · 88 seconds
Argument 1
Mental‑health AI platform for underserved patients (Adhiraj Chauhan)
EXPLANATION
Adhiraj describes his startup as an AI‑driven mental‑health support platform that addresses the shortage of psychiatrists in India by offering therapy techniques for over 100 disorders. The service is currently provided to psychiatric clinics and is transitioning to a direct‑to‑consumer model.
EVIDENCE
He explains that the mental-health crisis is severe, with only one psychiatrist for 100,000 people, and that his platform uses AI to deliver therapy techniques for more than 100 disorders, serving clients such as Dr. Mora Psychiatric Clinic and engaging with the Delhi Psychiatrist Association, having reached about 20 clients and now moving to a B2C model [14-22].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
External sources note the psychiatrist-to-population ratio of 1:100,000 and describe an AI-driven mental-health support platform covering over 100 disorders, matching the claim [S1][S10].
MAJOR DISCUSSION POINT
Youth‑led AI solution for mental‑health access
AGREED WITH
Tarunima Prabhakar, Shreenidhi Baliga, Jaiwardhan Tyagi, Deepak Bagla, Sarah Kemp
DISAGREED WITH
Jaiwardhan Tyagi
Argument 2
Acknowledgement of Atal Innovation Mission, Intel and school support that enabled the MVP (Adhiraj Chauhan)
EXPLANATION
Adhiraj thanks the Atal Innovation Mission, Intel, his school, and the Ministry of Electronics and IT for providing mentorship, resources, and funding that allowed him to develop his first minimum viable product. He attributes his progress to these ecosystem partners.
EVIDENCE
He expresses gratitude to the Atal Innovation Mission’s Tinkering Lab where he built his MVP, to Intel for mentorship, to his school for ongoing support, and to the Ministry of Electronics and IT for funding [9-13][23-24].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The Atal Innovation Mission’s 2.0 programme and its funding are documented, confirming ecosystem support for MVP development [S15][S1].
MAJOR DISCUSSION POINT
Recognition of ecosystem support
Shreenidhi Baliga
2 arguments · 122 words per minute · 223 words · 109 seconds
Argument 1
Sign‑language‑to‑speech/Braille glove for the deaf‑blind (Shreenidhi Baliga)
EXPLANATION
Shreenidhi presents a glove that translates sign language into speech and Braille, aiming to improve communication for the deaf‑blind community. The device was trained on thousands of images using deep‑learning techniques.
EVIDENCE
She explains that the project, named after the game Charades, is a glove converting sign language to speech and speech to Braille, developed with machine-learning models trained on thousands of images [30-33].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Discussion of barriers and inclusion strategies for people with disabilities highlights the relevance of assistive AI solutions like a sign-language-to-speech glove [S17].
MAJOR DISCUSSION POINT
Assistive AI technology for accessibility
AGREED WITH
Adhiraj Chauhan, Jaiwardhan Tyagi, Deepak Bagla, Sarah Kemp, Gaurav Dagaonkar
Argument 2
Gratitude for Tinkerpreneur Challenge, mentorship programmes and summit organizers (Shreenidhi Baliga)
EXPLANATION
She thanks the various mentorship and training programs, including the Tinkerpreneur Challenge, Atal Innovation Mission, Intel, and the summit organizers, for enabling her project’s development. She highlights the confidence these supports provided to young innovators.
EVIDENCE
She acknowledges the boot camps from the Tinkerpreneur Challenge, mentorship from Atal Innovation Mission and Intel, and the summit organizers for their role in building confidence and enabling the project [28-34].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The Tinkerpreneur Challenge and Atal Innovation Mission mentorship are referenced as key enablers for youth innovators [S15][S1].
MAJOR DISCUSSION POINT
Appreciation of capacity‑building ecosystem
AGREED WITH
Tarunima Prabhakar, Adhiraj Chauhan, Jaiwardhan Tyagi, Deepak Bagla, Sarah Kemp
Jaiwardhan Tyagi
1 argument · 134 words per minute · 723 words · 322 seconds
Argument 1
Multimodal AI system for radiology and dermatology diagnostics (Jaiwardhan Tyagi)
EXPLANATION
Jaiwardhan outlines a multimodal AI framework that integrates vision‑language models, retrieval‑augmented generation, and other modalities to interpret radiology and dermatology data in real time. He emphasizes the need for systems that can handle distribution shifts and provide comprehensive clinical reasoning rather than single‑task classifiers.
EVIDENCE
He compares early radiology AI to a metal detector and current AI to a full airport security system, noting challenges with distribution shifts and hallucinations. He then describes two pipelines, one for radiology using DINO, CLIP, and retrieval-augmented VLMs, and another for dermatology, highlighting ongoing development and a segmentation model that extracts tissue proportions from 3D MRIs [38-45][51-55].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
AI-assisted diagnostics in radiology and dermatology are reported in Europe, providing context for the need of multimodal systems [S19]; internal description of radiology pipeline also appears in the source [S1].
MAJOR DISCUSSION POINT
Advanced multimodal AI for medical imaging
AGREED WITH
Tarunima Prabhakar, Adhiraj Chauhan, Shreenidhi Baliga, Deepak Bagla, Sarah Kemp
DISAGREED WITH
Adhiraj Chauhan
Deepak Bagla
1 argument · 137 words per minute · 722 words · 314 seconds
Argument 1
Mission director stresses AI as India’s “delta multiplier” and the need for future talent (Deepak Bagla)
EXPLANATION
Deepak emphasizes that AI will be the key driver (“delta multiplier”) for India’s socioeconomic transformation, projecting a population increase to 1.6 billion by 2060 and highlighting the country’s ability to work in unstructured environments. He calls for nurturing future technologists to harness this potential.
EVIDENCE
He discusses the future of work, mental-health challenges, AI’s rapid impact, India’s growing population, and the nation’s strengths in adapting without a playbook, concluding that AI will empower 1.6 billion people and make India a major global economy [65-78].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Mission Director Deepak Bagla’s remarks on AI being a ‘delta multiplier’ for India’s socioeconomic growth are recorded [S1][S12].
MAJOR DISCUSSION POINT
AI as a catalyst for India’s development
AGREED WITH
Tarunima Prabhakar, Adhiraj Chauhan, Shreenidhi Baliga, Jaiwardhan Tyagi, Sarah Kemp
Sarah Kemp
1 argument · 137 words per minute · 402 words · 175 seconds
Argument 1
Intel VP highlights partnership, responsibility of future technologists and India’s people‑centric AI vision (Sarah Kemp)
EXPLANATION
Sarah thanks the participants, stresses Intel’s partnership with the summit, and calls on the next generation of technologists to use AI responsibly for societal good. She underscores India’s people‑centric approach to AI and the importance of ethical stewardship.
EVIDENCE
She thanks the audience, praises the future technologists, mentions Intel’s role as a partner, highlights India’s superpower being its people, and calls for responsible use of AI to build a better society, referencing the summit’s historic status in the Global South [112-130].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Intel’s partnership and emphasis on responsible, people-centric AI are noted in the summit context [S20][S1].
MAJOR DISCUSSION POINT
Partnership and ethical responsibility in AI
AGREED WITH
Tarunima Prabhakar, Adhiraj Chauhan, Shreenidhi Baliga, Jaiwardhan Tyagi, Deepak Bagla
Ojaswi Babbar
1 argument · 175 words per minute · 579 words · 197 seconds
Argument 1
Rapid validation, controlled pilots, revenue‑model optimisation and capital access as core evaluation criteria (Ojaswi Babbar)
EXPLANATION
Ojaswi outlines a framework for evaluating AI startups that includes rapid validation through stress‑testing, controlled pilot programs with corporate partners, ensuring robust revenue models, and facilitating access to capital and strategic investors for scaling.
EVIDENCE
He describes rapid validation, stress-testing feasibility with corporate pilots, a philosophy of ‘fail fast, fail forward’, revenue-model optimisation, inference-cost reduction, and linking startups with investors such as Atal Innovation Mission and Intel for scaling [155-172].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Guidelines stressing rapid validation, pilot programmes and revenue-model optimisation for AI startups are outlined in the source material [S1][S10].
MAJOR DISCUSSION POINT
Evaluation and scaling framework for AI ventures
AGREED WITH
Adhiraj Chauhan, Shreenidhi Baliga, Sarah Kemp, Deepak Bagla
DISAGREED WITH
Jaiwardhan Tyagi
Gaurav Dagaonkar
1 argument · 130 words per minute · 974 words · 448 seconds
Argument 1
Hooper AI’s marketplace uses multimodal AI to tag, match and legally license music for brands and creators (Gaurav Dagaonkar)
EXPLANATION
Gaurav presents Hooper as India’s first native music‑licensing platform that uses multimodal AI to generate tags (mood, genre) and match songs with brand needs, enabling legal licensing and royalty distribution to artists. He cites extensive adoption by influencers, brands, and media personalities.
EVIDENCE
He explains that Hooper processes raw audio with multimodal AI to create tags, uses LLMs to fingerprint brands, matches songs to brand contexts, and has onboarded over 300,000 influencers and 220 brands, with examples such as Yash Raj Films, Universal Music, and public figures like Ranveer Brar and the Chief Minister of Maharashtra using the platform [219-240].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
AI-generated music faces legal challenges, underscoring the importance of a licensed marketplace like Hooper; the platform description is also present in the source [S22][S1].
MAJOR DISCUSSION POINT
AI‑enabled music licensing ecosystem
AGREED WITH
Adhiraj Chauhan, Shreenidhi Baliga, Jaiwardhan Tyagi, Deepak Bagla, Sarah Kemp
Tarunima Prabhakar
1 argument · 79 words per minute · 669 words · 504 seconds
Argument 1
Host coordinates ceremony, calls for felicitation and unveiling of the Tinkerpreneur compendium (Tarunima Prabhakar)
EXPLANATION
Tarunima manages the event flow, inviting dignitaries to unveil the Tinkerpreneur compendium, calling the young innovators to the stage, and cueing applause and a video presentation to celebrate the summit’s milestones.
EVIDENCE
She announces the unveiling of the compendium, requests the dignitaries to unveil it, calls the three champions and Hufeza Salim to the stage, and cues a video and applause for the 10-year Atal Innovation Mission celebration [136-144].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Tarunima’s role as event host and the unveiling of the Tinkerpreneur compendium are mentioned in the source material [S1].
MAJOR DISCUSSION POINT
Event orchestration and recognition of innovators
AGREED WITH
Adhiraj Chauhan, Shreenidhi Baliga, Jaiwardhan Tyagi, Deepak Bagla, Sarah Kemp
Shubham Tribedi
1 argument · 69 words per minute · 323 words · 278 seconds
Argument 1
Facilitates certificate distribution and group photograph for awardees (Shubham Tribedi)
EXPLANATION
Shubham directs students and mentors from various schools to come forward for a group photograph and to receive their certificates, ensuring a smooth conclusion to the award ceremony.
EVIDENCE
He calls out multiple schools (DAV Centenary, Infant Jesus, Vidyashil, Radiant, Lakeford, KVIISC, Silver Oaks, JSS Matriculation, among others), asking them to come forward for photographs and certificate collection and managing the final logistics of the ceremony [255-280].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Shubham’s coordination of certificate distribution and group photo is documented in the source [S1].
MAJOR DISCUSSION POINT
Logistical coordination of award distribution
Agreements
Agreement Points
All speakers stress the critical role of ecosystem partners (Atal Innovation Mission, Intel, mentorship programmes, government) in enabling youth‑led AI innovation.
Speakers: Adhiraj Chauhan, Shreenidhi Baliga, Sarah Kemp, Deepak Bagla, Ojaswi Babbar
Mental‑health AI platform for underserved patients (Adhiraj Chauhan); Gratitude for Tinkerpreneur Challenge, mentorship programmes and summit organizers (Shreenidhi Baliga); Intel VP highlights partnership, responsibility of future technologists and India’s people‑centric AI vision (Sarah Kemp); Mission director stresses AI as India’s “delta multiplier” and the need for future talent (Deepak Bagla); Rapid validation, controlled pilots, revenue‑model optimisation and capital access as core evaluation criteria (Ojaswi Babbar)
Adhiraj thanks the Atal Innovation Mission’s Tinkering Lab, Intel and his school for enabling his MVP [9-13][23-24]; Shreenidhi acknowledges the Tinkerpreneur Challenge, Atal Innovation Mission and Intel mentorship [33-34]; Sarah thanks Intel and the Indian government for partnership and support [112-121]; Deepak, as mission director, repeatedly references the Atal Innovation Mission and its partners while praising their role in nurturing talent [65-71]; Ojaswi notes that Atal Innovation Mission and Intel are key strategic investors for scaling startups [171-172].
POLICY CONTEXT (KNOWLEDGE BASE)
This consensus mirrors the AI for Good Impact Initiative, which highlights ecosystem support for youth entrepreneurship and aligns with multi-stakeholder governance emphasized at IGF 2023 and the call for inclusive AI policy frameworks [S34][S35][S37][S44][S56].
AI is presented as a tool to address major societal challenges – health, accessibility, education, and economic growth.
Speakers: Adhiraj Chauhan, Shreenidhi Baliga, Jaiwardhan Tyagi, Deepak Bagla, Sarah Kemp, Gaurav Dagaonkar
Mental‑health AI platform for underserved patients (Adhiraj Chauhan); Sign‑language‑to‑speech/Braille glove for the deaf‑blind (Shreenidhi Baliga); Multimodal AI system for radiology and dermatology diagnostics (Jaiwardhan Tyagi); Mission director stresses AI as India’s “delta multiplier” and the need for future talent (Deepak Bagla); Intel VP highlights partnership, responsibility of future technologists and India’s people‑centric AI vision (Sarah Kemp); Hooper AI’s marketplace uses multimodal AI to tag, match and legally license music for brands and creators (Gaurav Dagaonkar)
Adhiraj describes an AI-driven mental-health platform to bridge psychiatrist shortages [14-22]; Shreenidhi presents a glove converting sign language to speech and Braille for the deaf-blind [30-33]; Jaiwardhan outlines multimodal AI pipelines for radiology and dermatology to improve diagnostics [38-45][51-55]; Deepak frames AI as the “delta multiplier” that will empower 1.6 billion Indians and drive socioeconomic transformation [65-78]; Sarah emphasizes a people-centric AI vision for societal good [118-124]; Gaurav explains Hooper’s AI-powered music-licensing marketplace that legally connects creators and brands [219-240].
POLICY CONTEXT (KNOWLEDGE BASE)
The framing follows the Sustainable Development Goals-aligned AI for Good agenda and recent policy analyses that position AI as a lever for health, education and inclusive growth, as discussed in the AI for Good Impact Initiative and the Prosperity Through Data Infrastructure report [S34][S36][S40][S41][S45][S50].
All speakers highlight the importance of nurturing young innovators and future technologists as drivers of AI progress.
Speakers: Tarunima Prabhakar, Adhiraj Chauhan, Shreenidhi Baliga, Jaiwardhan Tyagi, Deepak Bagla, Sarah Kemp
Host coordinates ceremony, calls for felicitation and unveiling of the Tinkerpreneur compendium (Tarunima Prabhakar); Mental‑health AI platform for underserved patients (Adhiraj Chauhan); Gratitude for Tinkerpreneur Challenge, mentorship programmes and summit organizers (Shreenidhi Baliga); Multimodal AI system for radiology and dermatology diagnostics (Jaiwardhan Tyagi); Mission director stresses AI as India’s “delta multiplier” and the need for future talent (Deepak Bagla); Intel VP highlights partnership, responsibility of future technologists and India’s people‑centric AI vision (Sarah Kemp)
Tarunima introduces three young innovation champions and repeatedly calls them to the stage [1-4][26][36-38]; Adhiraj identifies himself as an 11th-grade student founder [5-7]; Shreenidhi introduces herself as a student from Bangalore [27-28]; Jaiwardhan describes himself as an engineer, student and reader [37]; Deepak calls for nurturing future technologists to harness AI’s potential [65-78]; Sarah directly addresses “future technologists” and urges responsible use of AI [112-124].
POLICY CONTEXT (KNOWLEDGE BASE)
Youth-centric AI policies such as the AI for Good Impact Awards and the IGF Youth-Driven Tech session stress capacity-building, digital literacy and access to financing, reinforcing this agreement [S34][S35][S37][S45].
Similar Viewpoints
Both stress that AI solutions must be rigorously tested and validated before deployment – Jaiwardhan warns about distribution‑shift failures and hallucinations in medical AI [42-45], while Ojaswi outlines a rapid validation and stress‑testing framework for AI startups [155-162].
Speakers: Jaiwardhan Tyagi, Ojaswi Babbar
Multimodal AI system for radiology and dermatology diagnostics (Jaiwardhan Tyagi); Rapid validation, controlled pilots, revenue‑model optimisation and capital access as core evaluation criteria (Ojaswi Babbar)
Both emphasize responsible, ethical deployment of AI‑driven services – Gaurav highlights legal licensing and ethical royalty distribution for music [219-240], while Sarah calls for responsible AI use and stewardship by future technologists [118-124].
Speakers: Gaurav Dagaonkar, Sarah Kemp
Hooper AI’s marketplace uses multimodal AI to tag, match and legally license music for brands and creators (Gaurav Dagaonkar); Intel VP highlights partnership, responsibility of future technologists and India’s people‑centric AI vision (Sarah Kemp)
Unexpected Consensus
AI as a catalyst for large‑scale economic growth and global leadership
Speakers: Deepak Bagla, Gaurav Dagaonkar
Mission director stresses AI as India’s “delta multiplier” and the need for future talent (Deepak Bagla); Hooper AI’s marketplace uses multimodal AI to tag, match and legally license music for brands and creators (Gaurav Dagaonkar)
While Deepak discusses AI at a macro, national level as the “delta multiplier” that will empower 1.6 billion people and transform India’s economy [65-78], Gaurav focuses on a niche music-licensing marketplace but frames his AI platform as a key driver of India’s digital economy and cultural export, linking AI to economic growth and global competitiveness [219-240]. Their convergence on AI as a strategic economic lever, despite operating in vastly different sectors, is unexpected.
POLICY CONTEXT (KNOWLEDGE BASE)
Economic forecasts from the China AI Plus Economy Initiative and analyses of AI’s contribution to global GDP underscore AI’s role as a growth engine, echoing calls for strategic investment in AI to secure leadership [S36][S40][S42][S43][S55].
Overall Assessment

The speakers show strong consensus on three fronts: (1) the necessity of a supportive ecosystem (Atal Innovation Mission, Intel, mentorship) for youth‑led AI projects; (2) AI’s potential to address critical societal challenges such as health, accessibility, and economic inclusion; and (3) the pivotal role of young innovators and future technologists in driving this transformation. These shared positions reinforce the importance of policies that strengthen innovation ecosystems, invest in capacity building for youth, and promote responsible AI deployment.

High consensus – the alignment across founders, mission leadership, corporate partners, and the host underscores a unified vision that AI, when backed by robust ecosystem support and guided by ethical responsibility, can be a major engine for social and economic development in India.

Differences
Different Viewpoints
Breadth of AI solutions versus focused domain-specific applications
Speakers: Jaiwardhan Tyagi, Adhiraj Chauhan
Multimodal AI system for radiology and dermatology diagnostics (Jaiwardhan Tyagi); Mental‑health AI platform for underserved patients (Adhiraj Chauhan)
Jaiwardhan argues that aiming for a single model that can understand all aspects of human health is an “obsession with scaling” and stresses the need for multimodal, reasoning-based systems that handle distribution shifts [38-46]. In contrast, Adhiraj describes a focused AI-driven mental-health support platform targeting over 100 disorders, emphasizing a domain-specific solution rather than a universal model [14-18]. The two speakers therefore disagree on whether AI impact should be pursued through broad, multimodal systems or through narrow, problem-specific platforms.
Approach to bringing AI innovations to market – rapid validation and pilots versus direct product development
Speakers: Ojaswi Babbar, Jaiwardhan Tyagi
Rapid validation, controlled pilots, revenue‑model optimisation and capital access as core evaluation criteria (Ojaswi Babbar); Multimodal AI system for radiology and dermatology diagnostics (Jaiwardhan Tyagi)
Ojaswi outlines a framework that prioritises rapid validation, stress-testing feasibility with corporate pilots, and revenue-model optimisation before scaling [155-162]. Jaiwardhan, however, focuses on building complex multimodal pipelines and mentions a demo that could not be shown due to time constraints, without referencing a structured validation or pilot process [51-58][60-62]. This reflects a disagreement on the sequence and methodology for moving AI solutions from prototype to deployment.
POLICY CONTEXT (KNOWLEDGE BASE)
The tension between pilot validation and productisation is highlighted by industry leaders who stress regulatory approval and stress-testing with corporate partners before scaling, as described in Apollo Hospitals’ keynote and AI Innovation in India’s investment framework [S38][S39][S46][S59].
Perceived reliability of AI outputs – concern over hallucinations versus confidence in AI‑driven services
Speakers: Jaiwardhan Tyagi, Adhiraj Chauhan
Multimodal AI system for radiology and dermatology diagnostics (Jaiwardhan Tyagi); Mental‑health AI platform for underserved patients (Adhiraj Chauhan)
Jaiwardhan highlights that current vision-language models “hallucinate a lot” and perform poorly under distribution shift, stressing the need for robust reasoning systems [44-45]. Adhiraj, on the other hand, presents his mental-health platform as a ready-to-use solution serving about 20 clients without mentioning such reliability concerns, implying confidence in its AI performance [16-22]. The differing views reveal a disagreement on the trustworthiness of AI applications in critical domains.
POLICY CONTEXT (KNOWLEDGE BASE)
Concerns about hallucinations are documented in nonprofit AI deployments and AI safety guidelines that call for source verification and uncertainty signalling, underscoring the reliability debate [S46][S47][S48][S49].
Unexpected Differences
Legal‑licensing focus versus health‑centric AI priorities
Speakers: Gaurav Dagaonkar, Adhiraj Chauhan, Jaiwardhan Tyagi
Hooper AI’s marketplace uses multimodal AI to tag, match and legally license music (Gaurav Dagaonkar); Mental‑health AI platform for underserved patients (Adhiraj Chauhan); Multimodal AI system for radiology and dermatology diagnostics (Jaiwardhan Tyagi)
While most speakers discuss AI applications aimed at health, education, or capacity building, Gaurav introduces a music‑licensing platform that tackles intellectual‑property and commercial licensing issues—an area not addressed by the other speakers. This divergence in sector focus was not anticipated given the health‑and‑education‑centric context of the summit.
POLICY CONTEXT (KNOWLEDGE BASE)
Policy discussions on rights-based health data governance and risk-based AI regulation illustrate the trade-off between legal-licensing frameworks and health-focused AI deployment strategies [S50][S52][S58].
Overall Assessment

The discussion shows broad consensus on the transformative potential of AI for India’s development, but reveals substantive disagreements on the scope of AI solutions (broad multimodal systems vs. narrow domain‑specific tools), the pathway to market (rapid validation and pilots vs. direct product rollout), and the reliability of AI outputs (concern over hallucinations vs. confidence in deployed services). An unexpected sectoral clash appears with the introduction of a music‑licensing AI platform.

Moderate to high disagreement on strategic approaches, which could affect coordination among stakeholders. While shared goals may foster collaboration, divergent views on scaling, validation, and sector focus suggest the need for clearer frameworks to align efforts and manage expectations.

Partial Agreements
All speakers share the overarching goal of leveraging AI to drive social and economic development in India, but they diverge on the sectors and pathways to achieve this—mental‑health services, accessibility for the deaf‑blind, medical imaging, music licensing, ethical partnership, and talent development are each presented as distinct priority areas. The consensus on AI’s importance is clear, yet the strategies differ markedly.
Speakers: Adhiraj Chauhan, Shreenidhi Baliga, Jaiwardhan Tyagi, Gaurav Dagaonkar, Sarah Kemp, Deepak Bagla
Mental‑health AI platform for underserved patients (Adhiraj Chauhan); Sign‑language‑to‑speech/Braille glove for the deaf‑blind (Shreenidhi Baliga); Multimodal AI system for radiology and dermatology diagnostics (Jaiwardhan Tyagi); Hooper AI’s marketplace uses multimodal AI to tag, match and legally license music (Gaurav Dagaonkar); Intel VP highlights partnership, responsibility of future technologists and India’s people‑centric AI vision (Sarah Kemp); Mission director stresses AI as India’s “delta multiplier” and the need for future talent (Deepak Bagla)
Both recognize the need to support innovators, but Ojaswi emphasizes systematic evaluation and scaling mechanisms, whereas Tarunima focuses on ceremonial recognition and showcasing achievements. They agree on the importance of nurturing innovators but differ on the means—structured validation versus public celebration.
Speakers: Ojaswi Babbar, Tarunima Prabhakar
Rapid validation, controlled pilots, revenue‑model optimisation and capital access as core evaluation criteria (Ojaswi Babbar); Host coordinates ceremony, calls for felicitation and unveiling of the Tinkerpreneur compendium (Tarunima Prabhakar)
Takeaways
Key takeaways
Youth‑led AI projects are tackling critical societal problems: mental‑health support (Delta AI Revolution), sign‑language to speech/Braille for the deaf‑blind (Charades glove), and multimodal diagnostic AI for radiology and dermatology (Neuropex).
A robust support ecosystem (Atal Innovation Mission, Intel, schools, and mentorship programmes) has enabled these students to develop MVPs, secure funding, and gain market traction.
The mission director emphasized AI as India’s “delta multiplier” for economic growth and highlighted the need for a new generation of technologists to reskill the workforce.
Intel’s VP reinforced the partnership model and stressed the responsibility of future technologists to develop people‑centric, ethical AI.
A structured evaluation and scaling framework for AI startups was presented, focusing on rapid validation, controlled pilots, revenue‑model optimisation, and access to capital.
Hooper AI showcased an AI‑driven music‑licensing marketplace that legally connects creators with brands, illustrating AI’s role in the creative economy.
The summit celebrated and recognized the top 50 AI tinkerpreneurs, unveiling the Tinkerpreneur compendium and distributing certificates.
Resolutions and action items
Continue mentorship, funding, and technical support for the highlighted student ventures through Atal Innovation Mission and Intel.
Apply the presented evaluation framework (rapid validation, pilot testing, revenue model checks, capital linkage) to future AI startup selections.
Unveil and distribute the Tinkerpreneur compendium to all participants.
Facilitate further B2C rollout for Delta AI Revolution’s mental‑health platform and expand partnerships with psychiatric clinics.
Scale the Charades glove project by integrating more sign‑language datasets and pursuing commercial partnerships for deaf‑blind assistance.
Advance Neuropex’s multimodal AI pipelines toward real‑time clinical reporting and broader dermatology use‑cases.
Promote Hooper AI’s licensing platform to additional brands and creators, and encourage development of derivative works using the licensed content.
Unresolved issues
How to reliably handle distribution‑shift challenges in AI diagnostic models, as highlighted by Jaiwardhan Tyagi.
Ensuring sustainable revenue streams and long‑term scalability for the student‑led mental‑health and sign‑language platforms.
Widespread awareness and compliance with music‑licensing requirements among creators and startups.
Establishing clear metrics for measuring the social impact of the AI solutions presented.
Further clarification on the integration of AI ethics guidelines within the mentorship and funding processes.
Suggested compromises
Adopt Ojaswi Babbar’s “fail fast but fail forward” approach: allow rapid experimentation while ensuring structured learning and iteration before large‑scale deployment.
Balance rapid validation with controlled pilot programmes to mitigate risk while still accelerating time‑to‑market.
Combine AI scaling ambitions with India’s strength in unstructured problem‑solving, leveraging limited resources efficiently.
Thought Provoking Comments
The problem isn’t the architecture itself but it’s the thinking that a single model can understand every dynamic of human health – we need a system that reasons across modalities, references previous conclusions, and produces understandable reports rather than just a classifier.
Highlights a fundamental limitation in current AI approaches for healthcare, challenging the prevailing focus on ever larger single models and introducing the concept of multimodal reasoning frameworks.
Shifted the discussion from showcasing individual projects to a deeper technical debate about AI architecture, prompting listeners to consider robustness (distribution shift) and leading to later mentions of Neuropex’s dual pipelines and the need for integrated solutions.
Speaker: Jaiwardhan Tyagi
Mental health is the biggest challenge ahead; AI will be the delta multiplier for India, empowering 1.6 billion people by 2060 and requiring a generation that can re‑skill and adapt over the next ten years.
Frames AI not just as a tool but as a societal catalyst, linking demographic projections with economic growth and emphasizing the urgency of reskilling the youth.
Provided a macro‑level turning point, moving the conversation from individual innovations to national strategy, and set the stage for subsequent remarks about responsibility (Sarah Kemp) and evaluation frameworks (Ojaswi Babbar).
Speaker: Deepak Bagla
With great talent comes great responsibility… you have the ability to make the society you want, using AI for good.
Reinforces the ethical dimension of AI development, reminding innovators that technical success must be paired with societal impact, echoing earlier concerns about mental health and distribution shift.
Re‑energized the audience with a motivational tone, bridging technical discussions with a call for responsible innovation, and prepared listeners for the upcoming evaluation framework presented by Ojaswi Babbar.
Speaker: Sarah Kemp
Our evaluation framework focuses on rapid validation, fail‑fast‑forward, corporate‑partner pilots, solid revenue models, and strategic capital – these are the pillars that turn AI ideas into scalable Indian successes.
Introduces a concrete, systematic approach to moving from prototype to market, challenging the notion that any AI idea is automatically viable and emphasizing disciplined scaling.
Provided a practical roadmap that linked earlier visionary statements (Deepak, Sarah) to actionable steps, influencing how later speakers (e.g., Gaurav Dagaonkar) positioned their business models.
Speaker: Ojaswi Babbar
India has no native platform for music licensing; Hooper uses multimodal AI to tag mood, match songs to brands, and ensure creators get paid – solving an opaque, ethically fraught space.
Brings attention to a non‑technical yet critical domain (intellectual property) and demonstrates how AI can create transparent, ethical marketplaces, expanding the discussion beyond health and education.
Introduced a new industry focus (music) and highlighted AI’s role in ethical compliance, reinforcing the earlier theme of responsible AI and showing a tangible application of the evaluation criteria discussed by Ojaswi.
Speaker: Gaurav Dagaonkar
Our glove converts sign language to speech and speech to Braille, helping the deaf‑blind community by leveraging deep‑learning models trained on thousands of images.
Shows a direct, inclusive application of AI for accessibility, emphasizing intent over age and illustrating how technology can bridge communication gaps for marginalized groups.
Added a human‑centered example early in the session, setting a tone of social impact that resonated with later comments on mental health (Adhiraj) and responsible AI (Sarah).
Speaker: Shreenidhi Baliga
We realized that among Indian youth mental health is an epidemic; with only one psychiatrist per 100,000 people, our AI‑driven platform offers therapy techniques for over 100 disorders, moving from B2B to B2C.
Identifies a stark systemic gap and proposes a scalable AI solution, highlighting both the scale of the problem and a strategic shift in business model to reach end‑users directly.
Reinforced the theme of AI addressing critical societal shortages, complementing Deepak’s macro view and prompting audience recognition of AI’s potential in public health.
Speaker: Adhiraj Chauhan
Overall Assessment

The discussion evolved from showcasing individual student projects to a layered conversation about AI’s role in society. Early personal innovations (Adhiraj, Shreenidhi) established a human‑impact baseline, which was then expanded by Jaiwardhan’s technical critique of current AI models, prompting a shift toward systemic thinking. Deepak’s macro‑level framing of AI as India’s future economic multiplier set a strategic context, which Sarah Kemp reinforced with an ethical call to responsibility. Ojaswi Babbar provided a concrete evaluation framework that linked visionary ideas to practical scaling, and Gaurav Dagaonkar illustrated this by applying AI to an ethically complex domain—music licensing. Collectively, these pivotal comments redirected the dialogue from isolated achievements to a cohesive narrative about responsible, scalable, and socially meaningful AI innovation in India.

Follow-up Questions
How will AI-driven radiology and medical vision-language models perform under distribution shift?
Understanding model robustness to real‑world data variations is crucial for safe clinical deployment.
Speaker: Jaiwardhan Tyagi
What frameworks can enable multimodal reasoning across modalities to improve clinical reporting?
A system that integrates video, audio, and textual cues could reduce hallucinations and provide more reliable diagnostics.
Speaker: Jaiwardhan Tyagi
How can we systematically evaluate whether AI innovations are worth backing versus being hype?
A clear evaluation framework is needed to allocate resources efficiently and avoid investing in non‑viable projects.
Speaker: Ojaswi Babbar
What are effective methods for rapid validation, stress‑testing, and revenue‑model optimization for AI startups?
Fast, forward‑failing validation and solid business models are essential for scaling AI solutions in the Indian ecosystem.
Speaker: Ojaswi Babbar
What licensing requirements exist for using existing music (e.g., Bollywood songs) in commercial projects, and how can startups ensure compliance?
Clarifying legal obligations prevents infringement and ensures creators receive due royalties.
Speaker: Gaurav Dagaonkar
How can AI be used to create legally compliant remixes or derivative works while ensuring artists are compensated?
Developing tools that respect copyright while enabling creative reuse would open new opportunities for innovators.
Speaker: Gaurav Dagaonkar

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.