AI for Safer Workplaces & Smarter Industries: Transforming Risk into Real-Time Intelligence

Session at a glance
Summary, key points, and speakers overview

Summary

The session opened with Naveen GV explaining that Benchmark Gen Street is moving its 30-year-old EHS SaaS platform to an “AI-first” architecture to improve safety and predictive intelligence [1-6][9]. He introduced “Jenny AI”, an agent that can analyse a photographed hazard, auto-populate the observation form and ask follow-up questions when context is missing, eliminating manual data entry [25-36][40-43]. The same agent can accept spoken descriptions in Hindi, transcribe them and structure the information for validation, demonstrating multilingual support for non-technical users [46-58][59]. For incident investigation, the platform offers a “5Y AI” that iteratively asks why-questions to uncover root causes and then suggests corrective actions using a hierarchy-of-controls model [61-70][80-90]. A separate RISC-AI engine aggregates records from observations and incidents to surface patterns, risk heat-maps and predictive insights across the organization [124-133]. After the demo, Naveen highlighted that the next year will focus on autonomous agents that perform the heavy-lifting previously done by humans, aiming for a fully agentic platform [135-139].


The subsequent panel shifted to a broader perspective, arguing that creativity, cognition and culture remain uniquely human strengths that AI cannot originate, and that fear of AI should be countered by emphasizing originality [144-162]. Participants noted that rapid AI advances are shrinking the shelf-life of hard skills, making “applied intelligence” and the ability to create solutions more important than mere coding knowledge [263-274]. They also stressed that AI can democratise learning by providing low-cost access to knowledge in rural and underserved areas, but effective use requires motivation, reliable data and ethical guidance [384-386][411-418]. Ashish Gupta warned that while AI tools are powerful, education systems must teach responsible and ethical usage, and that hands-on creation, not just consumption, builds confidence in learners [301-311][322-327]. The panel agreed that human-centered skills such as imagination, design thinking and cultural awareness will differentiate people from machines and should be nurtured through inclusive curricula [185-200][263-274].


The session concluded with a product demonstration of ENCODE, an AI-driven learning platform that maps individual growth, offers mentorship and adaptive courses to foster creativity and cognition [446-454]. Overall, the discussion underscored that while AI can automate data capture and analysis in safety and education, its greatest impact will be as a digital co-worker that amplifies human creativity, cognition and cultural insight rather than replacing them [92][140-143].


Key points


Major discussion points


AI-first transformation of Benchmark Gen Street’s safety platform – The company is converting its 30-year-old SaaS EHS system into an “AI-first” solution, already having 75 use-cases and moving toward “agentifying” them for autonomous action [5-10]. The new Observation Reporting feature lets workers scan a QR code or upload a photo, which the “Jenny AI” agent analyses, extracts the hazard details and auto-fills the reporting form [22-33]. Similar agents support voice-based reporting in local languages [45-55], 5-Why root-cause analysis [61-70], ergonomics risk detection from video [100-107], regulatory compliance parsing [113-122], and enterprise-wide trend detection via RISC-AI, which aggregates all records to surface precursors and heat-maps of risk [124-133].


AI as a digital co-worker that augments, not replaces, human expertise – The AI agents handle routine data capture (e.g., filling forms from images [36-44] or Hindi speech [58-66]), but they still request clarification when context is missing [40-42] and hand over the structured data for human validation [35-38]. In incident investigations the AI guides users through the 5Y analysis, generating possible causes and corrective actions, while the final decisions remain with supervisors [61-70][92-93]. RISC-AI further provides predictive insights, helping safety teams prioritize interventions [124-133].


Human creativity, cognition and culture as the differentiators in an AI-driven world – Several speakers argue that AI can automate tasks but cannot replace lived experience, intuition and design thinking. The “bumblebee” analogy stresses that creativity remains the uniquely human advantage [148-162]. Panelists highlight that creativity, cognition and culture are the pillars of human capital and will continue to distinguish humans from machines [185-199]. Concerns about job displacement and fear of AI are noted, with the view that quality data and human originality are essential for trustworthy AI outcomes [206-214].


Education, democratization of AI skills and the need for ethical, inclusive learning ecosystems – Participants stress that the shelf-life of hard skills is shrinking, urging a shift from “learning” to “making” and applying knowledge [263-274]. Government-run digital-skilling portals, new education policies, and AI-enabled tools are cited as ways to upskill the massive Indian population, especially in rural areas [301-330][384-389]. The ENCODE platform exemplifies a next-gen, AI-powered learning network that maps individual interests, provides mentorship, and fosters creativity, aiming to make education accessible and future-ready [440-452].


Overall purpose / goal of the discussion


The session aimed to showcase how an AI-first approach can revolutionize workplace safety (through autonomous agents, predictive risk analytics, and integrated compliance) while simultaneously exploring the broader societal impact of AI on jobs, skills, and education. The speakers sought to convince the audience that AI should be positioned as a collaborative partner that amplifies human creativity, cognition and culture, and they announced partnerships and product demos that will extend these capabilities into the education sector.


Overall tone and its evolution


Opening (0:00-20:00): Highly technical and promotional, emphasizing product capabilities, efficiency gains, and the vision of autonomous AI agents.


Mid-section (20:00-45:00): Shifts to a reflective and cautionary tone, with speakers expressing concerns about AI-driven job loss, the need to preserve human originality, and the fear many feel.


Later segment (45:00-80:00): Becomes optimistic and collaborative, focusing on education, democratization, ethical use, and the promise of AI-enabled creativity.


Closing (80:00-end): Returns to an enthusiastic, celebratory tone, highlighting partnerships, product launches, and a collective commitment to “keep humanity intact” while leveraging AI.


Overall, the conversation moves from a product-centric showcase to a broader philosophical dialogue about humanity’s role in an AI-augmented future, ending on a hopeful note about collaborative innovation.


Speakers

Speakers (from the provided list)


Naveen GV – Representative of Benchmark Gen Street, discussing AI-first transformation for environment, health & safety platforms.


Speaker 1 – Product demo presenter who walks through AI use-cases for observation reporting and risk analysis.


Speaker 2 – Keynote speaker on design, creativity and the future of work in the age of AI.


Speaker 3 – Session moderator who introduced Dr. Shweta Chaudhary and the panel.


Shweta Chaudhary – Founder & Director of CodeEDU; host of the session on creativity, cognition & culture.


Piyush Nangru – Founder of a tech school; speaker on “creativity, cognition and culture” and their role in Viksit Bharat. [S1][S18]


Speaker 4 – Panelist discussing AI adoption, fear & opportunities; provides perspectives on government-industry interaction.


Ashish Gupta – Professor at South Asian University; speaker on the “orange economy” and AI in education. [S18]


Speaker 5 – Voice presenting the ENCODE creative learning network platform.


Speaker 6 – Representative of an academic-industry partnership, emphasizing design-oriented coding education.


Audience – Various audience members who asked questions (e.g., Saurav).


Additional speakers (not in the provided list)


Chandan – Colleague of Naveen GV, mentioned as the next presenter but does not have a spoken segment.


Garima – Colleague who was to invite the panelists; appears as a moderator/organiser.


Magma Sree – Introduced herself briefly; role not specified.


Ajay Rivalia – Referred to as “Ajay Rivalia sir,” a partner/guest invited for a group photo.


Viplav – Mentioned as “Viplav sir,” a partner/guest invited for a group photo.


Nandaji – Mentioned as “Nandaji,” a partner/guest invited for a group photo.


Mansi – Referred to as “Mansi,” a partner/guest invited for a group photo.


Vijay – Referred to as “Vijay sir,” a partner/guest invited for a group photo.


Unkar – Referred to as “mentor Unkar sir,” invited to join the session.


Full session report
Comprehensive analysis and detailed insights

AI-first safety platform


The session opened with Naveen GV outlining Benchmark Gen Street’s three-decade history of digitising environment, health, safety and sustainability for roughly 450 global subscribers and eight million daily users. He explained that the company is now re-architecting its long-standing SaaS platform into an “AI-first” solution, having already identified about 75 distinct AI use cases and planning to “agentify” many of them so that autonomous agents can perform the heavy-lifting previously done by humans [3-6][9-10].


Speaker 1 then demonstrated the first of these agents – the observation-reporting tool referred to in the transcript as both “Jenny AI” and “Genie AI.” Workers can scan a QR code or upload a photo of a perceived hazard; the agent analyses the image, extracts relevant safety details and auto-populates the observation form, eliminating the need for manual data entry [22-33][34-36]. When the image lacks context (e.g., the exact working height), the agent prompts the user with follow-up questions to capture missing information before finalising the record [40-44]. The same agent also accepts spoken descriptions in Hindi, transcribes the audio, extracts the hazard information and pre-fills the observation form for the user to review [45-58][59-60].
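The fill-then-ask pattern described above can be made concrete with a short sketch: the agent fills whatever form fields it can extract from the photo and asks follow-up questions only for the rest. This is an illustration, not Benchmark Gen Street's implementation; the field names and the `extract_from_image` stub (standing in for a vision-model call) are invented for the example.

```python
# Hypothetical sketch of the fill-then-ask observation flow.
# extract_from_image stands in for a vision-model call; field names are illustrative.

REQUIRED_FIELDS = ["hazard_type", "location", "people_involved", "working_height_m"]

def extract_from_image(photo_path):
    """Stub for a vision-model call; returns only what it could infer from the photo."""
    return {
        "hazard_type": "work at height without fall protection",
        "people_involved": 2,
        # location and working_height_m are not visible in the photo
    }

def build_observation(photo_path, ask):
    """Fill the form from the image, then ask the user for any missing context."""
    form = {f: None for f in REQUIRED_FIELDS}
    form.update(extract_from_image(photo_path))
    for field, value in form.items():
        if value is None:  # the agent could not infer this field, so it asks
            form[field] = ask(f"Please provide: {field.replace('_', ' ')}")
    return form

# Example: answer follow-ups from a canned dict instead of a real user prompt.
answers = {"Please provide: location": "Warehouse B",
           "Please provide: working height m": "3"}
record = build_observation("site.jpg", lambda q: answers[q])
```

In a real deployment the `ask` callback would drive a chat turn with the reporter; the point is simply that the human supplies only the fields the model could not infer.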


Building on the reporting capability, the platform offers a “5Y AI” module for incident investigation. After a hazard is logged, the AI iteratively asks “why” questions to uncover root causes, presenting possible causal branches that supervisors can select and refine. It then suggests corrective and preventive actions aligned with the hierarchy-of-controls framework (elimination, substitution, engineering, administrative) and produces a draft action plan that the human reviewer can approve [61-70][80-90][91-92].
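The ordering logic behind such suggestions can be illustrated in a few lines. This is a generic sketch of the standard hierarchy-of-controls ranking, not the product's actual algorithm; the candidate actions below are invented for the example.

```python
# Illustrative ranking of corrective actions by the hierarchy of controls.
# The control levels follow the standard EHS hierarchy named in the session;
# the candidate actions themselves are invented.

HIERARCHY = ["elimination", "substitution", "engineering", "administrative"]

def rank_actions(candidates):
    """Sort candidate actions so higher-order (more effective) controls come first."""
    return sorted(candidates, key=lambda c: HIERARCHY.index(c["control"]))

candidates = [
    {"control": "administrative", "action": "Refresh work-at-height training"},
    {"control": "substitution",   "action": "Use a scissor lift instead of climbing"},
    {"control": "engineering",    "action": "Install guardrails on the platform"},
    {"control": "elimination",    "action": "Relocate the task to ground level"},
]

for c in rank_actions(candidates):
    print(f"{c['control']:>14}: {c['action']}")
```

A human reviewer would still approve or discard each suggestion; the ranking only encodes the preference for eliminating a hazard over administering around it.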


Further extensions were showcased: an ergonomics analyser (“Ergo AI”) that processes short video clips of manual handling tasks to flag musculoskeletal risk points that would normally require a certified ergonomist and can summarise its findings into a ready-made report with a single click [100-107]; a regulatory-compliance agent that ingests lengthy legal documents, decomposes them into individual requirements (≈35 clauses) and feeds these into a compliance calendar for operational tracking [113-122]; and the RISC-AI engine that aggregates all observation and incident records, identifies patterns and precursors, and visualises risk heat-maps that combine incident volume with weighted severity scores, thereby delivering predictive intelligence for proactive risk mitigation [124-133].
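The heat-map described here plots record volume against a weighted risk score per category. A minimal sketch of that aggregation follows, assuming one plausible weighting scheme; the transcript only says a mathematical model assigns severity, so the records and weights below are invented.

```python
from collections import defaultdict

# Invented records: (risk_category, severity on a 1-5 scale). The weighting
# scheme is an assumption; the session only mentions a mathematical model.
records = [
    ("slip and trip", 1), ("slip and trip", 2), ("slip and trip", 1),
    ("fall from height", 5), ("fall from height", 4),
]

def heatmap_points(records):
    """Aggregate records into (record count, weighted risk score) per category."""
    counts = defaultdict(int)
    weighted = defaultdict(int)
    for category, severity in records:
        counts[category] += 1
        weighted[category] += severity  # weight each record by its severity
    # x-axis: record count; y-axis: total severity-weighted score
    return {c: (counts[c], weighted[c]) for c in counts}

points = heatmap_points(records)
# "slip and trip" has more records but a lower weighted score than
# "fall from height", mirroring the example given in the demo.
```

Plotting `points` as a scatter chart reproduces the x-axis (volume) versus y-axis (weighted score) view described in the demonstration.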


Across these demonstrations the speakers repeatedly stressed that the AI agents act as digital co-workers: they accelerate routine data capture and analysis but still require human validation, especially when broader context is missing or when final decisions about corrective actions must be made. The need for high-quality input data and continuous human-in-the-loop oversight was highlighted as a prerequisite for reliable outcomes [39-43][209-216][5-9].


Naveen closed the demo by thanking Sundar for his support and reiterating the commitment to deliver a fully agentic platform within the next year, inviting attendees to visit the Benchmark Gen Street booth for personalised AI implementation discussions [135-139].


Humanity, creativity and education in the age of AI


After the product demonstration, the floor opened for a broader discussion on the societal impact of AI.


Speaker 2 used a bumblebee metaphor to argue that creativity, cognition and culture are uniquely human pillars that AI cannot originate, warning that traditional resumes may become obsolete by 2030 as AI takes over routine tasks [148-162][158-163]. Piyush Nangru reinforced this view, describing the three pillars of human capital and asserting that coding is now “table-stakes” while the ability to apply knowledge creatively will differentiate future workers [185-199][190-191]. Shweta Chaudhary echoed the sentiment, insisting that preserving originality and humanness is essential even as AI becomes pervasive [173-176][201-204].


Speaker 4 (government-sector participant) stressed that AI outputs are only as reliable as the data fed into them and that continuous human-in-the-loop oversight is essential for responsible deployment [5-9][209-216].


Ashish Gupta highlighted the need for ethical and responsible AI use in curricula, citing the New Education Policy and government digital-skilling portals as mechanisms for scaling AI literacy across the country [301-330][332-337].


The panel examined implications for education. Participants noted that the shelf-life of hard-skill knowledge is now measured in a few years rather than decades, prompting a shift from pure knowledge acquisition to “applied intelligence” – the capacity to create, apply and solve problems [263-274]. The ENCODE platform was presented as an AI-driven learning network that maps individual interests, offers mentorship, delivers adaptive, creativity-focused learning pathways, and operates under the tagline “Create, connect, collaborate” [440-452][446-454].


Additional partnerships were announced: collaborations with MEC Connect, Nimbus, and the Next Gen Academy, accompanied by a group photo and a brief product demo [384-389].


The discussion also touched on broader societal concerns: bridging the urban-rural divide through AI-enabled services, addressing tax-remittance and employment-matching challenges for diaspora workers, and building confidence through creation-focused education [379-383][258-260].


Unresolved challenges / open questions


The audience asked whether a timeline exists for AI surpassing human intelligence. Shweta Chaudhary repeated the question, and Speaker 3 (the moderator) answered that no predictive model exists, underscoring uncertainty [285-290][261]. Divergent views emerged on the speed of AI’s impact: Piyush Nangru suggested immediate democratisation of learning, while Shweta Chaudhary maintained that human intelligence will remain superior [278-279][173-176].


Other open issues included: (i) a clearer timeline and policy framework for AI’s potential supremacy over human cognition; (ii) scalable strategies to train India’s 1.4 billion citizens-including those without internet access-in responsible AI use; (iii) robust mechanisms to ensure data quality and continuous feedback loops for safety predictions; (iv) concrete curriculum reforms that embed AI, creativity and ethics from primary school onward; and (v) practical solutions for diaspora tax-remittance and employment matching [261][285-290][301-330][384-389][258-260].


Action items


– Deliver a fully agentic safety platform within the next year.


– Continue ethical AI training for educators and leverage government AI-readiness programs.


– Formalise MOUs with ENCODE, MEC Connect, Nimbus and Next Gen Academy to embed AI-enabled learning pathways.


– Maintain the invitation to visit the Benchmark Gen Street booth for personalised discussions.


In summary, the session demonstrated how an AI-first transformation can automate and enrich workplace safety workflows while sparking a wider debate about the future of work, education and human identity. The consensus was that AI should be positioned as an augmenting co-worker that amplifies human creativity, cognition and cultural insight rather than replacing them. Realising this vision will require coordinated investment in digital skills, ethical governance, high-quality data and inclusive infrastructure, ensuring that the benefits of AI are broadly shared and that the uniquely human attributes of imagination and design continue to drive progress [5-9][148-162][263-274][301-330][124-133][209-216].


Session transcript
Complete transcript of the session
Naveen GV

out a long, lengthy form of information for that to be processed much later by another human in the loop, per se, to really looking at how do we get an experiential learning and an experiential engagement where the intent is to keep everybody safe and our workplaces safe as well, and then obviously have AI be an enabler in how we get this into the system as well as processed, and that giving us the right signals for predictive intelligence. So that’s a paradigm shift that we are looking at, obviously, in today’s times, and the evolution has taken us to this stage right now. So we as Benchmark Gen Street have been around the business of digitizing environment health, safety, and transforming workplaces for the last 30 years, and we work across the world with close to 450 global…

subscribers, and about an 8 million user base using the system day in and day out, for various aspects of compliance assurance, environment, health and safety, to sustainability and ESG management, to obviously looking at supplier engagement and quality and security as well. So it’s a time-tested, active product, and the challenge for us over the last three years has been really to transform a SaaS-based system that we have today into making it AI-first. So that’s been our motto over the last three years: how we convert all of our intelligence and learning experiences into giving our customers an AI-first philosophy and methodology of engaging with the platform. I think I covered some of this aspect. And in the pipeline, I think we have.

I think we have already 75 different use cases in AI that we have available. but I think our next gen is about agentifying a lot of that to deliver the right value for engagement. So with that, I think I’ll invite my colleague Chandan to take a shot at helping us walk through the use cases and really talking through the value proposition of how AI is here to change the way we do stuff. So, Chandan.

Speaker 1

Thanks, Naveen. So now we’ll walk you through, I would say, a story of someone who is working at a work site and how they really look at the different risks and hazards at the work site. So let’s say I am just walking into my work site, and this is something that I see the moment I kind of walk into a construction site. I look at this scene, and I know something is wrong here. I am not entirely sure what is wrong here. I am not sure what kind of safety rules they are violating. But I know, I get a sense, that something is wrong here. Traditionally, in the very traditional sense that Naveen spoke about, you know, earlier, what used to happen is that I am supposed to kind of, you know, go find a form, fill it either, you know, manually on paper or, you know, find a portal, look at a form and fill it, and understand the type of risk, the type of hazard that I want to report on.

But now let’s look at the transformative way of looking into these hazards and risks. So I will go online. What you are looking at is one of the programs that we have. We call it observation reporting, and this is something that is used for engaging people in reporting any kind of health and safety concerns that they have at their workplace. What a worker, or anybody for that matter, can really do here is, look at... let me just, I think I am not connected to the internet, so just give me a second to connect it back. But essentially, what I can do here as somebody who is using this kind of platform and AI technology is very easily scan a QR code on my phone or take a direct photo of something that I am seeing, and then look at sharing it with the agent that we have, we call it Jenny AI. The moment we kind of share it with Jenny AI, the agent can really process and look at all the different health and safety related hazards that we have and kind of fill that form on my behalf.

So let us take an example: we will look at the same photo that I was showing you earlier, and let us see what that overall process will look like. So assuming I have already captured the photo here, I will locate that photo, which is on my phone or my computer, and let’s see what really happens here. What you will see here is the Genie AI, which is the AI agent. It will parse through the photo which is uploaded, and that’s what is happening right now. It is analyzing the image input, it is reading through the intent of the input which has been provided, and it has filled the entire form on my behalf.

I did not have to go and tell it or describe the hazard. It says that there were a couple of workers, or two workers, who are working at the site, and they do not seem to have any fall protection equipment. Now, of course, we do understand that the AI is just looking at the photo. It does not have broader context at this point in time, which is where it will also ask you about certain things that it is not sure about. So right now, while it knows the people are working at height, it is not certain of what the height is that they are working at. And it will ask you some follow-up questions that you can kind of really answer.

But this is where it will help you, you know, update most parts of your form, if not everything. Now let’s, you know, assume that I don’t have a photo. I am there at the site and I just want to kind of go and report something that I saw. And I am not very fluent in, let’s say, English or the corporate language that we use. So I want to do it in my own, let’s say, language. So I am going to use an example: I am going to speak or describe what I saw in Hindi, and let’s see how the agent will respond to that. So I am going to speak in Hindi.

[speaks in Hindi] So what I did just now, I spoke in Hindi and I described that I saw, you know, a bunch of people working. And now let’s see what’s happening here. So the AI assistant, it is analyzing the voice, what I spoke in Hindi, and it is kind of, you know, trying to put that into the form, into structured data, for me to again go back, validate, and then submit it. So, you know, how it really helps is, let’s say I do not have the safety inspector’s lens or competences, but I still want to contribute and I want to report things.

This AI can really help you, you know, put things in, so you can get the perspective in the right structure and get the data into the system. Now let’s say, you know, I have reported this: I saw two people, they were doing something which was not really safe, and it was reported into the system. What’s next? The next step is for us to really understand why they were doing it, and that’s where the incident investigation comes into the picture. It is a process for the industry to look into what really happened and then understand the root cause behind it. And that’s something that we do here using the other AI that I want to talk about, which we call 5Y AI analysis. And 5Y is nothing but a way of looking into what exactly happened and why it really happened, and we keep asking the question, you know, as to why it happened. So in this example, you know, two people were working at height, they were not using any safety equipment; then the question would be why they were doing that. You know, were they not trained about it, or were they not really, you know, given that safety equipment, right?

So, this is how you look at all the different reasons which really contributed to that particular incident. Now, in this case, typically when we do it in a very traditional manner, what it needs is, you know, multiple people who have years of experience, and they collaborate. These are, you know, cross-collaborating teams with years of experience. And then they look at all these reasons. But in the absence of that kind of experience, this is where, again, AI can be used as a digital co-worker. So, in this case, the AI is helping me kind of articulate what really happened here, and then it will support the entire process of conducting a 5-Why analysis. So, the moment I click on suggest, it kind of opens up a separate form, takes into account everything that has been reported here from a context standpoint, and the moment I click on…

generate why statement, it will give me different branches, different options, which I, as a practitioner, as a supervisor, can really pick from and then conduct this analysis. And this is the process that I will kind of, you know, go and repeat until I reach that final why as well. So again, like I mentioned, the idea here is that even if someone does not have that kind of experience, they can use the LLM, the large language model, which is, you know, trained on the latest datasets. And that’s something that you can really use to, I would say, substitute for the experience part of it. Now, let’s say we have investigated this.

And now we need to also figure out what we do to really, I would say, prevent the recurrence of a similar incident, right? Two people were standing on a drum; they were doing something they were not supposed to do. We investigated it, we understood that, you know, maybe they were not trained, maybe they were not given the right kind of equipment. So, now we need to look at what should be done to really, I would say, correct that. Typically, when we talk about corrective preventive actions, there are different controls that we talk about, right? Not all controls are the same. There are certain controls which are more structured, more powerful. We call them, you know, we identify them as the hierarchy of controls.

So, in this example, when someone is working at height, the first type of control that someone would look at is elimination. Is there a way we can eliminate this risk altogether? If not, can we substitute it with something less hazardous, right? Instead of having two people climb the height, can we do it through, you know, maybe something else? Maybe we bring in a forklift, or maybe we bring in a scissor lift, and we do that activity accordingly. And then we talk about the engineering controls, and then the other administrative controls that we have. So many times what happens is, you know, when people are thinking about these controls, they don’t really have very structured thinking in identifying these controls.

That’s where we have this option, this AI agent, which looks into the details, and then, across the hierarchy of controls which should be applied, we can look at generating those different types of controls. And that’s what you are seeing here. It is giving me a very good first draft of the things that I should be doing for preventing the recurrence of similar observations, similar incidents here. So just to kind of recap, this is how AI can really help people in not just understanding the context of what they are seeing at the site from a risk perspective, but also look at understanding the root causes behind it and also come up with the corrective preventive actions.

Of course, it’s not, again, a replacement of a human, but it is a digital co-worker that you can have in your pocket and which can really guide you through the entire process that we have. Now, let’s look at the other example here. I think we spoke about fall from height, and the risk that you saw there is very, very evident, right? You saw two people who are standing at, you know, maybe three meter height, and there is a risk of them falling and, you know, sustaining a fracture. But there are other risks when you work in industry which are not so visible. And one of those risks is ergonomics risk, right?

It depends on, you know, what kind of activity you’re performing, right? What type of movement, the body movement, the manual material handling that you’re doing, and it creates a strain on your shoulder, on your backbone, and so on and so forth. Typically, when industries, you know, run these programs, they need people who are actually trained on these guidelines; some of these are called the REBA and NIOSH guidelines, and that’s where you need someone who is a certified ergonomist to really look at and identify those hazards. If you are at a remote site, if you do not have a certified or trained ergonomist, this is where the AI can be really helpful and powerful. All you need to do is take a video clip of that particular activity which is being done and then run it through this AI agent, and it can really help you identify all those risk points that you have.

So in this case, what you will notice here in this video is a person who is standing next to this conveyor, and his job here is to pick these boxes manually and place them back on this conveyor. So it might look, you know, like a very, very simple activity, but if you keep doing this for one hour, two hours, six hours, eight hours a day, there are a lot of risks that you are exposed to from an ergonomic standpoint. So if I just kind of run this video, you will notice that the Ergo AI agent is looking at all those pressure points and trying to kind of identify those risks, which you are not able to identify unless you have gone through that rigorous training of being an ergonomist.

Once it is done, you can also look at converting it and generating a quick report here. So the moment I click on summarize, it takes all those learnings, those analyses, and it is creating a ready-made output for me to kind of go and share with the relevant people. So this was an example of, I would say, ergonomics. Now, let’s also look at another example. And I think Naveen spoke about how we are kind of transitioning from having AI as a standalone functionality or feature to now looking at a concept where the AI functionality works in the entire ecosystem and focuses on autonomous action as well.

So it’s not just about the inside, but it is also about taking action on behalf of human, of course, within the certain defined guardrails that we have. The example that I’m showing you here is of a legal compliance. Typically, when you are in an industry, you need to go through multiple type of regulatory compliances that you need to report on. I’m taking one such example of a regulatory requirement from one of the steel industry and feeding this information to this particular AI. What it will do is look at consuming this entire information and it will then deconstruct it into different requirements that we have. And this is where you will see that the agent here has deconstructed it into almost 35.

Individual requirements that the industry is supposed to comply with. At a click of button, we can also take all of these requirements into a tool called compliance calendar, which is where these requirements can really be operationalized. Right. You can, of course, interact with this agent and, you know, ask specific questions or give specific, I would say, directions. Also, in this case, I’m asking you to do a quick synopsis also, as well as taking a quick way of auditing this entire activity. So this is where the single agent is kind of, you know, connected and working with multiple of the programs that you have within that defined ecosystem that we have. Now, the last piece and, you know, one of the most important piece that we wanted to share with you is all of these individuals.

Each of the AI components we saw processes a single record and tells you a story about that particular record. But what if we want to understand the overall trend, the story that all of these data points are telling us together? That is where RISC-AI comes into the picture. It processes every record logged into the system across different programs, whether an observation or an incident, and that is how it helps you identify patterns, trends and precursors of things that can go wrong. Going back to the example of Bhopal that Naveen used: there were of course many, many precursors before that incident happened, in terms of maintenance and safety culture, but all of them probably went unnoticed. This is where a system like RISC-AI is extremely powerful and helpful, because it lets you see the trend and take preventive action. The other thing it can do is help you visualize the different kinds of risk that you have in your organization.
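A minimal sketch of this cross-program aggregation idea, not the actual RISC-AI engine: the record shapes and precursor tags below are invented for illustration, assuming each logged record carries category tags that can be counted across programs.

```python
from collections import Counter
from itertools import chain

# Assumed record shape: each observation or incident carries precursor tags.
observations = [
    {"id": 1, "tags": ["deferred maintenance", "blocked exit"]},
    {"id": 2, "tags": ["deferred maintenance"]},
]
incidents = [
    {"id": 7, "tags": ["deferred maintenance", "alarm bypassed"]},
]

def precursor_trends(obs, inc, threshold=3):
    """Count precursor tags across both programs and flag those that recur."""
    tag_counts = Counter(chain.from_iterable(r["tags"] for r in obs + inc))
    return sorted(tag for tag, n in tag_counts.items() if n >= threshold)

flagged = precursor_trends(observations, incidents)  # → ["deferred maintenance"]
```

A single program's records would not cross the threshold here; only pooling observations and incidents surfaces the repeated precursor, which is the Bhopal-style point being made.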

So, using this chart, and let me just refresh it for a second: what you will be able to do here is use a mathematical model to assign a severity to each type of risk in your organization and visualize those on a heat map. On the x-axis I have the volume of records captured in each risk category, and on the y-axis I have the overall weighted risk score. To take one example, for slip-and-trip risk you will see that the record count is on the higher end; there are almost 75 records tagged to this category.

But its weighted risk score is comparatively low when we look at some of the other categories, such as fall from height, because the score also takes into account the inherent risk of that particular activity. So this is how it can generate a very powerful picture, helping you understand which areas you need to focus on from a prevention standpoint, and also give you a bit of predictive intelligence about what and where you should focus next, both across the different parts of your workplace and organization and across the different kinds of activities in the system. With that, I will now invite Naveen back on stage. Naveen, anything you want to add by way of closing?
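For illustration, the two heat-map axes described above (record volume vs. weighted risk score) might be computed like this. The session does not specify the actual model, so the severity weights and record counts are invented:

```python
from collections import Counter

# Assumed inherent-severity weights on an arbitrary 1-10 scale (illustrative).
INHERENT_SEVERITY = {"slip and trip": 1, "fall from height": 9}

# Illustrative records: many low-severity entries, fewer high-severity ones.
records = [{"category": "slip and trip"}] * 75 + [{"category": "fall from height"}] * 12

def risk_heatmap_points(records):
    """Return {category: (volume, weighted score)} for the chart's x and y axes."""
    counts = Counter(r["category"] for r in records)
    return {c: (n, n * INHERENT_SEVERITY.get(c, 1)) for c, n in counts.items()}

points = risk_heatmap_points(records)
# slip and trip: volume 75, score 75; fall from height: volume 12, score 108
```

Despite the much lower volume, fall-from-height ends up with the higher weighted score, which is the pattern the demo describes: count alone would mislead, so the score folds in inherent severity.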

Naveen GV

Thank you, Sundar. That was great. I had a few friends come up and comment on the demo, which I think they could relate to a lot more from an industry standpoint. Overall, as Benchmark Gen Street, our journey this year is going to focus on some of the things Sundar showed: autonomous agents doing a lot of the heavy lifting that used to be required of individuals, who earlier had to engage with the platform and type in all of the required details. With that, hopefully we have done some justice to helping you understand how AI can transform a function like safety. Look out for us over the next year or so as we make it a completely agentic platform.

So with that, ladies and gentlemen, thanks a lot for your time. We do have some time for questions if you have any; otherwise, we have a booth back in the room where we can have a much more personalized conversation if you are specifically interested. So thank you. Any questions, please let us know. All right. Thank you again. Bye.

Speaker 2

A bumblebee cannot fly, but it still does. The thing is, when this statement was made in the 1930s, we understood very little about aerodynamic design. By the 1980s and 1990s, more research came out and we realized that, okay, a bumblebee can truly fly, because its wing movement and body weight support a different mode of flight. That is what design does. And AI only understands what we know of design today, not what we can create with it tomorrow. Creativity is today's human advantage. AI can generate; AI cannot originate lived experiences. Context, culture, emotion, meaning and human intuition matter more than ever. Good design is not about a good drawing. Good design is about good solutioning.

Good design is not about beauty; good design is about good solutions. And good solutioning needs a good understanding of design. Hence I make a very provocative, bold statement today: resumes are going to die by 2030. Because the skills you may have learned, that I may have learned so far, may become irrelevant. AI will probably be able to do everything faster, better and at a much lower cost than us. So then what is the one skill that remains extremely, extremely important? Design and creativity. The workforce that we need five years from now, and I'm not even making a bold statement by saying ten or twenty years from now, is one that can think across disciplines, collaborate with machines rather than compete with them, communicate visually, learn continuously, adapt to contexts and cultures, build and shift fast, and adapt without fear.

Hence, being human becomes your advantage, even in the age of AI. And the way you become more human in your solutions is through the essence of imagination, which is creativity and design. Good afternoon; I welcome you all to this session hosted by CODE. And now I welcome my colleague Garima, who will invite all the panelists, and we'll continue the discussion. Thank you so much. My name is Magma Sree. Thank you.

Speaker 3

And we are proud to have Dr. Shweta Chaudhary, founder and director of CodeEDU and host of this session, a leader working at the intersection of creativity, learning design and future-ready education ecosystems. Thank you all for being here. And I would like to thank Dr. Shweta Chaudhary for her time.

Shweta Chaudhary

Hello friends, and thank you for being here with CODE, the Centre for Originality, Design and Expression. Why do we need it? I am thankful to Umang for setting the stage: yes, the bumblebee can still fly. So what is it that, in the age of AI, will keep all of us the way we are, human? I am privileged to have with me an august panel whose members have been through various walks of life: as students of esteemed institutions; as workers, colleagues and administrators in institutions of high repute and in public administration; and as founders who have struggled, evolved and built systems that handle talent at a large scale. So let us hear from them what it means to them: why and how human intelligence will stay, should stay, has to stay in the age of artificial intelligence.

I would prefer to begin with sir from Sunstone. Puneet here; Piyush here, sorry. Piyush sir, what do these words creativity, cognition and culture mean to you as a person? That's the best way to introduce ourselves; the rest, ChatGPT can tell about us. This, it cannot.

Piyush Nangru

I think these are all pillars which define any human being. Whenever we talk about Viksit Bharat, we might have things like GDP coming to mind, but at the core of it is human capital. So whether it is us personally or us as a nation, creativity, cognition and culture will always be the key pillars. As the founder of a tech school, I can tell you that today coding is no longer a skill; it's table stakes. It is how you apply it, how you build solutions with it, that creativity counts. You can ask the system any question: write me an essay, create this for me, and so on.

Give me these points, you might say. But if you are not prompting the system again, you are not challenging your cognition; you are not challenging your thinking process. And thirdly, culture. We have a 5,000-year-old heritage to live with: more than 22 languages and innumerable dialects. To be able to take that along, to understand those nuances, is also very important and not to be ignored in this AI-led world. It is quite evident that AI is shaking up a lot of things and will shake up a lot more. But the power of creativity, cognition and culture is what distinguishes what is human-led from what is machine-led.

Shweta Chaudhary

Perfect. So it's not about the countries or continents fighting over who owns it; it's about the human beings of those countries and continents who will own it. So let's keep ourselves intact and keep our humanness. Thank you, sir, for that take. Let's hear from sir, who comes with a background in public administration. Sir, how do creativity, cognition and culture keep you intact in this setup, or how do they figure in your ecosystem?

Speaker 4

Thank you very much. First of all, I thank CODE for organizing this beautiful session; the kind of passion they displayed in insisting that I come here is laudable, and that's why I am here. I would also like to tell everyone that I liked what Umang said in the beginning very much; he presented it beautifully and briefly, and I told him so personally. So I'll tell you why creativity, cognition and culture matter. I was going around the stalls; I also came yesterday for something else. This floor has a lot of government ministries, so I was trying to find an officer from any ministry. I didn't find one, except at the Ministry of Skill Development stall, where I met a lot of youngsters, consultants and other people. I asked them: are you worried about AI, or are you happy about AI? More often than not, the answer was: we are worried, sir. Across the spectrum, I try to meet a lot of people, talk to them and engage them; it's great fun and great learning, actually. And I found that there is a lot of fear about AI, and that people are actually not very clear about what kind of changes AI will bring. In this kind of fear and anxiety, we should not forget that our originality, our USPs as human beings, will matter much more in the world of AI than they do today.

Because AI is what the data tells the tool or the bot to give us. If the quality of the data is not right, if the data is not reliable, the results will not be reliable. One of my friends said: sir, AI is not like asking a vendor for a solution, where he hands it over and goes away, the way an IT vendor used to deliver software: install the computer and you're done. It's a continuous engagement to improve the results, because the AI bot will keep improving its own results as it learns about different habits, skin colours and languages. I think that will matter much more in the future. Thank you.

Shweta Chaudhary

A very beautiful take from sir. The most resilient thing, amongst all the crowds and all the stages that are set up, is us. What differentiates us is our originality; yes, and that is to be kept intact. Coming to a very beautiful solution, I would say, or innovation in education, from a university which is itself founded on a very innovative format of education: I would request sir to give his definition of creativity and cognition.

Ashish Gupta

Yeah, thank you for this opportunity. As an educator, we have jumped into this new term called the orange economy. We have seen oranges, but we had not seen an orange economy; a new terminology came along that defines how the creative, the cognitive and culture merge together to define the human being. I represent South Asian University, the first university in the world set up by the SAARC nations, where students come from all eight countries. People come to my university from Nepal, from Afghanistan, from different destinations. We represent Asia. So within Asia, are we the same in cognitive thinking? Within Asia, are we sharing the same culture?

So, culture to what extent? The same culture, or are we different in culture? When we say creative, are Indians more creative than the neighbourhood, or the neighbourhood more creative than Indians? With this international perspective at my institution, I engage with many students and critically evaluate what each does better. As an educator in the age of AI, we are already immersed in it: people have already started using AI, and students are already using it for different tasks. So when a student comes to me and says, sir, I have done this work, the first thing I ask is: show me the prompt. Because it is not your skill if the assignment or the report was done by GPT. As was said, the skill is how you define and redefine your code, how you build the skill to understand the code; coding is not a new thing now, and a lot of training already happens. But what remains with us is the creative human being: how you apply your creative brain. You can't beat technology; in my personal opinion, technology always assists you, brings efficiency to you, supports you, but decision-making always lies in the human brain. Are we using the technology responsibly? Are we using it ethically? There is so much concern; as an educator, I have to train my new human resource. And I believe this new orange economy will give a lot of opportunity in the time to come. To people who feel their job may be displaced: I am sure some new job will come that you will have to learn; you have to survive and you have to adapt to change in the AI era.

So that’s my perspective as an educator.

Shweta Chaudhary

Yes, sir, a beautiful perspective, in which he says that culture, cognition and creativity bind us as Asia. That's a very beautiful definition: it distinguishes us from the Western world, tells us that this is what keeps us together, and that what will keep us going is cognition. Thank you, sir. Coming to Satya sir: an IIT student who got into the technical field of engineering and then came to administration. All these numbers and domains, or I would say backgrounds, can sound very redundant and boring. So how do creativity, cognition and culture still keep you going?

Speaker 4

Thank you so much, Shweta. Thank you very much, panelists. And I'm glad this question is being asked. Let me describe the way we are sitting here in this hall. After every hour, this hall changes: the audience, the speaker, everyone. The setup is standard; the same standard setup is provided for all types of stakeholders, global or otherwise. But the creativity that we, or the hosts, put into it means that after every hour it changes. The kind of cognitive inputs and discussions will be different, raising the level of the discussion and its reception among all stakeholders. And in a similar manner, the culture on both sides, the speaker's side, the audience's side, the host's side, will be literally different. So, to answer this question: it is very subjective. With respect to AI, artificial intelligence will always be artificial, and human intervention, human inputs, human creativity, cognition and culture will always surpass it. I agree with Professor Ashish that the right prompt and expertise in a particular field will always prevail. That's why, as I was discussing with Upade sir: everyone is scared, children are scared. I say there is no need to be scared of these things; in fact, their command of artificial intelligence will be very good, and they will be able to shape it through their own inputs. So there is nothing to worry about.

Shweta Chaudhary

So, nothing to worry about, friends. The walls are going to remain the same; we are the emotions within those walls, and that is what is going to make the difference. So let's keep the emotions and the humanness intact. I am happy to say that all my panelists strongly believe human intelligence will supersede artificial intelligence. Let's turn to the audience: does anyone feel that artificial intelligence is going to top us and humans are going to stay behind, or are we all on the same page? Let's start with you, audience. Any take on this: human intelligence or artificial, what comes first? All agree with the panel? Yes sir, please.

Audience

You mean to say there will be a timeline after which human intelligence will cease to supersede? It's a time-bound proposition, basically: as AI improves, is there a timeline, after a certain number of years, when AI will be better than human intelligence?

Shweta Chaudhary

Please tell us your good name. So, Saurav asks whether there is a timeline to it.

Piyush Nangru

I think there is certainly merit to the line of argument you are making. As AI becomes more and more intelligent, our education system is continuously under stress; it is being tested, a stress test happening all the time. What's happening is that the shelf life of hard skills is really diminishing. Earlier, I could hold on to a skill and carry my whole career with it; then it came down to 20 years, and now it's a matter of a couple of years, three or four. So the shelf life of hard skills is really shrinking. But where we need to focus is not only on making. Earlier we used to say: don't just learn, make things.

Now it's not only about making things. You have to understand the meaning of it; you have to apply it. You could say that not only artificial intelligence but applied intelligence is where humans are really going to be. Because I can tell my students to code, and they can make a chatbot, right? But can that chatbot tell, let's say, a farmer in MP whether he will be able to sell his produce at a good price or not? So the application of it, the solutioning of it, will matter. And as for the timeline, there is no easy answer right now, but that's a direction we can all at least agree on and expand further.

Shweta Chaudhary

Okay, thank you. Maybe that's one of the reasons we all talk about the fear: is there a timeline, or will resilience carry us forward? Will humans become smarter, or will artificial intelligence become smarter? It's a question our generation is going to see through. So let's keep our fingers crossed that we remain the smarter ones, as the panel says. Many of the young faces on that side don't even want to answer, because they are waiting to tell us that each next generation is the smarter one.

Speaker 3

Yeah, so actually, as I was mentioning, I was interacting with a lot of youngsters, and the overwhelming feeling I got from all of them, apart from fear, is that everybody is very unclear and unsure about how AI will shape the world in the future. He asked about a timeline, which could be 10 years or 20 years; I don't know how many years, even I don't know. None of us is sure how things will actually unfold as AI systems become smarter and smarter and data becomes stronger and stronger. So there is a lingering fear in everybody about what the impact will be as it unfolds, because we actually don't know.

And there are no mathematical models which can predict how things will unfold. But as an administrator, as a public policy person, if somebody asks me a simple question today, what should be the purpose of having AI, I would say this: when I go to a village, I find a lot of people sitting without any work. They don't have any money in their pockets; they are all looking for some kind of employment; and they are not well educated, because they were poorly educated in the village school. A lot of absenteeism happens in government schools in rural areas; people don't go, as all of you know. So for me, if I talk about AI in education, I should be able to use an AI bot or AI tool to examine a person's background very quickly and find out what skill I can best give him, so that he or she can fend for himself or herself and have a decent job. I don't think of anything bigger than this, because there is an army upon army of people who have no job.

And that scares me more than the AI, because in the future, if so many people in such a big country have no work, it may lead to social imbalances and problems. For me, AI should be able to do this, and AI actually does it: in a class of 50 or 100 students, with the help of AI we can find out each boy's and each girl's learning abilities and how quickly they can learn, and then design programs for them, which a single human teacher cannot do. And anything we do with our hands, for example plumbing, repairing a vehicle, or any hardware kind of work, AI cannot do.

It has to be done by hand. Robots may do it one day, maybe, but when will that time come? Again, I don't know. So these are my takes as far as our country, or South Asia, is concerned. Today we have neighbours, Nepal, Bangladesh, Sri Lanka, Pakistan; we are sitting in this neighbourhood, and as we have seen in the past, if any one of the neighbours is disturbed, the country gets disturbed. So in our own interest, and since there are so many people all around South Asia, if people can get some kind of job suited to their abilities with the help of AI, that would be the best application.

Shweta Chaudhary

So friends, yes, that's an important take: where will human capital go, what will count as human capital, and how do you keep it sustained? Yes, ma'am.

Audience

I want to know: in a developing India, for us youngsters from 18 to 25, it is very easy to search for things on YouTube or on ChatGPT. But what about our parents? What about the kids who are now two or three years old and are learning while depending on ChatGPT? My parents are afraid of ChatGPT: what can happen with AI, how can AI be used for fraud? When are we going to teach them how to use it? We are 140 crore people; when will all of them be trained to use AI?

Shweta Chaudhary

I would just say: how did you teach them to use Instagram? They haven't fully learned yet; people are still afraid. But the easier it gets, the easier adoption becomes; it's all about intent. Education requires intent, for sure. If you have the intent, then first your mother is your teacher, and then the technology trains you to get there. We will take this question up again with our panelists, and also discuss human capital per se: what is the human capital that is scaring us, when will we be able to train it, and how will we build this human capital going forward? So sir, as a professor, how do you look at professors becoming better with AI?

Or what’s your take on human capital of educators?

Ashish Gupta

Yeah, so this is an educator's dilemma: to what extent do we support the use of AI, and more importantly, its ethical and responsible use? The question was: when do we have to teach people to use AI? As educators, do we have to start from school, from college, or when they reach higher education? The most important foundation comes from school. If you look at kids now, they are gadget-friendly: they have tablets at home, IPTV at home, mobiles at home. The kid has WhatsApp too, and operates it on his own. The kid has his own school group.

Teachers put topics in it. The kid asks Meta AI: I want a 100-word article on this topic; Meta AI writes it out, and he copies it from there, puts it in his notebook, and submits it at school. Learning, actually, is not the challenge. People will learn: by default, by training, and through pressure, because in a high-performance work environment you will have to learn the technology. The problem in school is how much of our own ability we actually use; that is cognitive thinking. When the teacher says this assignment has to be done, has the student used his own brain?

He simply copies the assignment into Meta AI, or into GPT. He does not make the effort we used to make in our time, when we searched by ourselves, opened the book by ourselves, made the notes by ourselves; he does all the work in GPT. So the question becomes: to what extent does cognitive skill remain strong in the market? Learning itself is not a challenge. The Government of India is also taking a lot of initiatives through digital skilling; several portals have been created where any citizen of India can go and register. In fact, I registered only yesterday for AI readiness,

to learn how I can become AI-ready. I know something of AI, but I may not be perfect at using it; that is my ability to learn fast. The government has launched such programs through the digital skilling portal, where it wants to give its citizens that training for free. A second perspective is the New Education Policy: the government is constantly trying to rework and re-look at the policy. But again the challenge comes: are our schools ready for AI training? AI needs infrastructure; we need AI labs. So people are willing to learn; there is no resistance to learning, but the question is how to support it. And more importantly, I always emphasize the ethical and responsible use of AI.

One more example. A few days ago, people started creating Ghibli-style images of themselves. Who taught them? Millions of images were created: by families, housewives, homemakers, of cooking, of restaurants. Everyone put one as their WhatsApp DP. At that time, we didn't think about privacy, about where that photo would go; instead, we thought about the creative images being made. Now people ask GPT to create a caricature, and ChatGPT will create whatever you ask. Who taught us? We learned it from one another: a friend told us, YouTube told us, ChatGPT itself taught us how to create a caricature. So indeed, the human capability to adapt is an important aspect.

Audience

Hi, thank you so much for these valuable inputs. The government is there, universities are there. My question is: how are we thinking about rural areas? There are already many universities and colleges, but still, after 12th, first-year, second-year and final-year candidates are not getting much knowledge of what is going on in AI. People don't even know what is to be studied in AI; it is not clear yet. How does AI work? How do AI engineering, feature engineering, data science and data analytics actually work? Urban areas are fine, but how will this reach rural areas?

I belong to a town, Murtujapur, whose name is hardly known. There are children there who want to learn but don't have money. So what is the government planning for them? The universities are fine, everything is best, but how can they shape these underprivileged candidates from economically backward classes?

Shweta Chaudhary

So, yes, please. That's a very beautiful thought, and in a country like ours, where diversity is huge, we need to understand that not everyone has the same access. We would also like to hear from the panelists sitting on both sides of me: one coming from the government, and one who has built education over a decade and seen India grow with it. What is the systematic approach, and what is the process, to make all of us understand that this is new but will not stay new; that when efforts continue, we reach somewhere? So sir, take a portal like GeM, the Government e-Marketplace, where you have been part of the system from day one of inception till today. How do people come to trust a government initiative, that it will reach everyone and become everyone's kitty?

Speaker 4

Any person who has a GST number can onboard and register on it, and participate in the government's procurement of various products and services. And, as madam is asking, to validate and recognize them: all vendors with products or services can be onboarded as reliable sources. We properly do vendor verification, and after that we onboard their catalogue. If the government is to purchase those products and services, the tender process can be run there, or they can be bought directly through the marketplace. So, if I am taking your question correctly: whatever products and services AI entrepreneurs build, the government can procure them through the proper channel. When GeM started in 2016, we would not have thought we would reach this level, where the smallest manufacturer or vendor, from the rural areas, the urban areas, every part of the country, can become part of a buy-and-sell platform of that scale. So it takes time, for sure. Technology integration and adoption, and making it part of the mainstream, is a process and a journey. Let's hear it from Piyush. Piyush, what do you say to this question, at such a diversified scale?

Piyush Nangru

So I think AI is in itself a very big democratising tool. What do we need for learning? Self-motivation, and a medium, someone or something to teach us and explain things to us. Now that part of the problem is solved by AI. If you have the motivation, and assuming, as we all know, that the internet is everywhere now, specifically in tier-3 towns and rural areas, then anything and everything can be learned, and anything and everything can be built. As a trend, you will see a lot of solopreneurs, single-person small setups, because you can build the website yourself, make the creatives yourself, write the marketing content yourself. It is now more empowering. So I think it is only more democratising and more empowering for rural India: you can build things of your own, you can have aspirations which earlier needed a lot more resources and are now possible.

Speaker 4

I would like to answer as a parent. I have two daughters. My elder daughter is at NID Ahmedabad, and in creativity NID Ahmedabad is quite good; my younger daughter is in 11th class and is a sports player. I asked my elder daughter how she is going to deal with AI. She said: I don't have to "deal" with it. I want to tell you her answer, because it shows how the children are thinking. People say our curiosity will be reduced, that professors will be finished. She said: I don't need to be afraid. I am studying; I will learn the subject first. As long as this tool exists, it will be there for all of us.

"What I have to understand is how to use that tool, and unless I study the subject well and understand it well, the tool is of no benefit." So she sees no reason to fear what is happening. And my younger daughter: her school, as you said, sir, also teaches the responsible and ethical use of AI. I once asked her how she makes her notes these days. She said a lot of the children in her school use ChatGPT. I asked, "Don't you use it?" She said, "If I use it, I won't be able to use my own mind." That was her answer. So, adding to what the professor has said, we have to educate children in the ethical and responsible use of such tools, not limited to AI.

Ashish Gupta

Perfect, sir. I would like to add to his question. The rural-urban divide has always been a challenge in India, right? I remember when the internet launched around 2000; the perception was that it would only ever reach the metros and tier-1 cities. Now look at its penetration: the internet has reached the villages, because the government created that infrastructure and strengthened the telecom companies enough to reach that market and offer data at affordable rates. Today our rural population plays a significant role in the economy, in business, including small business. And I agree that education is a fundamental right, including for students who live in the remotest parts of India. AI, too, needs that infrastructure, and AI will also need time to scale up. The way the internet penetrated slowly, AI will also assimilate gradually into school studies, into courses, into curricula, into labs. We should always look at where a technology comes from, where it is made and how it diffuses: technology does not always reach every place immediately; often there is a systematic movement of technology from one place to another. One more thing: should we fear it?

It is better to learn; we should not be afraid of whatever challenges us. The better approach is to look at what challenges us, adapt to it, and find new solutions: how to bypass it, how to surpass it, how to compete head to head with the technology. Saurabh raised a question: with time, will technology become so strong that it surprises human beings? Perhaps it could. But consider this: only when you, as a human being, feed the LLM the possible situations of a cricket shot can ChatGPT tell you which unique shot to play. You have to feed the LLM first; otherwise, how would it know? The large language model first has to understand what the human is thinking; only then does it work in the background. For the answer to be correct, the question has to be correct, and the question has to come from the audience, from the people. That is how we take it forward.

Audience

Yes sir, namaste. Thank you for a wonderful session; I joined late, as the other venue was a little far. For India to become the India of the world: we talk of a 4 trillion dollar economy, 7.3 trillion by 2030. It could easily be 10 trillion if we start taxing our people abroad, which we are not doing. So, if any of you are connected with policy makers: suppose I travel to the UK, work there and pay 40% tax in the UK; then a third of that 40% should come back to India if I want to retain my Indian citizenship. We would become a 10 trillion dollar economy just by this. If Donald Trump can play this game, we can play it too. That is question number one, if anyone has a comment on it. Question number two: someone mentioned motivation for education and the medium for it. Motivation comes and goes. I would like to hear from all of you on the power of confidence: does the stereotyped education format that we have today enhance confidence, or does it kill it?

Because I meet so many students across the country. Last week I was in Hubli and in Belgaum; I travel to the depths of the country. You meet students who are fabulous. People say we have Gen Z right now; I think we have Gen X, Gen Y and Gen Z among the same 25-year-olds, depending on the geography they are in. Fabulous students, but low on confidence, because they are not from the city. So what does the education system do to multiply and enhance this confidence? If confidence increases, it will take care of everything else.

Piyush Nangru

So I'll take the second question. I think, as an education system, we have to move towards a more inclusive system: from learning, to creating, to applying. When we create, when we make things, there is a different kind of dopamine release, and it also gives you confidence: I have built a working prototype. Right now, the education system by and large does not really support creating things; it is about learning things. And today's discussion, the part you joined late for, was that even this is not sufficient now. We not only need to create, we also need to apply it: is it useful? Fine, this thing exists and it works.

But is it useful for someone? That is the next level, and with AI coming in, that is where we need to be. By and large, to answer your question, I think confidence really comes from creating, not just from learning; confidence is the key. So if we give students more and more opportunities to create and to build, across the board in education, across programs, and right from K-12, not only in higher education, then more and more creating is what will really instill that confidence.

Shweta Chaudhary

Thank you, Piyush. We would say that, yes, there was a time when knowledge was the source of confidence: I know it, therefore I am confident. Today it is: because I can build it, I am confident. So, friends, we are moving from the age of knowledge to the age of cognition, from the age of knowing something to the age of creating something. That is what we are here to discuss: it is not just artificial intelligence that is going to take us forward, but our collective cognitive ability that is going to carry Viksit Bharat, India, and all of us, forward. Let us keep that layer intact: our context, our culture, our creativity.

With that, friends, I thank my panelists for being here with us. The floor will now be open for discussion, and we also have a team here for a product demo, along with a few of our friends joining hands to take creativity and cognition forward in education. Thank you all for being such great listeners; please stay with us for the next part of this session, the product demo. May I request a group picture with the panelists? Can we have a group picture?

Friends, this is the inaugural unveiling of one of our products. May I request the team to come forward: Ajay Rivalia sir, Viplav sir, Nanda ji, Garima ma'am, Mansi, and Vijay sir, may we please have you here. Yes, we have discussed creativity and cognition; this is going to be the tomorrow, and this is something that is going to keep all of us intact. We are the torch-bearers of it, and we present to you a product which is going to make it better and educate all of us for creativity and cognition. May I request our mentor Unkar sir to please join us. Thank you, friends.

And now put your hands together for a product demo of the AI-led education platform, brought to all of us.

Speaker 5

…becomes unique, adaptive and future-ready. Powered by machine learning, LLMs and agentic AI, the platform intelligently maps growth, interests and creative potential. The platform fosters mentorship, discovery and meaningful skill development, from recommended courses to resource hubs and spotlight mentors. ENCODE, the creative learning network. Create, connect, collaborate. Shaping personalized journeys for the creators of tomorrow. In a rapidly evolving world shaped by AI, creativity, cognition and collaboration are the new foundations of learning, where the creative learning network meets the future of intelligent education. This is not static learning; it is dynamic, responsive and continuously evolving with the learner. At the AI Impact Summit Bharat 2026, learners, educators and innovators engage with ENCODE's live ecosystem: they can explore domains, interact with creative pathways and experience how technology and creativity converge. Design the world you want to grow in.

A philosophy that places creativity, exploration and individuality at the core of education. With an intuitive interface and curated experiences, ENCODE enables learners to discover, engage and progress at their own pace.

Shweta Chaudhary

Thank you. Thank you. May I please request a statement from each of you: how do you take this forward alongside your current work? How will you add it as a creativity layer to your systems?

Piyush Nangru

No, I think this is what will separate a graduate from a real-world professional, because we really need this layer; beyond it, everything rote, everything monotonous is going to be taken up. One gentleman asked about the timeline; that question is very real, and I think partnerships like these will really help us future-proof our students. So, really looking forward to it.

Shweta Chaudhary

Thank you, sir. We also have our team from MEC Connect, The Next Gen Academy, joining hands with us from across the borders. Thank you.

Speaker 6

So, we are proud to contribute to this academia-industry partnership by bringing design-oriented courses to our coding students. Our focus is that our students should not only learn coding; they should also understand design thinking and digital thinking, and apply all of this in product development. We want to make them entrepreneurs. So this product is definitely going to help us a lot. Thank you.

Shweta Chaudhary

Thank you, sir. We have a strong education partner with us called Nimbus. Yes, learning is already there with the academic institution, and this entire presentation was really great. One of the things I saw here is that, along with learning, accessibility should be there, and we have solved the problem of accessibility. With CodeEDU, I believe collaboration, providing next-gen courses, and exploring, connecting and building a network with developers will definitely help students become industry-ready and really do wonders in this area.

Speaker 1

Thank you, sir. On MEC Connect: I am happy to see the product; to be frank, we are very excited as well. As she said, we will be taking it abroad; we have a platform of students, so we will definitely be taking it forward and joining hands with them on this. Thank you. So, thanking our partners, may I request Ajay Rivalia sir and Viplav sir to please come forward to mark this milestone. We have good education partners with us who plan to take us forward, not just across the country but across the continents, and to make our intent stronger with an MoU: that together we stand to make education more meaningful for the Viksit Bharat to come. May we have a picture to document this? All of us are overwhelmed to stand on a stage which the government has provided us, so we want to be a part of this milestone.

Thank you. Thank you. Piyush sir, Gyan Prakash sir, please come on stage. Thank you. With this I will conclude the session. I hope everyone enjoyed this insightful and wonderful session, and that everyone agrees with this: AI may automate ecosystems and systems, but creativity determines direction. Thank you so much, everyone.

Related Resources — knowledge base sources related to the discussion topics (32)
Factual Notes — claims verified against the Diplo knowledge base (5)
Confirmed (high confidence)

“Benchmark Gen Street has a three‑decade history of digitising environment, health, safety and sustainability for roughly 450 global subscribers and eight million daily users.”

The knowledge base states that Benchmark Gen Street has been digitizing environment health and safety for 30 years, working with 450 global subscribers and 8 million users, confirming the report’s figures.

Confirmed (high confidence)

“The company has identified about 75 distinct AI use cases and is re‑architecting its SaaS platform into an “AI‑first” solution.”

Both S1 and S2 note that Benchmark Gen Street has developed 75 AI use cases and is transforming its SaaS system to be AI‑first, corroborating the claim.

Additional Context (medium confidence)

“The first AI agent demonstrated is an observation‑reporting tool (referred to as “Jenny AI”/“Genie AI”) that lets workers scan a QR code or upload a photo to auto‑populate an observation form.”

The knowledge base mentions an “observation reporting” program used for engaging people in reporting (see S32), confirming the existence of such a tool, though it does not reference the specific names Jenny AI or Genie AI.

Additional Context (medium confidence)

“AI agents act as digital co‑workers, accelerating routine data capture but still requiring human validation, especially when broader context is missing.”

S46 and S25 discuss the “context gap” and the need for human oversight when AI agents lack sufficient information, providing additional nuance to the report’s statement about human validation.

Additional Context (low confidence)

“Benchmark Gen Street is moving toward “agentify‑ing” many of its AI use cases so autonomous agents can perform heavy‑lifting previously done by humans.”

S112 describes autonomous AI agents as the next phase of enterprise automation, supporting the notion that Benchmark Gen Street is adopting agentic automation for complex tasks.

External Sources (125)
S1
AI for Safer Workplaces & Smarter Industries Transforming Risk into Real-Time Intelligence — Naveen GV: out a long, lengthy form of information for that to be processed much later by another human…
S2
AI for Safer Workplaces & Smarter Industries_ Transforming Risk into Real-Time Intelligence — Speakers:Ashish Gupta, Piyush Nangru Speakers:Audience (Saurav), Piyush Nangru, Speaker 4 Speakers:Naveen GV, Piyush N…
S3
Keynote-Martin Schroeter — -Speaker 1: Role/Title: Not specified, Area of expertise: Not specified (appears to be an event moderator or host introd…
S4
Responsible AI for Children Safe Playful and Empowering Learning — -Speaker 1: Role/title not specified – appears to be a student or child participant in educational videos/demonstrations…
S5
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Vijay Shekar Sharma Paytm — -Speaker 1: Role/Title: Not mentioned, Area of expertise: Not mentioned (appears to be an event host or moderator introd…
S6
Knowledge Café: Youth building the digital future – WSIS+20 Review and Beyond 2025 — – **Speaker 5** – Role/expertise not specified Speaker 5: Sure. So what we talked about as a group is we discussed this…
S9
Building the Workforce_ AI for Viksit Bharat 2047 — -Speaker 1- Role/Title: Not specified, Area of expertise: Not specified -Speaker 3- Role/Title: Not specified, Area of …
S10
S12
Keynote_ 2030 – The Rise of an AI Storytelling Civilization _ India AI Impact Summit — -Speaker 2: Role appears to be event moderator or host. Area of expertise and specific title not mentioned.
S13
AI Impact Summit 2026: Global Ministerial Discussions on Inclusive AI Development — -Speaker 1- Role/title not specified (appears to be a moderator/participant) -Speaker 2- Role/title not specified (appe…
S14
Policy Network on Artificial Intelligence | IGF 2023 — Moderator 2, Affiliation 2 Speaker 1, Affiliation 1 Speaker 2, Affiliation 2
S15
Responsible AI for Children Safe Playful and Empowering Learning — -Speaker 1: Role/title not specified – appears to be a student or child participant in educational videos/demonstrations…
S16
S17
Press Briefing by HMIT Ashwani Vaishnav on AI Impact Summit 2026 l Day 5 — -Speaker 4: Role/title not mentioned (made a brief interjection during the session)
S18
Educating for Viksit Bharat_ Why Creativity Cognition & Culture Matter — Professor Ashish Gupta from South Asian University, established by SAARC nations, brought an international perspective f…
S20
AI for Safer Workplaces & Smarter Industries_ Transforming Risk into Real-Time Intelligence — Speakers:Ashish Gupta, Piyush Nangru Speakers:Audience member, Ashish Gupta Speakers:Naveen GV, Piyush Nangru, Speaker…
S21
AI for Safer Workplaces & Smarter Industries Transforming Risk into Real-Time Intelligence — -Chandan: Colleague of Naveen GV who was mentioned to take over the presentation but appears to be the same person refer…
S22
AI for Safer Workplaces & Smarter Industries_ Transforming Risk into Real-Time Intelligence — Speakers:Naveen GV, Piyush Nangru, Speaker 4, Ashish Gupta, Speaker 2, Shweta Chaudhary Speakers:Naveen GV, Speaker 1 …
S23
Keynote-Martin Schroeter — -Speaker 1: Role/Title: Not specified, Area of expertise: Not specified (appears to be an event moderator or host introd…
S24
Agenda item 6 — – Providing ongoing training for CERT team members, keeping them informed of new threats and defensive tactics. – Streng…
S26
WS #280 the DNS Trust Horizon Safeguarding Digital Identity — – **Audience** – Individual from Senegal named Yuv (role/title not specified)
S27
Building the Workforce_ AI for Viksit Bharat 2047 — -Audience- Role/Title: Professor Charu from Indian Institute of Public Administration (one identified audience member), …
S28
Nri Collaborative Session Navigating Global Cyber Threats Via Local Practices — – **Audience** – Dr. Nazar (specific role/title not clearly mentioned)
S29
AI for Safer Workplaces & Smarter Industries Transforming Risk into Real-Time Intelligence — -Shweta Chaudhary: Dr. Shweta Chaudhary, founder and director of CodeEDU, host of the session, leader working at interse…
S30
AI for Safer Workplaces & Smarter Industries_ Transforming Risk into Real-Time Intelligence — Evidence:Conclusion drawn from the entire panel discussion and the launch of ENCODE platform, which focuses on creativit…
S31
Keynote by Naveen Tewari Founder & CEO, inMobi India AI Impact Summit — This appears to be a keynote presentation rather than an interactive discussion, with Naveen Tewari as the sole substant…
S32
https://app.faicon.ai/ai-impact-summit-2026/ai-for-safer-workplaces-smarter-industries_-transforming-risk-into-real-time-intelligence — But now let’s look at what the transformative way of looking into these hazards and risks. So I will go online. What you…
S33
Ethics in the Age of AI — The need to preserve traditional forms of interaction and learning is also brought up. The analysis suggests that apps a…
S34
Challenging the status quo of AI security — Babak Hodjat: Thank you very much, Sounil. Yeah, we came out here for two reasons, as cognizant, one, to get people invo…
S35
WS #110 AI Innovation Responsible Development Ethical Imperatives — Daisy Selematsela: Thank you. I just want to highlight on issues faced by academic libraries when we look at the integra…
S36
[Parliamentary Session 3] Researching at the frontier: Insights from the private sector in developing large-scale AI systems — Basma Ammari: I mean, this was every time there’s a tech revolution, historically, we do see, you know, loss of jobs, …
S37
NRIs MAIN SESSION: DATA GOVERNANCE — Additionally, they advocate for public forums to provide opportunities for users to give feedback, thus enhancing data q…
S38
AI: The Great Equaliser? — Transparency and quality of information are essential
S39
Open Forum #8 AFRICAN UNION OPEN FORUM 2024 — Speaker 4: that. Yes. So I would like to start by thanking director UNU Macau. Definitely at the African Union we valu…
S40
Bridging the AI innovation gap — LJ Rich: to invite our opening keynote. It’s a pleasure to invite to the stage the director of the Telecommunications St…
S41
Comprehensive Discussion Report: The Future of Artificial General Intelligence — The session examined critical questions surrounding the timeline for achieving Artificial General Intelligence (AGI) and…
S42
“Re” Generative AI: Using Artificial and Human Intelligence in tandem for innovation — Audience:Atomic bombs. Yeah. Well, that one they asked pretty early. Yeah. What I’m saying is, I think that AI is like a…
S43
Transforming Rural Governance Through AI: India’s Journey Towards Inclusive Digital Democracy — Absolutely. And if AI tools like Praman and Sabha Sar and, you know, Pancham can help that strengthen, what best, you kn…
S44
Science AI & Innovation_ India–Japan Collaboration Showcase — Yeah, I think I think sort of agree to what everybody has talked about. I think with AI and the smartphone and we are on…
S45
Ethical principles for the use of AI in cybersecurity | IGF 2023 WS #33 — Anastasiya Kozakova:Thank you very much. It’s a pleasure to be here. I represent the civil society organization. I work …
S46
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Jeetu Patel President and Chief Product Officer Cisco Inc — The Context GapThe second constraint centres on the context gap, which Patel illustrated through a compelling medical an…
S47
WS #288 An AI Policy Research Roadmap for Evidence-Based AI Policy — This observation added practical complexity to the discussion and demonstrated how theoretical policy frameworks can hav…
S48
AI and human creativity: Who should hold the brush? — Economic structures that value human creativity:If AI can flood the market with ‘good enough’ content at minimal cost, w…
S49
AI Transformation in Practice_ Insights from India’s Consulting Leaders — Shetty made a philosophical point about AI’s limitations, noting that AI is based on past inferences: “AI couldn’t have …
S50
Invest India Fireside Chat — Discussion point:Education and Future Learning Models
S51
Welfare for All Ensuring Equitable AI in the Worlds Democracies — Amanda describes Microsoft’s ambitious scaling of their skills development program in India, doubling their original com…
S52
Tailored AI agents improve work output—at a social cost — AI agents cansignificantly improve workplace productivitywhen tailored to individual personality types, according to new…
S53
Agentic AI in Focus Opportunities Risks and Governance — “We want standards.”[2]. “So we’re talking about standards.”[4]. “We’re talking about technical benchmarks.”[31]. “Don’t…
S54
Press Conference: Closing the AI Access Gap — Moreover, the speakers argue that AI can drive productivity, creativity, and overall economic growth. It has the capacit…
S55
Enhancing rather than replacing humanity with AI — People’s judgment remains crucial, particularly for decisions that involve values, context, or individual circumstances.
S56
Educating for Viksit Bharat_ Why Creativity Cognition & Culture Matter — Human intelligence will remain superior to artificial intelligence because creativity, cognition, and culture are unique…
S57
AI for Safer Workplaces & Smarter Industries Transforming Risk into Real-Time Intelligence — Creativity, cognition, and culture are key pillars that define human beings and will remain crucial differentiators
S58
AI and human creativity: Who should hold the brush? — For many established artists, AI has also become a collaborator rather than a threat. It can generate early concepts to …
S59
AI and the moral compass: What we can do vs what we should do — If technology can perform both creative and physical labour, what remains distinctly human is not the task itself, but t…
S60
Open Forum: A Primer on AI — One significant argument put forward is that AI lacks true imaginative capabilities. While AI is a great mimic, it is no…
S61
Inclusive AI For A Better World, Through Cross-Cultural And Multi-Generational Dialogue — Factors such as restricted access to computing resources and data further impede policy efficacy. Nevertheless, the cont…
S62
WS #288 An AI Policy Research Roadmap for Evidence-Based AI Policy — Anne Flanagan: Hello, apologies that I’m not there in person today. I’m in transit at the moment, hence my picture on yo…
S63
Building the AI-Ready Future From Infrastructure to Skills — Thomas describes a governance model for AI systems where autonomous AI agents can operate at machine speed but require h…
S64
Deepfakes and the AI scam wave eroding trust — Calls for regulation are understandable, but policy has inherent limitations in this space. Deepfakes evolve faster than…
S65
Open Forum #58 Collaborating for Trustworthy AI an Oecd Toolkit and Spotlight on AI in Government — Marlon Avalos: So, please. Thank you, Ida-san. This is an immersive experience. I just lost my connection, and this is a…
S66
WS #110 AI Innovation Responsible Development Ethical Imperatives — Ricardo Israel Robles Pelayo: Thank you very much. Good afternoon, everyone. It is an honor to be here and share a refle…
S67
Day 0 Event #173 Building Ethical AI: Policy Tool for Human Centric and Responsible AI Governance — Chris Martin: Thanks, Ahmed. Well, everyone, I’ll walk through I think a little bit of this presentation here on what…
S68
AI Impact Summit 2026: Global Ministerial Discussions on Inclusive AI Development — Ante este panorama, los países del sur global debemos priorizar estrategias y normativas para un uso ético y responsable…
S69
AI for Social Empowerment_ Driving Change and Inclusion — This discussion focused on the impact of artificial intelligence on labor markets and employment, featuring perspectives…
S70
The Impact of Digitalisation and AI on Employment Quality – Challenges and Opportunities — Mr. Sher Verick:Great. Well, thank you very much. It’s a real pleasure to be with you here today. I think Janine updated…
S71
How AI Is Transforming Indias Workforce for Global Competitivene — Impact:This grounded the discussion in practical reality, shifting focus from theoretical AI capabilities to actual ente…
S72
AI-Driven Enforcement_ Better Governance through Effective Compliance & Services — Explanation:It was unexpected to see both regulatory leaders emphasizing that AI development should not be confined to I…
S73
Day 0 Event #251 Large Models and Small Player Leveraging AI in Small States and Startups — Karianne Tung: Good afternoon, everyone. It is a pleasure being here and to start this very interesting discussion on le…
S74
Driving Indias AI Future Growth Innovation and Impact — Explanation:The strong consensus between industry and government on prioritizing mass accessibility over premium service…
S75
The Role of Government and Innovators in Citizen-Centric AI — Lucilla emphasizes that having the technical components (models, computing capacity, datasets) is not sufficient – there…
S76
Diplomatic policy analysis — Overreliance on technology:While machine learning and analytics are powerful tools, they are not infallible. Overdepende…
S77
WS #283 AI Agents: Ensuring Responsible Deployment — Carter emphasizes that safeguarding agentic AI requires putting users in control through granular preferences about data…
S78
The Agent Universe From Automation to Autonomy — Summary:The main areas of disagreement center around workforce development approaches (formal training vs. self-directed…
S79
AI for Safer Workplaces & Smarter Industries_ Transforming Risk into Real-Time Intelligence — A bumblebee cannot fly but it still does. The thing is that when this statement was made in 1930, we understood very lit…
S80
AI for Safer Workplaces & Smarter Industries Transforming Risk into Real-Time Intelligence — Artificial intelligence | Social and economic development Benchmark Gen Street has been digitizing environment health a…
S81
AI for Safer Workplaces & Smarter Industries_ Transforming Risk into Real-Time Intelligence — out a long, lengthy form of information for that to be processed much later by another human in the loop, per se, to rea…
S82
Ensuring Safe AI_ Monitoring Agents to Bridge the Global Assurance Gap — Thank you very much, Rebecca, and also very much appreciate Partnership on AI for the invitation. When this series of su…
S83
Ethical principles for the use of AI in cybersecurity | IGF 2023 WS #33 — Anastasiya Kozakova:Thank you very much. It’s a pleasure to be here. I represent the civil society organization. I work …
S84
Research shows AI complements, not replaces, human work — AI headlines often flip between hype and fear, but the truth is more nuanced. Much research is misrepresented, with task…
S85
As AI agents proliferate, human purpose is being reconsidered — As AI agentsrapidly evolvefrom tools to autonomous actors, experts are raising existential questions about human value a…
S86
WS #283 AI Agents: Ensuring Responsible Deployment — Wingfield challenged Carter’s framing of tasks like financial management as routine, arguing that “things like financial…
S87
Educating for Viksit Bharat_ Why Creativity Cognition & Culture Matter — Human intelligence will remain superior to artificial intelligence because creativity, cognition, and culture are unique…
S88
AI Transformation in Practice_ Insights from India’s Consulting Leaders — Shetty made a philosophical point about AI’s limitations, noting that AI is based on past inferences: “AI couldn’t have …
S89
AI and human creativity: Who should hold the brush? — This simple statement, which circulated widely on social media recently, captures a profound anxiety rippling through th…
S90
AI-Powered Chips and Skills Shaping Indias Next-Gen Workforce — Discussion point:Ecosystem-wide skill requirements Discussion point:Educational program expansion
S91
Comprehensive Report: China’s AI Plus Economy Initiative – A Strategic Discussion on Artificial Intelligence Development and Implementation — This insight recognizes that AI education is happening organically through accessible tools rather than just formal educ…
S92
Invest India Fireside Chat — Discussion point:Education and Future Learning Models
S93
Tailored AI agents improve work output—at a social cost — AI agents cansignificantly improve workplace productivitywhen tailored to individual personality types, according to new…
S94
Agentic AI in Focus Opportunities Risks and Governance — Absolutely. Thank you, Jason. Thank you to ITI, and thank you all for coming today. As Jason said, my name is Austin May…
S95
Opening keynote — Doreen Bogdan-Martin:Good morning, and welcome to the AI for Good Global Summit. Let me start by thanking our more than …
S96
AI Technology-a source of empowerment in consumer protection | IGF 2023 Open Forum #82 — Piotr Adamczewski:Thank you Martina, I totally agree that we have to discuss the problem of using AI, I have to also adm…
S97
Engineering Accountable AI Agents in a Global Arms Race: A Panel Discussion Report — The discussion maintained a thoughtful but somewhat cautious tone throughout, with speakers acknowledging both opportuni…
S98
GermanAsian AI Partnerships Driving Talent Innovation the Future — Dr. Kofler acknowledges that people have legitimate fears about AI displacing jobs and emphasizes the importance of addr…
S99
AI and Digital Developments Forecast for 2026 — The tone begins as analytical and educational but becomes increasingly cautionary and urgent throughout the conversation…
S100
Thinking through Augmentation — However, there is also discussion surrounding the risks and concerns associated with AI. Some believe that it could lead…
S101
Upholding Human Rights in the Digital Age: Fostering a Multistakeholder Approach for Safeguarding Human Dignity and Freedom for All — With the advent of artificial intelligence, jobs are changing, and there are concerns that labour protections are being …
S102
Elevating AI skills for all — The tone is consistently optimistic, enthusiastic, and collaborative throughout. The speaker maintains an upbeat, missio…
S103
Upskilling for the AI era: Education’s next revolution — The tone is consistently optimistic, motivational, and action-oriented throughout. The speaker maintains an enthusiastic…
S104
“Re” Generative AI: Using Artificial and Human Intelligence in tandem for innovation — Furthermore, this approach echoes the ethos of SDG 17, Partnerships for the Goals, recognising that multifaceted collabo…
S105
Networking Session #60 Risk & impact assessment of AI on human rights & democracy — The tone was largely collaborative and optimistic, with speakers building on each other’s points and emphasizing the imp…
S106
AI (and) education: Convergences between Chinese and European pedagogical practices — The discussion maintained a collaborative and optimistic tone throughout, characterized by intellectual curiosity and co…
S107
Partnering on American AI Exports Powering the Future India AI Impact Summit 2026 — The tone is consistently optimistic, collaborative, and forward-looking throughout the discussion. Speakers emphasize “l…
S108
AI for food systems — The tone throughout the discussion was consistently formal, optimistic, and collaborative. It maintained a ceremonial qu…
S109
AI Innovation in India — The tone was consistently celebratory, inspirational, and optimistic throughout the discussion. Speakers expressed pride…
S110
Closing remarks — The tone is consistently celebratory, optimistic, and forward-looking throughout the discussion. It maintains an enthusi…
S111
Closing Ceremony — The discussion maintains a consistently positive and collaborative tone throughout, characterized by gratitude, celebrat…
S112
Autonomous AI agents are the next phase of enterprise automation — Organisations across sectors are turning to agentic automation—an emerging class of AI systems designed to think, plan, a…
S113
U.S. AI Standards Shaping the Future of Trustworthy Artificial Intelligence — Sihao Huang: of these agents work with each other smoothly. And protocols are so important because that…
S114
SAP unveils new models and tools shaping enterprise AI — The German multinational software company, SAP, used its TechEd event in Berlin to reveal a significant expansion of its Bus…
S115
Agentic Intelligence set to automate complex tasks with human oversight — Thomson Reuters has unveiled a new AI platform, Agentic Intelligence, designed to automate complex workflows for professi…
S116
Living with the genie: Responsible use of genAI in content creation — Halima Ismail: Can I? Yeah. So we can solve this by the input. It’s based on the input. For example, if we are detecting …
S117
Protecting vulnerable groups online from harmful content – new (technical) approaches — The speaker, evidently in a coordinating role, commenced with vital updates for the attendees, underlining their intenti…
S118
Harnessing Collective AI for India’s Social and Economic Development — So, yeah, I think we’ve heard, I think, a little bit from some AI leaders about a next wave of AI that will be agentic, …
S119
Policymaker’s Guide to International AI Safety Coordination — In terms of what is the key to success, what is the most important lesson on looking back on what we need, trust is buil…
S120
AI redefines how cybersecurity teams detect and respond — AI, especially generative models, has become a staple in cybersecurity operations, extending its role from traditional ma…
S121
Delegated decisions, amplified risks: Charting a secure future for agentic AI — Meredith Whittaker: Yeah. Well, I think governments and private citizens should be asking these questions. Do not feel l…
S122
Annex 5 — corrective and preventive action (CAPA, also sometimes called corrective action/preventive action) refers to the …
S123
ECOWAS Regional Critical Infrastructure Protection Policy — proposes a list of preventive, reactive and proactive measures that can be implemented;
S124
Table of Contents — Part III makes recommendations to maximize the use of broadband to address national priorities. This includes reforming …
S125
Annex to the Government’s Proposal — – defining and planning the goals (according to their orientation, scope and time span); – supporting, forecasting and m…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Naveen GV
1 argument · 144 words per minute · 564 words · 234 seconds
Argument 1
AI‑first SaaS transformation for EHS, with 75 use cases and autonomous agents (Naveen GV)
EXPLANATION
Naveen describes how Benchmark Gen Street is converting its long‑standing EHS SaaS platform into an AI‑first solution. Over the past three years the company has built around 75 AI use cases and is now focusing on ‘agentifying’ these capabilities to automate safety workflows.
EVIDENCE
He explains that the challenge over the last three years has been to transform a SaaS-based system into an AI-first product, noting the existence of about 75 different AI use cases and the move towards autonomous agents that deliver value for engagement [5-9].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The session overview describes Benchmark GenStreet’s shift to an AI-first platform serving 450 global subscribers and highlights the development of dozens of AI use cases, confirming the transformation and scale mentioned [S1].
MAJOR DISCUSSION POINT
AI‑first SaaS transformation
AGREED WITH
Speaker 1, Speaker 4
DISAGREED WITH
Speaker 1, Speaker 4
Speaker 1
2 arguments · 146 words per minute · 3270 words · 1341 seconds
Argument 1
AI observation reporting that auto‑fills hazard forms from photos or voice input (Speaker 1)
EXPLANATION
Speaker 1 showcases the Observation Reporting program where workers can capture a photo or speak a description of a hazard, and an AI agent called Jenny AI analyses the input and automatically completes the safety form. This reduces manual data entry and speeds up reporting.
EVIDENCE
He demonstrates that a worker can scan a QR code or upload a photo, which is sent to the Jenny AI agent that analyses the image, identifies hazards such as missing fall-protection equipment, and fills the entire form on the user’s behalf; the same workflow is shown for Hindi voice input, where the AI transcribes and structures the report [22-26][30-36][46-58].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The demo of the Observation Reporting program shows workers uploading photos or speaking Hindi descriptions, with the Jenny AI agent analysing the input and completing the safety form automatically [S2][S32].
MAJOR DISCUSSION POINT
AI observation reporting
AGREED WITH
Speaker 4, Naveen GV
DISAGREED WITH
Naveen GV, Speaker 4
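The auto-fill step Speaker 1 demonstrates can be sketched as follows. This is a hypothetical illustration only: the platform's real APIs are not public, and every name here (`ObservationReport`, `autofill_from_findings`, the `findings` keys) is invented for the example. The idea is that a vision model returns loose findings from the photo, and the agent maps them onto the structured observation form, leaving anything the photo cannot show empty.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical structured observation form (illustrative field names only).
@dataclass
class ObservationReport:
    hazard_type: str
    description: str
    location: Optional[str] = None   # not inferable from the image alone
    severity: Optional[str] = None

def autofill_from_findings(findings: dict) -> ObservationReport:
    # Map raw vision-model findings onto the form fields; fields with no
    # corresponding finding stay None for the worker to confirm later.
    return ObservationReport(
        hazard_type=findings.get("hazard", "unclassified"),
        description=findings.get("summary", ""),
        location=findings.get("location"),
        severity=findings.get("severity"),
    )

# Example findings such a model might return for the scaffold photo in the demo.
findings = {
    "hazard": "working at height without fall protection",
    "summary": "Two workers on a scaffold, no harness visible",
    "severity": "high",
}
report = autofill_from_findings(findings)
```

The worker then only validates the pre-filled fields instead of typing the whole report, which is the time saving the demo emphasises.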
Argument 2
AI agents accelerate reporting but lack broader context, requiring human follow‑up questions (Speaker 1)
EXPLANATION
While the AI can auto‑populate most of the safety form, it cannot infer details not present in the image, such as the exact working height, and therefore asks the user follow‑up questions to obtain missing information.
EVIDENCE
He notes that the AI only sees the photo and therefore cannot determine specifics like the height at which workers are operating, prompting it to ask follow-up questions before finalising the report [39-43].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Discussion notes explain that agents without full context may make incorrect guesses and therefore ask follow-up questions to obtain missing details such as working height [S25][S33].
MAJOR DISCUSSION POINT
Limitations of AI context
AGREED WITH
Speaker 4, Naveen GV
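The follow-up-question behaviour described above can be sketched in a few lines. Again this is an assumed illustration, not the product's implementation: the field names and question strings are invented. Required context fields that the photo alone cannot answer (such as working height) remain empty after auto-fill, and each empty one generates a question back to the worker.

```python
# Hypothetical required-context fields the image analysis cannot determine,
# each paired with the follow-up question the agent would ask.
REQUIRED_CONTEXT = {
    "working_height_m": "At what height (in metres) are the workers operating?",
    "site_location": "Where on site was this observed?",
}

def follow_up_questions(form: dict) -> list:
    # Ask only about required context fields still missing from the form.
    return [question for field, question in REQUIRED_CONTEXT.items()
            if form.get(field) is None]

# After photo analysis the form has a hazard type but no height or location,
# so both follow-up questions are raised before the report is finalised.
form = {"hazard_type": "missing fall protection"}
questions = follow_up_questions(form)
```

This keeps the human in the loop exactly where the AI's context runs out, which is the limitation Speaker 1 acknowledges.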
Speaker 3
1 argument · 145 words per minute · 649 words · 267 seconds
Argument 1
Uncertainty about when AI will surpass human intelligence; fear of job loss (Speaker 3)
EXPLANATION
Speaker 3 points out that many students and participants are unsure about the timeline for AI overtaking human capabilities, emphasizing that no predictive models exist and that the future impact remains unknown, which fuels anxiety about job displacement.
EVIDENCE
He observes that youngsters are unclear about when AI might overtake human intelligence, stating that there are no mathematical models to predict the timeline and that the future impact is uncertain [285-290].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Participants expressed anxiety about AI’s future impact and job displacement, noting the lack of predictive models for AI supremacy [S36][S2].
MAJOR DISCUSSION POINT
Timeline uncertainty for AI supremacy
DISAGREED WITH
Audience, Piyush Nangru, Shweta Chaudhary
Speaker 4
2 arguments · 152 words per minute · 1257 words · 494 seconds
Argument 1
Data quality and continuous human‑AI engagement are essential for reliable outcomes (Speaker 4)
EXPLANATION
Speaker 4 stresses that AI outputs are only as good as the data fed into them, warning that poor data leads to unreliable results and highlighting the need for ongoing human‑AI interaction to improve performance.
EVIDENCE
He explains that AI depends on the quality of data, noting that unreliable data produces unreliable results, and stresses the necessity of continuous engagement between humans and AI to enhance outcomes [209-216].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Data-governance discussions underline that AI outputs depend on data quality and require ongoing human-AI interaction to improve reliability [S37][S38].
MAJOR DISCUSSION POINT
Importance of data quality
AGREED WITH
Ashish Gupta, Shweta Chaudhary
DISAGREED WITH
Naveen GV, Speaker 1
Argument 2
Government digital‑skilling initiatives and marketplaces aim to bring AI tools to all citizens, bridging urban‑rural gaps (Speaker 4)
EXPLANATION
Speaker 4 describes a government‑run marketplace where any GST‑registered vendor can register, have their products verified, and participate in procurement, and also mentions digital‑skilling portals that provide free AI‑readiness training, illustrating efforts to extend AI access nationwide.
EVIDENCE
He outlines a platform where GST-registered vendors can onboard, undergo verification, and be part of a procurement marketplace, and cites government digital-skilling portals offering free AI readiness training to citizens, demonstrating a strategy to reach both urban and rural users [379-383][332-337].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Government-run marketplaces for GST-registered vendors and free AI-readiness training portals are cited as efforts to extend AI access nationwide, especially to rural areas [S43][S44].
MAJOR DISCUSSION POINT
Digital‑skilling and AI marketplace
AGREED WITH
Ashish Gupta, Piyush Nangru, Shweta Chaudhary, Audience
Audience
1 argument · 148 words per minute · 658 words · 265 seconds
Argument 1
Audience query on timeline for AI overtaking human intelligence (Audience)
EXPLANATION
An audience member asks whether there is a specific timeline after which AI will surpass human intelligence, seeking a bound on when AI might become superior.
EVIDENCE
The audience asks, “you mean to say there will be a timeline where this human intelligence will cease to supersede… as AI improves is there a timeline…?” [261].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The comprehensive discussion on AGI explicitly addresses questions about when AI might surpass human intelligence, providing context for the audience’s timeline query [S41][S2].
MAJOR DISCUSSION POINT
Audience question on AI timeline
AGREED WITH
Ashish Gupta, Piyush Nangru, Speaker 4, Shweta Chaudhary
DISAGREED WITH
Speaker 3, Piyush Nangru, Shweta Chaudhary
Speaker 2
1 argument · 111 words per minute · 357 words · 191 seconds
Argument 1
Creativity is the one skill AI cannot originate; it will remain the decisive human advantage (Speaker 2)
EXPLANATION
Speaker 2 argues that while AI can generate content, it cannot originate lived experiences or true creativity, positioning creativity as the enduring human strength that AI cannot replace.
EVIDENCE
He states that AI can generate but cannot originate lived experiences, emphasizing that creativity is the decisive human advantage and that good design is about solutions rather than mere drawings [149-155].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Commentary on creativity emphasizes that AI can generate but not originate lived experiences, positioning creativity as a uniquely human strength [S18].
MAJOR DISCUSSION POINT
Creativity as uniquely human
AGREED WITH
Naveen GV, Shweta Chaudhary, Piyush Nangru, Ashish Gupta, Speaker 4
DISAGREED WITH
Piyush Nangru, Ashish Gupta
Piyush Nangru
2 arguments · 150 words per minute · 995 words · 396 seconds
Argument 1
Creativity, cognition, and culture are the three pillars that define human capital and future progress (Piyush Nangru)
EXPLANATION
Piyush identifies creativity, cognition and culture as the three fundamental pillars of human capital, explaining that while coding is now a baseline skill, true value lies in applying creativity, and that cultural diversity enriches cognition.
EVIDENCE
He says these three pillars define any human being, notes that coding is now table-stakes and that the application of creativity matters, and highlights the importance of cultural heritage and multilingualism in shaping cognition [185-190][191-199].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The same source outlines creativity, cognition and culture as the three fundamental pillars of human capital and future development [S18].
MAJOR DISCUSSION POINT
Pillars of human capital
AGREED WITH
Naveen GV, Speaker 2, Shweta Chaudhary, Ashish Gupta, Speaker 4
DISAGREED WITH
Speaker 2, Ashish Gupta
Argument 2
AI acts as a democratizing tool, enabling self‑learning and solopreneurship for rural and economically‑backward communities (Piyush Nangru)
EXPLANATION
Piyush claims that AI serves as a powerful democratizing force, allowing individuals in tier‑3 towns and rural areas to self‑motivate, learn, and launch solopreneur ventures such as building websites, creating content, and marketing independently.
EVIDENCE
He describes AI as a democratizing tool that enables self-motivation and solopreneurship in rural and economically-backward communities, allowing people to create websites, generate creative content, and market themselves without extensive resources [384-388].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Examples of AI-driven platforms that support rural entrepreneurship, self-learning and content creation illustrate AI’s democratizing role [S43][S44].
MAJOR DISCUSSION POINT
AI democratization for underserved regions
AGREED WITH
Ashish Gupta, Speaker 4, Shweta Chaudhary, Audience
DISAGREED WITH
Speaker 3, Audience, Shweta Chaudhary
Shweta Chaudhary
1 argument · 156 words per minute · 1664 words · 638 seconds
Argument 1
Human intelligence will continue to supersede AI; preserving humanness is vital (Shweta Chaudhary)
EXPLANATION
Shweta emphasizes that human intelligence will remain superior to AI and calls for the preservation of originality, creativity, and cultural identity as essential human traits in the AI era.
EVIDENCE
She thanks Umang for setting the stage, asks why human intelligence will stay in the age of AI, and stresses that originality and humanness must be kept intact, asserting that human intelligence will continue to supersede AI [173-176][201-204].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Session remarks stress that human intelligence will remain superior to AI and call for preserving originality and cultural identity [S1][S33].
MAJOR DISCUSSION POINT
Human intelligence vs AI
AGREED WITH
Ashish Gupta, Speaker 4
DISAGREED WITH
Speaker 2, Ashish Gupta
Ashish Gupta
1 argument · 153 words per minute · 1522 words · 595 seconds
Argument 1
Shift from knowledge to applied intelligence; need for ethical, responsible AI use in learning (Ashish Gupta)
EXPLANATION
Ashish outlines a transition from pure knowledge acquisition to applied intelligence, urging that AI be used ethically and responsibly in education, and highlighting the role of large language models in supporting learning while maintaining ethical standards.
EVIDENCE
He discusses the new ‘orange economy’, the shift from knowledge to applied intelligence, and stresses the importance of ethical and responsible AI use in learning, providing examples of AI-assisted education and the need for responsible deployment [224-230][301-339].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Issues raised by academic libraries highlight the need for responsible and ethical integration of AI in education and learning environments [S35].
MAJOR DISCUSSION POINT
Applied intelligence and AI ethics in education
AGREED WITH
Speaker 4, Shweta Chaudhary
DISAGREED WITH
Speaker 2, Piyush Nangru
Speaker 5
1 argument · 69 words per minute · 176 words · 152 seconds
Argument 1
ENCODE platform provides personalized, AI‑driven creative learning pathways and mentorship (Speaker 5)
EXPLANATION
Speaker 5 introduces ENCODE, a platform powered by machine learning, large language models and agentic AI that maps learners’ growth, interests and creative potential, offering mentorship, curated resources and personalized learning journeys.
EVIDENCE
He describes ENCODE as powered by ML, LLMs and agentic AI, mapping growth and creative potential, fostering mentorship, discovery and skill development through personalized courses and resource hubs [446-454].
MAJOR DISCUSSION POINT
AI‑driven creative learning platform
Speaker 6
1 argument · 162 words per minute · 71 words · 26 seconds
Argument 1
Design‑oriented courses integrate AI with entrepreneurship training to produce industry‑ready graduates (Speaker 6)
EXPLANATION
Speaker 6 explains that their institution combines design thinking with coding education, teaching students design and digital thinking so they can apply these skills in product development and become entrepreneurs ready for industry.
EVIDENCE
He states that the focus is on teaching students not only coding but also design thinking and digital thinking, enabling them to apply these skills in product development and entrepreneurship, thereby making them industry-ready [459-465].
MAJOR DISCUSSION POINT
Design‑oriented AI education
Agreements
Agreement Points
AI is a powerful enabler but human creativity, cognition and culture remain essential and must guide AI outcomes
Speakers: Naveen GV, Speaker 2, Shweta Chaudhary, Piyush Nangru, Ashish Gupta, Speaker 4
AI‑first SaaS transformation for EHS, with 75 use cases and autonomous agents (Naveen GV)
Creativity is the one skill AI cannot originate; it will remain the decisive human advantage (Speaker 2)
Human intelligence will continue to supersede AI; preserving humanness is vital (Shweta Chaudhary)
Creativity, cognition, and culture are the three pillars that define human capital and future progress (Piyush Nangru)
Shift from knowledge to applied intelligence; need for ethical, responsible AI use in learning (Ashish Gupta)
Data quality and continuous human‑AI engagement are essential for reliable outcomes (Speaker 4)
All these speakers agree that AI should be viewed as a tool that amplifies safety, education and business processes, but the ultimate direction, quality and ethical use depend on uniquely human traits such as creativity, cognition, culture and continuous human oversight [5-9][149-155][173-176][201-204][185-190][191-199][301-307][209-216].
POLICY CONTEXT (KNOWLEDGE BASE)
This view aligns with human-centric AI principles that stress creativity, cognition and culture as uniquely human pillars that must steer AI development, as highlighted in expert commentaries on the need for human judgment and the limits of machine imagination [S55][S56][S57][S60].
AI can democratize access to services and bridge urban‑rural digital divides
Speakers: Speaker 4, Piyush Nangru, Speaker 1
Government digital‑skilling initiatives and marketplaces aim to bring AI tools to all citizens, bridging urban‑rural gaps (Speaker 4)
AI acts as a democratizing tool, enabling self‑learning and solopreneurship for rural and economically‑backward communities (Piyush Nangru)
AI observation reporting that auto‑fills hazard forms from photos or voice input (Speaker 1)
The government representative highlights nationwide AI-readiness portals and vendor marketplaces, the private sector speaker stresses AI’s role in empowering rural entrepreneurs, and the product demo shows multilingual, QR-code based reporting that works for non-English speakers, together signalling a shared belief that AI can be made widely accessible [379-383][332-337][384-388][46-58].
Building digital skills and capacity is essential for effective AI adoption
Speakers: Ashish Gupta, Piyush Nangru, Speaker 4, Shweta Chaudhary, Audience
Shift from knowledge to applied intelligence; need for ethical, responsible AI use in learning (Ashish Gupta)
AI acts as a democratizing tool, enabling self‑learning and solopreneurship for rural and economically‑backward communities (Piyush Nangru)
Government digital‑skilling initiatives and marketplaces aim to bring AI tools to all citizens, bridging urban‑rural gaps (Speaker 4)
Human intelligence will continue to supersede AI; preserving humanness is vital (Shweta Chaudhary)
Audience query on timeline for AI overtaking human intelligence (Audience)
Multiple participants stress that continuous education, ethical training and government-backed skilling programmes are required so that workers, students and citizens can harness AI responsibly and remain competitive [301-307][338-339][332-337][173-176][261].
POLICY CONTEXT (KNOWLEDGE BASE)
Skill-building is emphasized in workforce transformation research and policy recommendations that call for AI-ready societies, underscoring that human capabilities remain critical for successful AI uptake [S71][S75][S78].
AI systems have inherent limitations and require human validation and quality data
Speakers: Speaker 1, Speaker 4, Naveen GV
AI observation reporting that auto‑fills hazard forms from photos or voice input (Speaker 1)
AI agents accelerate reporting but lack broader context, requiring human follow‑up questions (Speaker 1)
Data quality and continuous human‑AI engagement are essential for reliable outcomes (Speaker 4)
AI‑first SaaS transformation for EHS, with 75 use cases and autonomous agents (Naveen GV)
The demo acknowledges that AI can auto-populate forms but cannot infer missing details such as exact working height, prompting follow-up queries; this mirrors the broader point that AI outputs depend on data quality and must be overseen by humans [39-43][209-216][5-9].
POLICY CONTEXT (KNOWLEDGE BASE)
Governance models that mandate human validation of AI outputs and high-quality data are advocated in responsible AI frameworks, highlighting the need for oversight to mitigate AI’s intrinsic constraints [S55][S63][S76][S77].
Ethical and responsible use of AI is a shared priority
Speakers: Ashish Gupta, Speaker 4, Shweta Chaudhary
Shift from knowledge to applied intelligence; need for ethical, responsible AI use in learning (Ashish Gupta)
Data quality and continuous human‑AI engagement are essential for reliable outcomes (Speaker 4)
Human intelligence will continue to supersede AI; preserving humanness is vital (Shweta Chaudhary)
All three stress that AI must be deployed with ethical safeguards, high-quality data and a focus on preserving uniquely human values, underscoring a common normative stance [301-307][338-339][209-216][173-176].
POLICY CONTEXT (KNOWLEDGE BASE)
This priority mirrors international ethical AI commitments and policy toolkits that stress responsible development, transparency and accountability as core principles [S55][S66][S67][S69].
Similar Viewpoints
Both argue that creativity (alongside cognition and culture) is the core human strength that AI cannot replace, positioning it as the decisive competitive advantage [149-155][185-190][191-199].
Speakers: Speaker 2, Piyush Nangru
Creativity is the one skill AI cannot originate; it will remain the decisive human advantage (Speaker 2)
Creativity, cognition, and culture are the three pillars that define human capital and future progress (Piyush Nangru)
Both see AI as a democratizing force that can close the urban‑rural divide and empower underserved populations through skill‑building and entrepreneurship [379-383][332-337][384-388].
Speakers: Speaker 4, Piyush Nangru
Government digital‑skilling initiatives and marketplaces aim to bring AI tools to all citizens, bridging urban‑rural gaps (Speaker 4)
AI acts as a democratizing tool, enabling self‑learning and solopreneurship for rural and economically‑backward communities (Piyush Nangru)
Both stress that AI deployment must be coupled with high‑quality data, ethical guidelines and continuous human involvement to ensure trustworthy outcomes [301-307][338-339][209-216].
Speakers: Ashish Gupta, Speaker 4
Shift from knowledge to applied intelligence; need for ethical, responsible AI use in learning (Ashish Gupta)
Data quality and continuous human‑AI engagement are essential for reliable outcomes (Speaker 4)
Unexpected Consensus
Business leader and public‑sector representative both prioritize AI‑driven democratization of services
Speakers: Naveen GV, Speaker 4
AI‑first SaaS transformation for EHS, with 75 use cases and autonomous agents (Naveen GV)
Government digital‑skilling initiatives and marketplaces aim to bring AI tools to all citizens, bridging urban‑rural gaps (Speaker 4)
While Naveen speaks from a private-sector, profit-driven safety platform perspective, and Speaker 4 from a government policy angle, both converge on the belief that AI should be scaled to reach all users, including remote and underserved groups – a convergence not explicitly anticipated given their differing organisational motives [5-9][379-383][332-337].
Overall Assessment

The discussion shows a strong, cross‑sectoral consensus that AI is a transformative enabler but must be paired with human creativity, high‑quality data, ethical safeguards and widespread digital skills. Participants uniformly endorse capacity‑building, inclusive access and continuous human oversight as prerequisites for responsible AI deployment.

High consensus – the shared viewpoints cut across business, government, academia and civil society, indicating that future policies should focus on education, data governance, ethical frameworks and inclusive infrastructure to realise AI’s benefits while preserving human agency.

Differences
Different Viewpoints
Timeline and eventual supremacy of AI over human intelligence
Speakers: Speaker 3, Audience, Piyush Nangru, Shweta Chaudhary
Uncertainty about when AI will surpass human intelligence; fear of job loss (Speaker 3)
Audience query on timeline for AI overtaking human intelligence (Audience)
AI acts as a democratizing tool, enabling self‑learning and solopreneurship for rural and economically‑backward communities (Piyush Nangru)
Human intelligence will continue to supersede AI; preserving humanness is vital (Shweta Chaudhary)
Speaker 3 says there is no predictive model for when AI will outstrip humans, expressing uncertainty [285-290]; the audience explicitly asks whether such a timeline exists [261]; Piyush concedes the timeline may already have arrived but says the question is not easy to answer [278-279]; Shweta counters that human intelligence will remain superior, implying AI will not overtake [173-176][201-204].
Extent of AI autonomy versus need for human oversight and data quality
Speakers: Naveen GV, Speaker 1, Speaker 4
AI‑first SaaS transformation for EHS, with 75 use cases and autonomous agents (Naveen GV)
AI observation reporting that auto‑fills hazard forms from photos or voice input (Speaker 1)
Data quality and continuous human‑AI engagement are essential for reliable outcomes (Speaker 4)
Naveen pushes for a platform-wide AI-first transformation with autonomous agents handling safety workflows [5-9]; Speaker 1 demonstrates an AI observation-reporting tool that can auto-populate forms but admits it lacks broader context and must ask follow-up questions for missing details [39-43]; Speaker 4 warns that AI outputs depend on the quality of data and require ongoing human-AI interaction to stay reliable [209-216].
POLICY CONTEXT (KNOWLEDGE BASE)
The tension between autonomous AI agents and mandatory human oversight is a recurring theme in AI governance roadmaps and safety recommendations, which call for human validation before AI-driven changes are enacted [S63][S77][S76][S55].
Impact of AI on employment and the relevance of human skills
Speakers: Speaker 2, Shweta Chaudhary, Ashish Gupta
Creativity is the one skill AI cannot originate; it will remain the decisive human advantage (Speaker 2)
Human intelligence will continue to supersede AI; preserving humanness is vital (Shweta Chaudhary)
Shift from knowledge to applied intelligence; need for ethical, responsible AI use in learning (Ashish Gupta)
Speaker 2 predicts that resumes will become obsolete by 2030 because AI will do everything faster, better and cheaper [158-163]; Shweta argues that human intelligence will stay superior and originality must be kept intact, suggesting AI will not replace humans [173-176][201-204]; Ashish stresses that AI should be used ethically as an assistive tool while human decision-making remains central [301-339].
POLICY CONTEXT (KNOWLEDGE BASE)
Research from the ILO and other labor studies highlights AI’s mixed effects on jobs and stresses that human skills remain essential, providing a historical backdrop to current debates on workforce relevance [S69][S70][S71][S78].
Whether AI can generate or support creativity versus it being uniquely human
Speakers: Speaker 2, Piyush Nangru, Ashish Gupta
Creativity is the one skill AI cannot originate; it will remain the decisive human advantage (Speaker 2)
Creativity, cognition, and culture are the three pillars that define human capital and future progress (Piyush Nangru)
Shift from knowledge to applied intelligence; need for ethical, responsible AI use in learning (Ashish Gupta)
Speaker 2 claims AI can generate but cannot originate lived experiences, positioning creativity as uniquely human [149-155]; Piyush highlights creativity, cognition and culture as essential pillars while also stating that AI democratizes learning and can help people create, implying AI can support creative processes [185-190][191-199]; Ashish describes a shift to applied intelligence where AI assists but human creativity still drives solutions [224-230][301-339].
POLICY CONTEXT (KNOWLEDGE BASE)
Scholarly discussions differentiate AI-assisted creativity from true human imagination, noting AI’s role as a collaborator but not a substitute for uniquely human creative insight [S56][S57][S58][S60].
Unexpected Differences
Resumes will die vs human intelligence remains superior
Speakers: Speaker 2, Shweta Chaudhary
Creativity is the one skill AI cannot originate; it will remain the decisive human advantage (Speaker 2)
Human intelligence will continue to supersede AI; preserving humanness is vital (Shweta Chaudhary)
Speaker 2 makes a bold prediction that resumes will become obsolete by 2030 because AI will replace most skills [158-163], whereas Shweta asserts that human intelligence will continue to outrank AI and that originality must be kept intact, implying that such a collapse of human-based resumes is unlikely [173-176][201-204].
Full AI autonomy vs need for continuous human data oversight
Speakers: Naveen GV, Speaker 4
AI‑first SaaS transformation for EHS, with 75 use cases and autonomous agents (Naveen GV)
Data quality and continuous human‑AI engagement are essential for reliable outcomes (Speaker 4)
Naveen promotes a vision of a completely autonomous, AI-first safety platform [5-9], while Speaker 4 cautions that AI results are only as good as the data fed into them and that ongoing human-AI interaction is required to maintain reliability [209-216]. The contrast between a fully autonomous system and a data-quality-driven, human-in-the-loop approach was not anticipated given their shared focus on safety AI.
POLICY CONTEXT (KNOWLEDGE BASE)
Responsible AI deployment guidelines repeatedly call for continuous human oversight of data and model behavior, even in highly autonomous systems, to ensure accountability and prevent unintended outcomes [S63][S77][S55].
Uncertainty about AI timeline vs claim of immediate relevance
Speakers: Speaker 3, Piyush Nangru
Uncertainty about when AI will surpass human intelligence; fear of job loss (Speaker 3)
AI acts as a democratizing tool, enabling self‑learning and solopreneurship for rural and economically‑backward communities (Piyush Nangru)
Speaker 3 stresses that there is no model to predict when AI will overtake humans, highlighting uncertainty [285-290]; Piyush, however, suggests that AI is already a democratizing force with immediate impact, stating that the timeline is “now” though not easy to answer [278-279]. The clash between a stance of uncertainty and a claim of present-day relevance was unexpected.
Overall Assessment

The discussion revealed several substantive disagreements: (1) the timing and possibility of AI surpassing human intelligence, (2) the degree of autonomy appropriate for AI systems versus the necessity of human oversight and data quality, (3) the magnitude of AI’s impact on employment and whether human skills will become obsolete, and (4) whether AI can ever generate genuine creativity. While participants shared a common optimism about AI’s potential, they diverged sharply on its future trajectory and the safeguards required.

The level of disagreement is moderate to high. The disagreements span technical implementation (autonomy vs data quality), socio‑economic forecasts (job displacement, resume relevance), and philosophical views on creativity. These divergences suggest that consensus on policy, governance, and investment priorities will require careful negotiation, especially in areas of AI governance, capacity building, and safeguarding human rights.

Partial Agreements
Both aim to improve safety reporting through AI; Naveen focuses on a platform‑wide AI‑first transformation with many use cases [5-9], while Speaker 1 demonstrates a specific observation‑reporting tool that auto‑fills forms from photos or voice [22-26][30-36][46-58].
Speakers: Naveen GV, Speaker 1
AI‑first SaaS transformation for EHS, with 75 use cases and autonomous agents (Naveen GV)
AI observation reporting that auto‑fills hazard forms from photos or voice input (Speaker 1)
Both seek universal AI access; Speaker 4 describes the GeM (Government e‑Marketplace) platform and free AI‑readiness training portals to reach all citizens [379-383][332-337], whereas Piyush emphasizes AI as a tool that enables individuals in tier‑3 towns to learn and launch solopreneur ventures [384-388].
Speakers: Speaker 4, Piyush Nangru
Government digital‑skilling initiatives and marketplaces aim to bring AI tools to all citizens, bridging urban‑rural gaps (Speaker 4)
AI acts as a democratizing tool, enabling self‑learning and solopreneurship for rural and economically‑backward communities (Piyush Nangru)
Both stress safeguarding human values in AI adoption; Ashish focuses on ethical, responsible AI use and a shift to applied intelligence in education [224-230][301-339], while Shweta emphasizes preserving originality, creativity and cultural identity as AI becomes pervasive [173-176][201-204].
Speakers: Ashish Gupta, Shweta Chaudhary
Shift from knowledge to applied intelligence; need for ethical, responsible AI use in learning (Ashish Gupta)
Human intelligence will continue to supersede AI; preserving humanness is vital (Shweta Chaudhary)
Takeaways
Key takeaways
Benchmark Gensuite is transitioning its 30‑year EHS SaaS platform to an AI‑first solution, with ~75 use cases and autonomous agents that can auto‑fill hazard reports from photos or voice.
AI agents act as digital co‑workers: they accelerate data capture and analysis but still require human validation and contextual follow‑up.
Human strengths (creativity, cognition, and culture) are viewed as the enduring advantage over AI, especially for problem‑solving and design.
Education must shift from pure knowledge acquisition to applied intelligence, ethical AI use, and creativity‑driven learning; platforms like ENCODE aim to deliver personalized, AI‑guided learning pathways.
AI is positioned as a democratizing tool that can empower rural, economically backward, and under‑skilled populations through self‑learning, solopreneurship, and government digital‑skilling initiatives.
There is widespread uncertainty and fear about AI surpassing human intelligence and its impact on jobs, prompting calls for continuous human‑AI collaboration and responsible governance.
Resolutions and action items
Benchmark Gensuite will prioritize development of autonomous AI agents to make the platform fully agentic within the next year.
The presenters invited attendees to visit their booth for personalized discussions on AI implementation.
Partnerships were announced between the AI safety platform, educational entities (e.g., ENCODE, Nimbus, MEC Connect) and government initiatives to integrate AI‑driven learning and skilling programs.
Commitment to continue ethical, responsible AI training for educators and students, leveraging existing government digital‑skilling portals.
Plans to conduct further product demos and formalize MOUs with academic and industry partners.
Unresolved issues
Exact timeline when AI might surpass human intelligence and the implications for employment.
How to scale AI literacy and training to the entire Indian population (≈140 crore people), especially those without internet access or formal education.
Mechanisms to ensure data quality and continuous human‑AI feedback loops for reliable safety predictions.
Specific curriculum changes needed in schools and universities to embed AI, creativity, and ethics effectively.
Details on how tax remittance for Indian expatriates should be structured to support the national economy (audience query).
Methods to systematically build confidence in students from under‑privileged backgrounds.
Suggested compromises
Position AI as an augmenting tool rather than a replacement, maintaining human oversight for context and ethical decisions.
Combine AI‑driven automation with human‑led validation (e.g., follow‑up questions in hazard reporting, 5‑Why analysis).
Promote a balanced narrative that acknowledges AI’s efficiency while emphasizing the irreplaceable value of human creativity, cognition, and cultural insight.
Encourage collaborative learning environments where AI provides personalized guidance, but educators focus on fostering creation and application skills.
Adopt a phased rollout of AI education, starting with foundational exposure in schools, followed by deeper integration in higher education and vocational training.
Thought Provoking Comments
“Resumes will die by 2030. The only skill that will remain extremely important is design and creativity. The workforce of the future must be able to collaborate with machines, not compete with them, and continuously adapt without fear.”
This bold prediction directly challenges the conventional belief that existing professional credentials will stay relevant, shifting the focus from technical skills to uniquely human creative abilities.
It pivoted the conversation from a product‑centric demo to a broader societal debate about the future of work. Subsequent speakers (e.g., Piyush Nangru, Ashish Gupta, and the audience) expanded on the idea, discussing skill shelf‑life, the need for continuous learning, and the role of creativity as a differentiator.
Speaker: Speaker 2 (the bumblebee metaphor speaker)
“We can scan a QR code or upload a photo, and the AI agent (Jenny AI) automatically fills the safety observation form, even asking follow‑up questions when context is missing.”
Demonstrates a concrete, low‑friction workflow that removes manual form‑filling, especially for non‑technical or non‑English‑speaking workers, illustrating AI’s potential for inclusive safety reporting.
Set the technical foundation for the rest of the discussion, leading participants to explore multilingual voice input, ergonomic risk detection, and autonomous compliance – each building on this initial use‑case.
Speaker: Speaker 1 (demonstrator of the safety platform)
“The AI can listen to a worker’s description in Hindi, transcribe it, and populate the structured safety form without the worker needing to know the corporate terminology.”
Highlights AI’s ability to bridge language and literacy gaps, expanding accessibility beyond the demo’s visual input scenario.
Prompted the audience to consider broader inclusion challenges and inspired later remarks about AI democratizing education and training for rural or under‑served populations.
Speaker: Speaker 1
“RISC‑AI processes every record across programs to surface patterns, precursors and heat‑maps, enabling predictive insight into emerging risks.”
Introduces a macro‑level analytical layer that moves from isolated incident reporting to enterprise‑wide risk intelligence, adding strategic depth to the conversation.
Shifted the dialogue from operational automation to strategic foresight, leading participants to discuss how AI can inform preventive actions and policy decisions.
Speaker: Naveen GV
“Creativity, cognition and culture are the three pillars that define human capital; coding is now table‑stakes, what matters is how we apply it.”
Frames the debate in terms of enduring human attributes rather than specific technologies, reinforcing the earlier bumblebee claim while adding cultural nuance.
Reinforced the panel’s consensus that AI will not replace humans but will amplify these three pillars, prompting further discussion on multilingual contexts and regional diversity.
Speaker: Piyush Nangru
“AI is only as good as the data fed into it; poor data yields unreliable results. Continuous engagement is required, unlike a one‑off software install.”
Challenges the simplistic view of AI as a plug‑and‑play solution, emphasizing data quality, governance, and ongoing human oversight.
Tempered the earlier enthusiasm, leading to a more balanced view that highlighted the need for ethical frameworks and human‑in‑the‑loop governance.
Speaker: Speaker 4 (public administration perspective)
“The shelf‑life of hard skills is shrinking from decades to a few years; we must move from ‘learning’ to ‘making’ and applying knowledge.”
Provides a concrete metric that underscores the urgency of re‑thinking education and workforce development in the AI era.
Steered the conversation toward actionable educational reforms, prompting Ashish Gupta and others to discuss project‑based learning, AI‑enabled personalized assessment, and the need for rapid up‑skilling.
Speaker: Piyush Nangru (response to audience timeline question)
“AI can analyze a video of a manual material handling task and automatically flag ergonomic risks that only a certified ergonomist could detect.”
Extends the AI use‑case from safety compliance to health ergonomics, showing cross‑domain applicability and the potential to replace scarce specialist expertise.
Opened a new thread about AI augmenting specialist roles, leading to discussion on democratizing expert knowledge in remote or underserved sites.
Speaker: Speaker 1
“Education must shift from knowledge acquisition to cognition – the ability to create, apply, and solve problems – and AI should be the tool that enables this shift.”
Synthesizes the multiple strands of the discussion into a clear pedagogical vision, linking AI, creativity, and the future of learning.
Served as a concluding turning point that unified the technical demos, philosophical debates, and policy concerns into a single actionable narrative for the audience.
Speaker: Shweta Chaudhary (closing remarks)
“The government’s digital‑skilling portals and AI readiness programs are essential, but schools still lack the infrastructure (labs, AI curriculum) to make AI education effective.”
Brings a policy‑level perspective, identifying systemic gaps that could hinder the optimistic scenarios presented earlier.
Prompted a realistic discussion about implementation challenges, leading to suggestions about public‑private partnerships and the need for AI labs in schools.
Speaker: Ashish Gupta
Overall Assessment

The discussion began with a concrete product demonstration that showcased AI‑driven safety reporting. Early technical insights (photo/voice input, multilingual support) established a foundation for broader speculation. A pivotal moment arrived when Speaker 2 declared that resumes would become obsolete and that creativity would be the sole enduring skill, which reframed the dialogue from operational efficiency to existential questions about work, education, and human identity. Subsequent comments from Piyush, Ashish, and the public‑administration voice deepened this shift, introducing cultural, ethical, and policy dimensions.

The introduction of RISC‑AI and the ergonomic video analysis expanded the scope from individual incidents to enterprise‑wide risk intelligence, while the audience’s timeline question forced the panel to confront the rapid erosion of hard‑skill relevance. Throughout, each thought‑provoking remark either opened a new thematic avenue (e.g., data quality, democratization of expertise, education reform) or reinforced the emerging consensus that AI will augment, not replace, human creativity, cognition, and culture.

The final synthesis by Shweta Chaudhary tied these threads together, steering the conversation toward actionable educational and policy strategies. In sum, the identified comments acted as catalysts that repeatedly redirected the conversation, deepened its analytical layers, and ultimately shaped a narrative that balances AI’s transformative potential with the irreplaceable value of human ingenuity.

Follow-up Questions
When will AI surpass human intelligence? Is there a timeline for AI becoming better than humans?
Understanding the timeline helps stakeholders plan for workforce transitions, policy making, and educational curriculum adjustments.
Speaker: Audience (unidentified participant)
How can we effectively train the entire Indian population (approximately 140 crore people), including parents, young children, and non‑tech‑savvy individuals, to use AI responsibly?
Massive AI literacy is essential to avoid digital divide, ensure equitable access, and prevent misuse or mistrust of AI technologies.
Speaker: Audience (unidentified participant)
What systematic approach is needed to ensure government AI initiatives (e.g., the GeM marketplace) reach, are trusted by, and benefit under‑served and rural communities?
Effective rollout and trust-building are critical for inclusive adoption of AI services across diverse socio‑economic groups.
Speaker: Audience (unidentified participant)
How should AI education be integrated into school curricula, including the required infrastructure (AI labs) and teacher training?
Early AI education builds foundational skills, prepares future talent, and ensures responsible use from a young age.
Speaker: Ashish Gupta
What frameworks and guidelines are needed to ensure ethical and responsible use of AI, especially concerning privacy and generated content?
Ethical safeguards protect individuals’ rights, maintain public trust, and comply with emerging regulations.
Speaker: Ashish Gupta
How can we address widespread fear and anxiety about AI while preserving human originality and unique qualities?
Mitigating fear is necessary for smoother adoption and for leveraging human creativity as a competitive advantage.
Speaker: Speaker 4 (public administration representative)
In what ways can AI be used to quickly assess individual skill gaps and match unemployed or under‑employed populations with appropriate jobs?
Targeted AI‑driven skill mapping can reduce unemployment, improve social equity, and support economic growth.
Speaker: Speaker 3 (public policy perspective)
How does the current education system impact confidence levels of students from non‑urban or under‑privileged backgrounds, and how can AI‑enabled learning improve it?
Confidence influences learning outcomes; understanding AI’s role can help design interventions that boost self‑efficacy.
Speaker: Audience (unidentified participant)
How can the accuracy and contextual awareness of AI agents like ‘Jenny AI’ be improved when analyzing images that lack full situational information?
Better context handling reduces false positives/negatives, increasing trust and effectiveness of AI‑assisted safety reporting.
Speaker: Speaker 1 (demo presenter)
What are the best practices for implementing robust multilingual support (e.g., Hindi) in AI‑driven observation reporting tools?
Multilingual capability expands accessibility for diverse workforces, ensuring inclusive safety reporting.
Speaker: Speaker 1
How can AI‑generated corrective and preventive actions be aligned with the established hierarchy of controls and regulatory compliance frameworks?
Alignment ensures that AI recommendations are legally sound, practically feasible, and prioritize safety effectively.
Speaker: Speaker 1
What strategies are needed to scale ergonomics analysis (Ergo AI) across varied industrial settings and different types of manual tasks?
Scalable ergonomics AI can reduce musculoskeletal injuries industry‑wide, improving worker health and productivity.
Speaker: Speaker 1
How can risk‑trend visualization and predictive modeling (RISC‑AI) be validated and refined to provide reliable early‑warning signals?
Validated predictive intelligence enables proactive risk mitigation, potentially averting large‑scale incidents.
Speaker: Speaker 1
In what ways can AI act as a democratizing tool to empower rural entrepreneurs and solopreneurs in tier‑3 towns and villages?
Empowering rural innovators can bridge economic gaps, foster local entrepreneurship, and stimulate inclusive growth.
Speaker: Piyush Nangru
What models of partnership between academia, industry, and government can sustainably advance AI‑enabled education and skill development?
Collaborative ecosystems ensure resources, expertise, and policy align to deliver scalable, future‑ready education.
Speaker: Multiple panelists (e.g., Piyush Nangru, Ashish Gupta, Shweta Chaudhary)

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Building the AI-Ready Future: From Infrastructure to Skills


Session at a glance: Summary, keypoints, and speakers overview

Summary

Thomas Zacharia opened the session by congratulating the audience and introducing the theme of “building AI readiness from compute to capability” [1-2][6]. He warned against over-indexing AI on GPUs, noting that while GPUs are a core part of the infrastructure, they represent only one layer of a broader AI stack that AMD supports from PCs to edge devices [7-11]. Zacharia also announced that he would discuss sovereign AI while his colleague Tim would cover the enterprise side [13-15].


He outlined the U.S. Department of Energy’s Genesis Initiative, which sits against the backdrop of roughly $1 trillion in annual U.S. R&D spending (about 20-30% of it government-funded) and seeks to use AI to accelerate scientific discovery, energy research, and national-security missions [18-23][26-33]. The initiative frames research as a hypothesis-experiment-data-AI loop that aims to reduce cost and speed outcomes [34-38]. To implement this, the DOE is creating a public-private partnership called the American Science Cloud, run on an AMD MI355 cluster, and intends to federate compute and data across national labs with secure, cloud-enabled operations [46-49]. Zacharia emphasized that such a model requires composable standards, confidential computing, and governance by design to protect both commercial and national-security interests [47].


He argued that AI advancement depends on open ecosystems, governance with a human-in-the-loop, and strong talent pipelines, and that AMD is committed to open-source hardware and software standards to enable startups and innovators [61-68][70-73]. Paneerselvam M then highlighted India’s enthusiasm, noting 267,000 summit registrations and the need to improve the AI readiness quotient of SMEs through startup-driven implementation of sovereign AI across five infrastructure layers [106-112][113-114]. Timothy Robson reinforced the software focus, describing AMD’s multilingual LLM work for low-resource Indian languages, the open-source Primus tool, and “day-zero” support that allows models to run on AMD hardware without lock-in [126-138][152-161][204-211]. He also promoted the free AMD Developer Cloud, Docker containers, and partnerships with hyperscalers and “neo” clouds to give startups rapid, cost-effective access to compute [187-195][176-178].


Gilles Garcia added that AI is moving to the far edge, requiring specialized, low-power accelerators and a full stack that can act locally, citing AMD-based humanoid Gene01 as a proof of concept for physical AI [230-240][242-244]. He warned that a one-size-fits-all GPU solution would overheat robots and that AMD’s portfolio offers tailored accelerators for industrial, medical, and autonomous applications [237-239][245]. Zacharia closed by urging participants to stay curious, balance high-performance GPUs with lightweight edge solutions, and leverage both startup and academic ecosystems to drive societal change through AI [247-250].


Keypoints


Major discussion points


Sovereign AI and public-private partnerships – Thomas Zacharia outlines the U.S. Department of Energy’s Genesis Initiative and the “American Science Cloud” built on an AMD MI355 cluster, emphasizing the need to federate compute and data across national labs, academia and industry while embedding security and governance by design [16-23][45-48][49-53].


Open ecosystem and software-first approach – Both speakers stress that AI success now hinges on open-source tools, standards and “day-zero” support (e.g., PyTorch, Triton, AMD’s Primus ecosystem) to avoid vendor lock-in and enable rapid adoption across hyperscalers, neo-clouds and startups [70-73][123-128][152-158][205-211].


Accelerating innovation through startups and talent – The MeitY Startup Hub and AMD highlight the critical role of startups as “AI natives” in driving readiness, providing compute resources (AMD Developer Cloud, Docker containers) and mentorship to move from proof-of-concept to production for SMEs and the broader Indian ecosystem [106-112][176-184][187-196].


Hardware scaling and exascale compute – Zacharia describes AMD’s exascale history, the Helios rack delivering 2.9 exaflops of AI compute at 220 kW, and the vision of scaling to “Zeta” and beyond, underscoring that future AI problems will be solvable as compute power continues to grow [85-94][92-95][96-99].


Governance, human-in-the-loop and responsible AI – The talk stresses that AI governance (distinct from regulation) must keep a person in the loop for validation, especially in scientific discovery, national security and other government-critical domains [61-66][62-65][78-81].


Overall purpose / goal


The discussion aims to convey a holistic “compute-to-capability” roadmap for building sovereign AI readiness, showcasing AMD’s hardware and software offerings, public-private initiatives (the U.S. DOE, India’s MeitY), and the ecosystem needed (open standards, talent development, and startup involvement) to accelerate scientific discovery, national security, and economic growth.


Overall tone


The conversation is upbeat, collaborative and forward-looking. It begins with a formal congratulatory opening, shifts into an informative, technical briefing on initiatives and hardware, moves to a motivational call-to-action for startups and Indian partners, and concludes with an encouraging, curiosity-driven message. The tone remains consistently enthusiastic, with occasional shifts toward a more promotional (hardware showcase) and then a more advisory (governance, ecosystem) emphasis.


Speakers

Paneerselvam M – CEO of the MeitY Startup Hub at the Ministry of Electronics and Information Technology (MeitY), Government of India; Dr. (title) [S1][S2][S3]


Gilles Garcia – French AI specialist focusing on physical AI for communications, robotics and industrial applications; AMD representative (implied) [S4][S5]


Timothy Robson – Hardware engineer turned software advocate; AMD representative (hardware/software focus) [S6]


Moderator – Session moderator (conference moderator) [S7][S8][S9]


Thomas Zacharia – Senior AMD executive discussing AI readiness (role/title not explicitly stated in the transcript)


Additional speakers:


(none)


Full session report: Comprehensive analysis and detailed insights

Thomas Zacharia opened the session by congratulating the audience and noting that AMD employs roughly 30,000 people worldwide, including 10,000 in India, before framing the central theme of “building AI readiness from compute to capability” [1-3][6]. He warned that current discourse tends to over-index AI on GPUs, even though GPUs represent only one layer of a much broader AI stack that spans from personal computers to core infrastructure and out to the edge [7-11]. Zacharia then announced that he would address the sovereign side of AI while his colleague Tim would cover the enterprise side, establishing a clear division of topics [13-15].


The first substantive topic was the U.S. Department of Energy’s (DOE) Genesis Initiative, a public-private partnership aimed at accelerating scientific discovery, energy research and national-security missions with AI [16-23]. Zacharia highlighted that the DOE operates large-scale light-source and neutron-source facilities and that its three national labs (Los Alamos, Livermore and Sandia) certify the nuclear arsenal each year for the President, providing the national-security backdrop for the initiative [30-33]. He noted that the Genesis program was launched by the Trump administration and that its first public announcement was made jointly by DOE Secretary Wright and AMD CEO Lisa Su [45-46]. Zacharia also emphasized that the United States is seeking partners in Japan, Europe, the UK and elsewhere to make the effort truly international [47-48].


To operationalise Genesis, the DOE is establishing the “American Science Cloud”, a cloud-enabled platform that will run on an AMD MI355 cluster and federate compute and data across national labs, academia and industry, embedding security, confidential computing and composable standards by design [45-49]. Zacharia stressed that the model must support secure public-private collaboration and avoid vendor lock-in [47].


Zacharia then traced AMD’s three-decade journey in high-performance computing, citing four consecutive TOP500 systems that were first-of-their-kind [50-52]. He recalled an early large-scale deployment of 30,000 NVIDIA GPUs, when “CUDA” was still a four-letter word, and credited U.S. government risk-taking for enabling those pioneering TOP500 machines [55-57]. He described the Helios rack (a single 72-GPU chassis delivering 2.9 exaflops of FP4 AI compute at 220 kW) as a showcase of exascale performance with power efficiency [85-94][92-94]. Looking ahead, Zacharia projected a scaling path from the current “Yara” level to “Zeta” (≈300 racks) and ultimately a ten-thousand-fold increase in compute over the next decade [94-99].


A recurring theme was responsible AI governance. Zacharia distinguished governance from regulation, defining it as the inclusion of a human-in-the-loop to validate AI outputs before they are acted upon, especially for agentic systems that drive scientific and national-security innovation [61-66][62-65]. He illustrated this with an “inner autonomous loop” where thousands of AI agents execute hypothesis-driven experiments, but no result is committed until a human reviewer approves it [78-81].
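The governance pattern described above (agents propose, a human approves before anything is committed) can be sketched in a few lines. This is an illustrative toy, not anything shown in the session; all class and function names here are invented:

```python
# Illustrative human-in-the-loop gate: agents may only submit proposals;
# nothing reaches the committed set until a human decision approves it.
from dataclasses import dataclass, field


@dataclass
class Proposal:
    agent_id: int
    hypothesis: str
    result: str


@dataclass
class ReviewQueue:
    pending: list = field(default_factory=list)
    committed: list = field(default_factory=list)

    def submit(self, p: Proposal) -> None:
        # The autonomous inner loop can only append to 'pending'.
        self.pending.append(p)

    def review(self, approve) -> None:
        # 'approve' stands in for the human reviewer's judgment.
        for p in self.pending:
            if approve(p):
                self.committed.append(p)
        self.pending.clear()


queue = ReviewQueue()
for i in range(3):
    queue.submit(Proposal(i, f"hypothesis-{i}", f"result-{i}"))

# Toy criterion standing in for human review: approve even-numbered agents.
queue.review(lambda p: p.agent_id % 2 == 0)
print([p.agent_id for p in queue.committed])  # -> [0, 2]
```

The design point is the asymmetry: the autonomous loop has write access only to the pending queue, so scaling to thousands of agents never bypasses the human commit step.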


All speakers underscored the importance of an open, software-first ecosystem. Zacharia highlighted AMD’s commitment to open-source hardware and standards, enabling innovators to avoid vendor lock-in and to build on open platforms [70-73]. Tim expanded on this by describing AMD’s Primus open-source stack, the “day-zero” model support program (e.g., DeepSeek and Baidu’s Paddle models) that lets new models run on AMD hardware out of the box [152-158][210-212], and noting that tools such as PyTorch, JAX and Triton enable developers to write hardware-agnostic code [152-158]. He also stated that AMD GPUs and systems are present in every hyperscaler worldwide [150-152].


Paneerselvam M of India’s MeitY Startup Hub presented India’s sovereign AI model built on five infrastructure layers, aimed at raising the AI-readiness quotient of SMEs and democratising AI across society [106-112][113-114]. He quoted Prime Minister Modi saying AI is still in its early stages, reinforcing the call to stay curious [62-63]. He also noted the broader legacy of the Manhattan Project, which includes both destructive nuclear weapons and beneficial outcomes such as nuclear medicine, naval propulsion and civilian energy [62-63].


Tim detailed concrete support mechanisms for start-ups: the AMD Developer Cloud offering 50-100 free compute hours, pre-packaged Docker containers that bundle the full software stack, and accelerator programmes that guide start-ups from proof-of-concept to production [187-196][197-200].


Gilles Garcia turned to edge and physical AI, arguing that AI is moving to the far edge (robots, autonomous vehicles and industrial plants), where low-power, purpose-built accelerators are required for real-time decisions without cloud latency [230-236][238-244]. He warned that a generic GPU would overheat in such scenarios and that specialised accelerators are essential for reliable on-device inference. Zacharia concurred, noting that AMD’s portfolio includes lightweight, low-power edge offerings alongside its high-performance GPUs, reinforcing the need for a fit-for-purpose hardware mix [71-73][78-80]. Garcia illustrated the potential with the Gene01 humanoid built on AMD technology [240-242].


Tim also highlighted multilingual large language models (LLMs) for low-resource languages. He described a collaboration with Finland’s LUMI supercomputer, where AMD technology was used to adapt existing LLMs to Finnish (a language spoken by fewer than five million people) and extended this approach to India’s 22 official languages [134-143][138]. He cited AMD’s involvement in training the 176-billion-parameter BLOOM model and the open-source Primus stack as enablers for building inclusive LLMs [141-143][155-158].


In closing, Zacharia urged participants to remain curious, to balance the pursuit of ever-larger centralized compute clusters with the development of lightweight edge solutions, and to leverage both start-up ingenuity and academic research to drive societal transformation through AI [247-250]. The overarching message was that AI readiness is a holistic endeavour that requires end-to-end compute, data, open software, responsible governance and a vibrant ecosystem of talent and start-ups.


Session transcript: complete transcript of the session
Thomas Zacharia

So congratulations to all of you. You should be proud. And I just want to say that on behalf of the 30,000 AMDers worldwide, and particularly the 10,000 in India, I just want to congratulate you and thank you for this opportunity to have this discussion. Since we are a small group, I think we’ll keep it informal. And I want to make sure that somebody please keeps track of time so that I do justice to my colleagues here on the dais. The topic that I’ve been asked to talk about is building AI readiness from compute to capability. In the field of AI these days, there seems to be an over-indexing on GPUs, when in reality AI is much broader.

The GPU is obviously a significant part. It’s a part of the core infrastructure. But what we do at AMD is to really provide a full suite of AI capability, from AI PCs to core infrastructure all the way out to the edge. And I have my colleague Tim from AMD, so we decided that we’re going to tag team. I’m going to focus perhaps a little bit on the sovereign side, and then Tim can focus on the enterprise side. So let’s just talk about sovereign AI in practice and explore the motivators. This particular slide was created by the Department of Energy in the United States as part of a new initiative that was kicked off by the Trump administration called the Genesis Initiative.

And I had a role to play in supporting and crafting this initiative, and the framing is very simple. If you look at the top line, the white line is funding in the United States for R&D. Today, the United States spends about a trillion dollars a year on R&D. Not all of that is government spending; it’s roughly 20 to 30% U.S. government and the rest industry. The bottom line is what we consider research output efficiency. So the problems are getting harder and more challenging, and even though we are trying to tackle really important problems, the sense is that throwing money at them is not having the same rate of return. This slide basically asks the question: how do we reduce the gap? At least the thesis for the Genesis mission is that you can use AI as a way to accelerate scientific discovery. The Genesis mission has three areas of importance. For people who don’t know about the U.S. Department of Energy: it is the nation’s largest physical science agency, operating through 17 national labs, and some of the earliest ones, like Oak Ridge National Laboratory, which I used to lead before joining AMD, came into being during the Manhattan Project.

And about 65% of the entire funding of the Manhattan Project was at Oak Ridge National Laboratory. In fact, I think the Prime Minister mentioned this about nuclear energy: both the destructive aspect as well as the significant outcomes that came out of it, from nuclear medicine to the nuclear navy to nuclear energy, can all be traced back to the Manhattan Project. So the U.S. Department of Energy is not only responsible for energy; it’s really a science organization. It’s got three priorities. One is discovery science. The second is energy. And the third is national security. America has a really interesting way of keeping the nuclear arsenal away from the military, in the sense that it is the U.S.

Department of Energy, and not the U.S. Department of Defense or Department of War, that is responsible for the nuclear arsenal. And the three lab directors, Los Alamos, Livermore, and Sandia, have to certify each year to the President of the United States that the arsenal is ready. So this is the hypothesis. If you think about research, look at the left side. It starts with hypothesis, then you conduct experiments and get the data. And today, you take the data, use AI, machine learning, et cetera, and you get analysis. What you’re trying to do is to make this much faster so that you can have science outcomes coming out, do it at reduced cost, because you cannot throw more and more money at this problem, and enhance global collaboration.

I think there is a genuine interest on the part of the U.S. that this whole premise is not just a U.S. issue. And so I think there will likely be announcements suggesting that countries like Japan, Europe, the UK and others may be part of this overall approach to drive sovereign AI for those aspects of AI deployment and scaling that are uniquely a government or state function. So as I mentioned, broadly: scientific discovery, energy and national security. But if you take scientific discovery to the next step, then you will see healthcare, education, skilling, all these things, fundamentally government functions. And this is not an easy task, because think about how research is done at these institutions: I mentioned a large fraction of it is in the private sector, a lot of it is done in academia funded by government, and then of course in national labs in the United States; India has its own set of national labs, academia, et cetera.

So what you need to do is take a look at how you integrate all this data. The U.S. Department of Energy operates these large, multi-billion dollar light sources, neutron sources, specialized scientific experiments. You need to be able to incorporate all these things, so you have to federate the compute and data, and you have to have cloud-enabled lab operations, which is not how things are done today. Security and governance by design, especially when you’re thinking about public-private partnerships: even at enterprise commercial scale, you want to make sure that you have secure computing, you have confidential computing, you can maintain integrity; and if you think about national security, you have an additional layer. And then you want composable standards versus infrastructure.

So this particular program was kicked off by the President of the United States and Secretary Wright in the fourth quarter of last year, and the first announcement was done with Lisa Su, our CEO, because one of the things that they wanted to do was a unique public-private partnership. The core infrastructure, currently called the American Science Cloud, a program that is just being stood up, is going to run on an MI355 cluster. And so we are really excited to be a part of this: initially a U.S. and soon an international effort to drive innovation in those areas that are uniquely a government function.

I’ve had a ringside seat in computing for the last 30 years and been responsible for a lot of supercomputing deployments, a dozen or so. The last four or five of them were number one systems in the Top500, each a first of a kind. This is another important thing. Innovation, if you think about AI: AI didn’t happen magically with NVIDIA or AMD. It happened because the U.S. government took the risk to invest in first-of-a-kind systems. We were the first to deploy 30,000 NVIDIA GPUs when people thought that CUDA was a four-letter word. Now everybody thinks that this is this amazing software, but change comes hard to people. And so I just want you to know,

particularly all of you who are youngsters: things are going to evolve. AI, just like the Prime Minister said, is in its early stages. So you have to be open and you have to be part of this drive for effective, scalable and impactful AI. Then deep learning came, with mixed-precision computation, then generative AI, and last year was really agentic AI, and some of us think that this year we’re going to focus increasingly on governance. Governance does not mean regulation; there is a role for regulation by governments, but governance is this: if you want agentic systems driving and accelerating innovation, you want to make sure that the output has a person in the loop.

One simple way to think about it: if you are researchers here, if you have a professor who’s got a dozen students doing research, you don’t let the students just go publish things. There is the professor’s responsibility, there is the peer review committee, et cetera. So you want that human in the loop before you update and let this thing drive innovation, while it also does the things that AI does best. So this is how we think about compute to capability, a model of national AI readiness. We want it to rest on talent and the readiness of talent, giving people access to compute and models. Research enablement is key, because you want people to operate AI in an environment where you’re questioning things and innovating all the time, as opposed to assuming that what we in the industry are providing you is the only solution.

So if you look at countries that are leading in AI, there is a very strong R&D and innovation foundation that allows them to lead, because there are people questioning every time somebody says something, to make sure that it is validated and continues to innovate. Startup and innovation labs matter because you want to take these ideas and start new companies; many of these new innovations and technologies may be led by people with new ideas and opportunities. And of course, ultimately, enterprise and public sector adoption. We strongly believe, and again I heard Prime Minister Modi say it: open ecosystems, open source, open platforms. If you think about iOS and Android, I find India has a lot of penetration of Android systems, because inherently open systems allow you to innovate without getting locked into vendors.

And so we at AMD have a commitment to base both our hardware infrastructure and our software infrastructure on open standards so that you can innovate around any part of this infrastructure and be part of a new startup or new company adding to it. That is also an important way for India to become part of the supply chain and the semiconductor ecosystem, because you don’t have to start with an attempt to go for two or three nanometers. You can actually do amazing work and be part of leading-edge technology at different form factors. So I mentioned a little bit about how we think about agentic flows and how AI can work. This is simply the way to think about it.

The inner loop is an autonomous loop where AI and agentic AI do what they can do fast. If you have 100,000 GPUs, you have 100,000 agents tackling this problem, and they can actually go through the hypothesis-driven experiments and systems. So you can do simulation, campaign-scale coordination, machine-speed execution, et cetera. But we do not allow it to update the outcome until a human in the loop has had the opportunity to validate it, to make sure that we don’t have unintended consequences. Now, how do you build this thing? If you haven’t gone to the AMD booth, I would encourage you to do so. This is my only plug in this presentation. We spent a ton of money to bring this Helios rack here just so that you can have a sense of, not what this particular rack can do, but a glimpse of what is possible next year and the year after.
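The inner-loop/outer-gate pattern described here can be sketched in a few lines. This is an illustrative toy, not AMD's implementation; the function names (`run_agent`, `human_review`, `agentic_campaign`) are invented for the example.

```python
# Illustrative toy of the pattern described above, not AMD's implementation:
# agents iterate over hypotheses at machine speed (the autonomous inner loop),
# but nothing is committed until a human-in-the-loop review (the outer gate)
# approves it. All names here are invented for the sketch.

def run_agent(hypothesis):
    """Stand-in for a simulation / experiment campaign step run by one agent."""
    return {"hypothesis": hypothesis, "evidence": f"data supporting {hypothesis!r}"}

def human_review(candidate):
    """Stand-in for expert validation; here we simply require non-empty evidence."""
    return bool(candidate["evidence"])

def agentic_campaign(hypotheses):
    """Run every hypothesis autonomously, but gate each result on human review."""
    committed = []
    for h in hypotheses:                 # inner loop: machine-speed execution
        candidate = run_agent(h)
        if human_review(candidate):      # outer gate: human in the loop
            committed.append(candidate)
    return committed
```

The design point is simply that the review gate sits between generation and commitment, so scaling the number of agents never bypasses human validation.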

So in 2007, myself and two of my colleagues started what is called the Exascale program. And the challenge was to deliver an exascale system for under 20 megawatts, because if you had just scaled the capability of 2007, it would have taken three to four gigawatts. And we knew that the government was not going to sign off on $4 billion just for the electricity to run the computer. So we were motivated to drive that down. And we delivered that first exascale system, Frontier at Oak Ridge, for less than 20 megawatts. Everybody thought it was crazy, that it could not be done, but when you set audacious goals, people rally around and deliver. In this particular rack, one rack, there are 72 GPUs that will deliver 2.9 exaflops of AI compute, which is FP4, not FP64, just to be very clear.

But for AI capability, you get 2.9 exaflops of compute capability for 220 kilowatts. Even for somebody who’s been in this field for a long time, that is just mind-blowing. This is where we are headed. AI is the fastest adoption of any technology that humanity has introduced: we’ve gone from 1 million active users to 1 billion in a matter of just a couple of years, and we are headed to 5 billion users. So there is a lot of opportunity to innovate in this field, and all of us are going to continue to create these opportunities. As Lisa said, we are entering the zetta scale, so already people are thinking about the next 1,000x. Let me just say, you can get to zettascale by taking 300 of those racks and putting them together, and then it’s another 3x. So I would say in the next 10 years maybe we would be at this 10,000x factor. The kind of problems that you are thinking about should not be constrained by what you can do today; by the time you figure out the solution for an important problem, the compute will be there.
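The rack figures quoted here (72 GPUs, 2.9 exaflops of FP4 compute, 220 kilowatts) lend themselves to a quick back-of-the-envelope check. The derived per-GPU and per-watt numbers below are simple arithmetic on the talk's figures, not official specifications.

```python
# Back-of-the-envelope check of the rack figures quoted in the talk:
# one rack = 72 GPUs, 2.9 exaflops of FP4 AI compute, 220 kilowatts.
# Derived numbers are simple arithmetic, not official specifications.

rack_flops = 2.9e18        # 2.9 exaflops (FP4)
rack_gpus = 72
rack_power_w = 220e3       # 220 kilowatts

per_gpu_flops = rack_flops / rack_gpus       # roughly 4.0e16 FLOPs per GPU
flops_per_watt = rack_flops / rack_power_w   # roughly 1.3e13 FLOPs per watt

# Scaling out: the ~300 racks mentioned in the talk give 8.7e20 FLOPs,
# within striking distance of a zettaflop (1e21 FLOPs).
three_hundred_racks = 300 * rack_flops
racks_for_zettaflop = 1e21 / rack_flops      # roughly 345 racks
```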

That is what we in the industry like to promise you. And on advancing national economies: you would be forgiven if you wondered whether AMD does these things and how prevalent our compute capabilities are. I think Tim is going to tell you that our GPUs and our systems are in every hyperscaler globally, and when it comes to HPC and national priority missions, AMD is the leader. If you listened to President Macron, he referenced Alice Recoque, the first AI factory that the French government and the CEA announced, which is based on the AMD MI430X, a variant of the MI450 that you see outside on the right.

I will close by saying that a shared path forward is really what we are looking for. I know India is in the early stages and we are really delighted to actually have this conversation. Thank you very much.

Moderator

I’d like to invite our next speaker, Paneerselvam M, CEO of the MeitY Startup Hub at the Ministry of Electronics and IT, Government of India. Dr. Paneerselvam M is a distinguished leader with over two decades of expertise in innovation, management, strategic growth and market development. He has been instrumental in advancing India’s startup ecosystem and fostering impactful partnerships between government, industry and entrepreneurs. In his

Paneerselvam M

drawing insights out of this data, and then comes the interface layer, where most of it is going to be really driven by agents, by agentic AI. And of course, as Thomas mentioned, there is always going to be a human-in-the-loop perspective, but as we progress this is going to change as well. So the two fundamental things that I want to share: one, the entire transformation in the readiness space for AI is an opportunity, and the intent needs to be very, very clear; then comes the curiosity of each business owner to learn a little bit more about this; and then comes the implementation part of it. Start-ups have a very critical role to facilitate this, because you are coming in almost as AI natives, working with an understanding of this, and you can really go out and demonstrate value

and help implement the entire readiness, improve the readiness quotient for small and medium enterprises, and ensure that this is a broad-based growth opportunity for businesses across the country and not limited to just a few of them, right? So there is huge potential, and I think enough has been spoken. The summit itself is proof of the kind of curiosity: we have had 267,000 registrations, people who have registered in the last five days. An unexpected, overwhelming response, to the extent that we couldn’t really handle it, right? At the same time, it gives us immense pride and excitement to see the amount of curiosity among the youngsters across India

who travel here from the length and breadth of the country to understand what AI is going to be, how this is going to impact them, and what the opportunities are. That itself is a fantastic starting point. And as I said, a lot is happening: Indian sovereign models are coming, and the five layers, the infrastructure, the design, all the layers are being worked upon in the Indian context. We are ready as a nation, we are ready as a government, to facilitate this really disruptive, transformative technology. But the truth is, it is also important for it to populate all the layers of society, not just limit itself to large corporates but reach small and medium enterprises as well. And of course it has already populated well into the D2C space, to the individual users, and it’s much, much beyond the ChatGPTs of the world.

So with that, I once again take the opportunity to thank the entire team from AMD. We have had some interesting conversations, and I look forward to the continued partnership between AMD and the MeitY Startup Hub, because in our perspective, corporates have a huge role to play in the success of startups. Thank you.

Timothy Robson

Thank you. There’s a couple of things that I want you guys to think about as I go through my talk. 30th of November, 2022: the world changed. ChatGPT was launched. And I’m willing to bet that for everyone in this room, myself included, what we thought we knew about AI changed. Two years ago, what we thought even after ChatGPT changed. A year ago, what we thought changed. And what I’m hoping is that as you leave here, other things will have changed in the last 45 minutes of listening to these talks. Okay, so I’m going to skip through the reasons why we need compute. But I think one thing that is very, very important is that things are moving so fast.

And things are moving in a way that we cannot predict, such that the only way anybody is going to be successful is an open ecosystem. And I think the gentlemen before me have alluded to this as well. I’m going to take you through this specifically around software. I mean, everything to do with AI really. I’m a hardware guy, I used to design chips, but everything today is software, right? And I was talking to one of my colleagues and I said, okay, so I’m going to India, I’m going to do all this, we’re going to go through it. And I said, are they really going to understand it?

And he said, Tim, India is software. This is what we do. He said, you’re going to be in front of the best people in the world, who are going to understand what you want to talk about. So I’m really going to focus on the software side. And one of the things that I wanted to do, understanding that we have our esteemed colleague from MeitY here, is highlight that we have lots and lots of experience in this space, including some work that we did with Lumi in Finland. Now, why is this important? Within Europe, almost all the languages are Indo-European, right? If you know a little bit of Greek, a little bit of Latin, a little bit of one of the languages; there are 27 countries in Europe,

so let’s call it 27 languages. And then you have Finland. Finnish is a Uralic language, nothing to do with any other language in Europe: absolutely different construct, different base, different absolutely everything. And what we found working with the guys in Finland: they came to us because they had put in this Lumi supercomputer, and they said, okay, we are a small country in Europe, 5 million native speakers, and all of this training work has been done on big corpora of English, Spanish, Hindi. Suddenly you have a language of 5 million people; how do you get that language into your LLM so that it becomes useful? Now, I’m probably going to get the pronunciation really wrong here, but I did actually use ChatGPT to look at the 22 Indian languages. If we look at Bodo, Konkani, Dogri, Sindhi, Nepali, there are fewer than 5 million people who speak those languages. So how do you get an Indian LLM that caters for everybody, AI for all, as we’ve heard from Prime Minister Modi? This is the kind of area where, with MeitY, we would like to work with you guys and bring some of the benefit of the work that we’ve been able to do. Now remember the first date, 30th of November 2022. This machine was inaugurated: it was put together, all of the systems were brought up, the chips were made years before, and the machine was inaugurated on my birthday, 13th of June 2022, six months before ChatGPT came out. So this machine, with 12,000 GPUs, with the foresight of the Finnish government, was using AMD technology to run AI before ChatGPT came out.

So a lot of people think that a lot of the stuff in AI has come from one specific place. Again, think of our way of thinking: we were there and we have the ability. We actually did the BLOOM 176-billion-parameter model, an open model made for European languages. So again, we would love to bring this knowledge to the Indian ecosystem to make this successful for everybody. I’m not going to spend a lot of time on hyperscalers. They’re obviously an important part of the market; it’s where a lot of the capability goes. We’re there. We have tens of thousands of GPUs. And as Thomas mentioned, we have the Helios system here.

Please go and take a look at it. If you like hardware, it’s an interesting piece of kit. But really the idea here is, whether you’re in a hyperscaler or in any other area, there is an ability to have a wider ecosystem. And again, inference: this is not really an AMD pitch, but there was an idea in the market that AMD was inference-only. That dates from Q1 2024; that’s two years old. So again, we have to change that thinking, right? That’s older thinking. We now have, again completely open source, the Primus ecosystem, a Primus tool for the open source, which enables you to do all of the training that you need for all of your Indic languages or for your use cases, and which again is completely open.

Enterprise AI. This one I think is an interesting one. I know when I started going out to customers and going out to enterprise customers, the difference in customer knowledge on what AI was, was amazing. You go into one customer and they say, okay, so this is our use case and we’re seeing these kinds of sizes of matrices, so we’re doing these optimizations. And then you go into another customer and you say, what are you doing around AI? And the guy goes, oh yeah, we’re doing Gen AI. Okay, great, yeah, what are you doing with Gen AI? We’re using LLMs. Okay, great, so using LLMs, what do you think? LLMs. And they had no idea, right?

It’s just, we have to do something with AI. And that has changed over the last 18 months. A chatbot was something that most people said, okay, that makes sense, I understand a chatbot; we can fine-tune the model, we can do an internal AI system within the company. And now we’re starting to see, with the agentic workflows, this entire plethora of different use cases coming through. And so how do you take it from a research institute, or people that actually get onto your accelerator, whether that’s a GPU or a TPU or an FPGA or whatever else, and get it to a stage where people within a corporation can actually use it? And so this is something that has been understood.

And again, no lock-in, open; everything here is something that can be used without being tied into one particular area. And actually, I’ll come on to it a little bit later as well. It’s also something that I’ve been very impressed with: the infrastructure that MeitY has put into place. In this case, with the public-private partnership, you have GPUs, you have TPUs, you have Inferentia, you have all of the different types of accelerators available to you within the Indian ecosystem that MeitY has made available. I’ll come on to that a little bit more later. But again, the idea here is that whatever the ecosystem, whatever the compute you’re using, whether it’s in the cloud or on-prem, you have the ability to give your employees within your enterprise the ability to use that AI assistant or tool.

Neo clouds: these are what we call the smaller clouds. They’re not the hyperscalers; they’re a little bit more nimble, a little bit more open to doing things differently. A lot of these guys are doing bare metal and managed Kubernetes services, but it is getting to the point where they’re becoming like APIs, token factories. These guys are able to provide you with compute quickly, easily and at reasonable pricing to enable whatever it is you’re trying to do. We find these are the first movers in the market, and in the same way that we’re integrated and working with the hyperscalers, we have these relationships with the neo clouds, and we’re working with quite a few of them here in India as well to make that available for you. So the whole idea, again, is that the compute is available; please go out and understand the benefits and trade-offs between the different types of services out there and get the right solution for you guys.

Now, I’m assuming that most people here are going to be startups. And a startup is an interesting area, right? You have a startup, you know what you want to do, you are absolutely laser-focused on getting your MVP out there, getting in front of customers: how do you generate some value, how do you generate some revenue? Although these days that seems less and less important, as people sometimes get funding even before a product. But one of the things that you guys have to be sure of is that the compute and the capabilities that you have can support the products that you then actually have to go and put into production.

And so this is an area where we understand that proof of concept is very important. I was chatting with the CEO of the MeitY Startup Hub here before, and it’s something he was saying: POC to PO. You have to be able to make sure that you understand the technology and how you can take it to market before you can actually go and invest. So we have a couple of different ways that we can help here within the ecosystem. You could actually go on there right now: the AMD Developer Cloud. You can get, I think, 50 or 100 hours of free compute. You want to see how AMD works? It’s always going to be dependent on the use case and what you’re trying to do.

But there is a huge TCO advantage, which of course is important for startups. Get onto the Dev Cloud, get it working. We actually provide Docker containers, with everything put into a single Docker image, so you can download it and run it; you don’t have to spend your time and energy installing all of the software and putting everything together to get it working. We’ve done all of that for you. Pull the Docker image down, get your model off Hugging Face, get your weights off Hugging Face, or use your own model and do something else. Whatever is there in the open-source ecosystem is there and it’s going to work. Give it a go.

Give it a play. And then of course from that we can take you into our accelerator cloud, a little bit more hands-on, making sure we understand what you’re doing, helping, guiding, and assisting you in moving forward. And then, of course, we have relationships with the industry, try-and-buys, being able to get you access to the compute, being able to get you the right solution at the right kind of price. So this is something else that I really want to highlight: day-zero support of models. Qwen3 Coder came out last week, day-zero support on AMD. Baidu came out with one of their Paddle models this week, day-zero support on AMD.

What does day-zero support mean? Well, it means that it’s not the first time we’ve seen this code. It runs on AMD. It’s guaranteed. It’s optimized. A lot of people think that to run something in AI you need a specific GPU; with day-zero support, that’s absolutely false, right? Again, with Lumi, pre-ChatGPT in 2022, we were building LLMs for, effectively, a low-resource language analogous to the Indic languages. So the ability is there: if there’s a new model coming out and you want to run it, test it, see how it works for you guys, it is there and runs out of the box. And if we look at this line in the middle, PyTorch: if you look at the history of PyTorch, there were lots of signatories on PyTorch to make sure it was available for everybody, and AMD was one of them. This mainly comes out of Microsoft and Meta, who did not want to be locked in to a single supplier. So actually, what you’re doing with PyTorch is writing Python code, right? You’re not writing vendor-specific code. It’s an open ecosystem; that’s the whole point. You don’t want to be tied in; that stifles innovation. So PyTorch came out, and that is the basis of 99% of all of the customers I talk to, right?

They’re all writing Python on PyTorch. JAX is then coming forward. Triton is a Python-like language which is specific to GEMM optimization. Again, if you’re getting to the point where you’re actually seeing the GEMM sizes coming through from your operations and want to do GEMM-level optimizations, then Triton enables you to do that at the compiler level, so you can be completely agnostic of the underlying hardware. The ecosystem and the underlying compute become abstracted away, because Triton enables you to run on anybody’s hardware; it’s just a compiler for the new architecture. If we look at these models on the bottom here, the Prime Minister this week announced the first 12 Indian languages.

I can’t wait to get you guys on these, right: fully supported, day-zero support. Just to give you an example here: DeepSeek. Of course, when DeepSeek came out, they did some things a little bit special; multi-head latent attention was new. We had day-zero support with DeepSeek. Why? Because we’re one of the main contributors to SGLang. There’s no tie-in to an inference engine here; it’s an open ecosystem. So we were able to come out of the door with better TCO, better performance, better cost and full support through SGLang on that DeepSeek model, which was, you know, the leader of its time, because of our complete commitment to the open ecosystem. Just to give you an idea: again, you’re walking out of here in 45 minutes with changed ideas; this is what we’re going for here. I did have two minutes, I now have five; I don’t know who bought me extra time, but I owe you a beer. Okay, so really that’s kind of the end of the pitch here.

One thing I would say is we do have a booth here at 5.10. I’m sorry, I’m going to do a little bit of an AMD plug at the end here, but do come by and see us. We actually have some of the neoclouds there, along with model creators, vendors, and ecosystem partners. Come see, come change your mind. Come see what’s available within an open ecosystem, with the compute that’s available for you. Okay, thank you.

Gilles Garcia

So first of all, I’m Gilles Garcia. I’m French, so we can talk about LLMs for the French language if you want. I’m based in France, but I cover worldwide, and I focus on physical AI for communications, robotics, and industrial. We have been talking a lot about AI, and most people think AI means GPUs and big cloud. What we are seeing is a big shift, another change we are managing: AI is moving into the edge and into the far edge, which is industrial, robotics, vehicles, as well as the networks. For that you need a different type of beast. GPUs are one aspect of it, but you need profoundly different technology, which AMD has as part of our broad portfolio. These technologies need to be able to sense, act, and react so quickly that there is no time to go back to the cloud for that.

And so these technologies, which of course will run inference, need to be able to take decisions and act very safely and reliably without having to rely on the cloud. That’s a new change we’re seeing at AMD around physical AI, which will become very, very important for us: how do we take what we have learned in the cloud and make it available in physical AI? Software is a big thing. Full stack, meaning hardware and software together delivering solutions to the customer, is what AMD is aiming for. And so our CEO, Lisa Su, was saying: it’s AI anywhere, and one size does not fit all.

Meaning that if you want to address a robot, you can’t just put a GPU into it; it will burn to hell. You need a very dedicated accelerator with a solid, open-source software stack, so that the robot can perceive, visualize, act, touch, and respond according to its purpose. At CES in early January, Lisa Su brought on stage Gene01, the first humanoid built on AMD technology. That’s just impressive. Everything was done by a startup in Italy to make this humanoid able to sense, visualize, feel when somebody is touching it and when it’s touching something, and act and react very rapidly without having to rely on a centralized resource.

So I will not go on longer than that. Physical AI is probably an area where India, by the way, will have a lot to do. GPUs are already there, whereas physical AI is something you will have to create: a lot of things related to medical, autonomous networks, autonomous cars, autonomous plants, and industrial. That’s where I think India will start, with all its startups and the capability to use accelerators that are much smaller than GPUs, and this is all available today in the AMD portfolio. So I will stop here, encourage you to come to the AMD booth, and we can continue the discussion. Thank you.

Thomas Zacharia

Well, so we gave you a lot of information on AI, in four different accents; I think the French one probably carries the day. But my one message is: stay curious. As all of us have said, things are going to change, and continue to change, at a rapid pace. People talk about so many thousands of GPUs, but that will not be the main thing. You will find there is a whole lot of interest in providing you with ever more powerful GPUs for core infrastructure while at the same time providing very lightweight, low-power compute at the edge. So stay curious. From a startup community point of view, from a research point of view, from an academic point of view, look for really interesting problems and challenges, and we will deliver the infrastructure that you need, because ultimately it is the applications that are going to change society and life. That’s all. Thank you very much.


Related Resources
Knowledge base sources related to the discussion topics (14)
Factual Notes
Claims verified against the Diplo knowledge base (3)
Confirmed (high)

“Thomas Zacharia warned that current discourse tends to over‑index AI on GPUs, noting GPUs are only one layer of a broader AI stack.”

The knowledge base records Zacharia emphasizing that AI extends far beyond just GPUs, confirming his warning about GPU-centric views [S3] and [S12].

Confirmed (medium)

“Zacharia framed the central theme as “building AI readiness from compute to capability”.”

Sources describe Zacharia distinguishing between compute and capability and discussing building AI readiness, confirming this framing [S12] and [S3].

Additional Context (low)

“Zacharia opened the session by emphasizing AI readiness and capabilities.”

The knowledge base notes the discussion took place at an AI summit in India with AMD and the METI Startup Hub, adding context to the AI-readiness theme [S3].

External Sources (76)
S1
Building the AI-Ready Future From Infrastructure to Skills — -Paneerselvam M- CEO of the METI Startup Hub at Ministry of Electronics and IT, Government of India; distinguished leade…
S2
https://app.faicon.ai/ai-impact-summit-2026/building-the-ai-ready-future-from-infrastructure-to-skills — I’d like to invite our next speaker, Paneerselvam M, CEO of the METI Startup Hub at Ministry of Electronics and IT, Gove…
S3
Building the AI-Ready Future From Infrastructure to Skills — The moderator introduces Dr. Paneerselvam M by highlighting his qualifications and contributions to India’s startup ecos…
S4
Building the AI-Ready Future From Infrastructure to Skills — 624 words | 177 words per minute | Duration: 211 seconds So first of all I’m Gilles Garcia I’m French so we can talk a…
S5
Building the AI-Ready Future From Infrastructure to Skills — – Thomas Zacharia- Gilles Garcia
S6
Building the AI-Ready Future From Infrastructure to Skills — Timothy Robson, a hardware engineer who transitioned to software, reinforced the importance of vendor-agnostic developme…
S7
Keynote-Olivier Blum — -Moderator: Role/Title: Conference Moderator; Area of Expertise: Not mentioned -Mr. Schneider: Role/Title: Not mentione…
S8
Day 0 Event #250 Building Trust and Combatting Fraud in the Internet Ecosystem — – **Frode Sørensen** – Role/Title: Online moderator, colleague of Johannes Vallesverd, Area of Expertise: Online session…
S9
Conversation: 02 — -Moderator: Role/Title: Event moderator; Area of expertise: Not specified
S10
The Global Power Shift India’s Rise in AI & Semiconductors — – Thomas Zacharia- Rahul Garg – Vivek Kumar Singh- Thomas Zacharia
S11
Building the AI-Ready Future From Infrastructure to Skills — – Thomas Zacharia- Paneerselvam M – Thomas Zacharia- Timothy Robson- Paneerselvam M
S12
The Global Power Shift India’s Rise in AI & Semiconductors — So I think this is a great area for public -private partnership, in my view. The public part of it is a uniquely governm…
S13
Safe and Responsible AI at Scale Practical Pathways — Guardrails, Human‑in‑the‑Loop, and Risk‑Assessment Mechanisms Are Essential for Reliable Deployment
S14
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Kiran Mazumdar-Shaw — “I believe that nations that command the convergence of biology and AI, or what I like to call the convergence of biolog…
S15
The Foundation of AI Democratizing Compute Data Infrastructure — The emphasis on community participation, data sovereignty, and alternative technical architectures suggests AI developme…
S16
AI as critical infrastructure for continuity in public services — Resilience, data control, and secure compute are core prerequisites for trustworthy AI. Systems must stay operational an…
S17
Agentic AI in Focus Opportunities Risks and Governance — This discussion at the AI Impact Summit focused on the business applications and policy implications of agentic AI, feat…
S18
Catalyzing Global Investment in AI for Health_ WHO Strategic Roundtable — He argues that AI should augment clinicians while keeping humans central to decision‑making, acknowledging the difficult…
S19
The fading of human agency in automated systems — This gap between language and reality matters, especially in governance contexts where assurances of human oversight are…
S20
Global Perspectives on Openness and Trust in AI — I don’t know, is really the answer. Governance is such a broad word. There’s a lot of, for example, open source is reall…
S21
Indias AI Leap Policy to Practice with AIP2 — Thank you, Access Partnership for having me this afternoon for this conversation. I think it’s an important element. How…
S22
Indias AI Leap Policy to Practice with AIP2 — Just prior to this I was having a conversation with a large corporate and how they can actually use startups as a cataly…
S23
Driving Indias AI Future Growth Innovation and Impact — Evidence:Startups can apply for GPU infrastructure at subsidized rates, with some receiving 100% of required GPUs. There…
S24
The Foundation of AI Democratizing Compute Data Infrastructure — Compute intensity, model scaling and hardware outlook
S25
From Technical Safety to Societal Impact Rethinking AI Governanc — Explanation:Both speakers support government involvement but disagree on scope – Ioannidis wants to keep core technology…
S26
AI-Driven Enforcement_ Better Governance through Effective Compliance & Services — This comment is exceptionally thought-provoking because it addresses the critical tension between AI efficiency and publ…
S27
Why science metters in global AI governance — Because there’s not enough past evidence to be sure that a particular tipping point is going to happen. So the situation…
S28
Generative AI: Steam Engine of the Fourth Industrial Revolution? — As AI continues to advance, it becomes increasingly important to focus on developing critical thinking skills. Regardles…
S29
Shaping the Future AI Strategies for Jobs and Economic Development — Healthcare applications received particular attention, with examples from countries like Guyana demonstrating how teleme…
S30
Shaping the Future AI Strategies for Jobs and Economic Development — Evidence:The speed at which technology is changing requires adaptation from all participants in the workforce and societ…
S31
Building the AI-Ready Future From Infrastructure to Skills — A central theme was the importance of open ecosystems in AI development. Zacharia drew parallels between Android’s succe…
S32
Building the AI-Ready Future From Infrastructure to Skills — “And so we at AMD have a commitment to make both our hardware infrastructure and our software infrastructure to be based…
S33
U.S. AI Standards_ Shaping the Future of Trustworthy Artificial Intelligence — Summary:All speakers strongly advocate for open, interoperable standards that enable cross-vendor compatibility and prev…
S34
AI/Gen AI for the Global Goals — Boa-Gue mentions the African Startup Policy Framework as an example of an initiative to enable member states to develop …
S35
AI for Good Technology That Empowers People — Summary:The discussion revealed relatively low levels of direct disagreement, with most speakers focusing on complementa…
S36
HETEROGENEOUS COMPUTE FOR DEMOCRATIZING ACCESS TO AI — Disagreement level:The disagreement level is moderate but significant, particularly around philosophical approaches to s…
S37
Scaling Trusted AI_ How France and India Are Building Industrial & Innovation Bridges — Summary:Reddy argues for completely autonomous edge-based AI systems that are cut off from the cloud to maintain privacy…
S38
Edge AI gains momentum in Europe’s innovation strategy — Europe is accelerating efforts tobuild digital sovereigntythrough high-performance technologies that do not increase pow…
S39
India’s AI Future Sovereign Infrastructure and Innovation at Scale — The panel articulated a sophisticated approach to AI sovereignty that goes beyond technological nationalism. Success req…
S40
AI as critical infrastructure for continuity in public services — The discussion revealed that data sovereignty encompasses more than simple data localization. As Pramod noted, true sove…
S41
The Global Power Shift India’s Rise in AI & Semiconductors — Thank you. Thank you. across CPUs, GPUs, SoCs, and AI engines that power cutting -edge compute systems worldwide. She br…
S42
Need and Impact of Full Stack Sovereign AI by CoRover BharatGPT — “What raw material is needed for AI?”[9]. “sovereign AI comes to India, we’ll have the control”[56]. “Indian government …
S43
Indias Roadmap to an AGI-Enabled Future — The India AI mission has dramatically increased the country’s GPU infrastructure, providing a foundation for building so…
S44
The Intelligent Coworker: AI’s Evolution in the Workplace — And we can’t let that happen. So I think we need to kind of have mixed emotions when we’re addressing that question and …
S45
Eurotech and PNY move to accelerate high-performance edge computing — Eurotech and PNY Technologies havesigned a strategic MoUintended to accelerate high-performance edge AI deployments acro…
S46
Designing Indias Digital Future AI at the Core 6G at the Edge — Consensus level:High level of consensus across technical, policy, and strategic dimensions. The alignment suggests a mat…
S47
Laying the foundations for AI governance — High level of consensus on problem identification and broad solution directions, suggesting significant potential for co…
S48
Sovereign AI for India – Building Indigenous Capabilities for National and Global Impact — The discussion revealed surprisingly few fundamental disagreements among speakers, with most differences being complemen…
S49
Discussion Report: Sovereign AI in Defence and National Security — Faisal advocates for a strategic approach where countries focus their limited sovereign resources on the most critical c…
S50
Building the AI-Ready Future From Infrastructure to Skills — This discussion focused on building AI readiness and capabilities, featuring speakers from AMD and the Indian government…
S51
Building the AI-Ready Future From Infrastructure to Skills — Public-private partnership model established for American Science Cloud running on MI355 cluster infrastructure
S52
The Global Power Shift India’s Rise in AI & Semiconductors — “So the goal of Genesis Project is to really, one, align public and private partnership, two, invest government resource…
S53
Connecting open code with policymakers to development | IGF 2023 WS #500 — Henri Verdier:If I can say something, because that’s very important. So most of the people that went to work with me did…
S54
Indias AI Leap Policy to Practice with AIP2 — Thank you, Access Partnership for having me this afternoon for this conversation. I think it’s an important element. How…
S55
Indias AI Leap Policy to Practice with AIP2 — Just prior to this I was having a conversation with a large corporate and how they can actually use startups as a cataly…
S56
Scaling Innovation Building a Robust AI Startup Ecosystem — Arita emphasized STPI’s role as a nurturing body that facilitates important connections between startups and the broader…
S57
Driving Indias AI Future Growth Innovation and Impact — Skills, Talent Pipeline, and Developer Ecosystem
S58
The Foundation of AI Democratizing Compute Data Infrastructure — Compute intensity, model scaling and hardware outlook
S59
Powering AI _ Global Leaders Session _ AI Impact Summit India Part 2 — Thank you, Ashish. You’ve done a fantastic job in a short time period covering the larger macro issues connected with th…
S60
Keeping AI in check — Societies should not be forgetful of the fact that technology is a product of the human mind and that the most intellige…
S61
Ethical AI_ Keeping Humanity in the Loop While Innovating — Role of regulation and global governance
S62
Part 2.5: AI reinforcement learning vs human governance — Governance structures are designed to maintain order, protect rights, and promote welfare, often requiring consensus and…
S63
Open Forum #26 High-level review of AI governance from Inter-governmental P — 1. Balancing Innovation and Security: Governments face the task of fostering innovation while addressing potential risks…
S64
AI Without the Cost Rethinking Intelligence for a Constrained World — Summary:While both speakers critique current GPU-centric approaches, they differ on solutions. Bernie advocates for movi…
S65
AI Without the Cost Rethinking Intelligence for a Constrained World — GPU-based infrastructure creates expensive, high heat generating, high failure rate systems with limited supply While b…
S66
From KW to GW Scaling the Infrastructure of the Global AI Economy — Summary:All speakers agree that the traditional approach of designing data centers from grid infrastructure inward is ou…
S67
https://dig.watch/event/india-ai-impact-summit-2026/building-the-ai-ready-future-from-infrastructure-to-skills — And Manhattan Project, about 65 % of the entire funding of Manhattan Project was at Oak Ridge National Laboratory. And i…
S68
Agenda item 5: discussions on substantive issues contained inparagraph 1 of General Assembly resolution 75/240 (continued)/ part 2 — Chair: Thank you very much, Estonia, for your contribution. New Zealand to be followed by Brazil. Chair: Thank you very…
S69
Agenda item 5: discussions on substantive issues contained in paragraph 1 of General Assembly resolution 75/240/ OEWG 2025 — China: I thank you, Mr. Chairman. I would like to exercise the right of reply to the statement made by the U.S. delega…
S70
morning session — The analysis discusses three topics related to Goal 16: Peace, Justice and Strong Institutions and Goal 17: Partnerships…
S71
Agenda item 5: discussions on substantive issues contained inparagraph 1 of General Assembly resolution 75/240 (continued)/ part 6 — Chair: Thank you very much, Russian Federation, for your statement and also for your… very kind words. One day I’ll si…
S72
From algorithms to Armageddon: The rise of AI in nuclear decision-making — The application of AI could improve operational efficiency and safety, as well as decisions made in the broader nuclear …
S73
[Tentative Translation] — 161 A comprehensive database of Japanese researchers operated by the Japan Society for the Promotion of Science. Resear…
S74
Pre 2: The Council of Europe Framework Convention on AI and Guidance for the Risk and Impact Assessment of AI Systems on Human Rights, Democracy and Rule of Law (HUDERIA) — Jasper Finke, Legal Officer of the Federal Ministry of Justice and Head of the German Delegation to the Committee on Art…
S75
https://app.faicon.ai/ai-impact-summit-2026/conversation-01 — So there isn’t a one model fits all when it comes to regulating technology. And I think as well, there isn’t a country t…
S76
https://dig.watch/event/india-ai-impact-summit-2026/ensuring-safe-ai_-monitoring-agents-to-bridge-the-global-assurance-gap — And this is almost like a test for me of kind of saying. These names of these institutions through this panel. But they …
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Thomas Zacharia
11 arguments | 129 words per minute | 2769 words | 1283 seconds
Argument 1
AI readiness extends beyond GPUs; AMD offers end‑to‑end capability
EXPLANATION
Thomas argues that AI is broader than just GPUs and that AMD provides a complete AI stack, from AI PCs to core infrastructure and edge solutions, to meet the full spectrum of AI needs.
EVIDENCE
He notes that there is an over-indexing on GPUs in AI discussions, but emphasizes that AMD delivers a full suite of AI capability covering AI PCs, core infrastructure, and edge deployments [7-12].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The discussion notes that AI discussions over-index on GPUs and that AMD provides a full AI stack covering PCs, infrastructure and edge solutions, highlighting the broader AI readiness beyond GPUs [S3]. AMD’s commitment to open standards and open-source platforms further supports this end-to-end capability [S1].
MAJOR DISCUSSION POINT
Sovereign AI and National Readiness
Argument 2
Genesis Initiative: AI to accelerate scientific discovery via public‑private partnership
EXPLANATION
Thomas describes the U.S. Department of Energy’s Genesis Initiative, which uses AI to speed up scientific research through a public‑private partnership and the American Science Cloud platform.
EVIDENCE
He explains the Genesis Initiative’s goal of using AI for scientific discovery, its funding structure, and the creation of the American Science Cloud run on an MI355 cluster as a public-private partnership with AMD [16-20][46-49].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The Genesis Initiative, a public-private partnership creating the American Science Cloud on an AMD MI355 cluster for scientific discovery, is described in the sources [S1][S3].
MAJOR DISCUSSION POINT
Sovereign AI and National Readiness
AGREED WITH
Paneerselvam M, Gilles Garcia, Timothy Robson
Argument 3
Human‑in‑the‑loop governance is essential for safe AI deployment
EXPLANATION
Thomas stresses that AI systems must include human oversight, especially in research and national‑security contexts, to prevent unintended consequences.
EVIDENCE
He outlines a governance model where a professor or peer-review committee must validate AI outputs before they are released, ensuring a human remains in the loop [62-65].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Human-in-the-loop governance is emphasized, with the sources distinguishing governance from regulation and stressing guardrails and human validation for AI outputs [S3][S13][S17].
MAJOR DISCUSSION POINT
Sovereign AI and National Readiness
Argument 4
Encouragement for startups to stay curious and leverage powerful as well as lightweight AMD solutions
EXPLANATION
Thomas urges startups to remain inquisitive, leveraging both high‑performance GPUs for large workloads and low‑power edge solutions for broader applications.
EVIDENCE
He tells the audience to stay curious, noting that while many GPUs are important, AMD also offers lightweight, low-power edge solutions to complement them [247-249].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The speaker urges startups to stay curious and notes AMD’s provision of both high-performance GPUs and lightweight low-power edge solutions for broader applications [S3].
MAJOR DISCUSSION POINT
Role of Startups and Ecosystem in AI Adoption
AGREED WITH
Paneerselvam M, Timothy Robson, Gilles Garcia
Argument 5
AMD’s commitment to open standards and open‑source platforms prevents vendor lock‑in
EXPLANATION
Thomas highlights AMD’s strategy of building hardware and software on open standards, enabling innovators to avoid being locked into a single vendor’s ecosystem.
EVIDENCE
He cites AMD’s dedication to open ecosystems, open-source platforms, and open standards for both hardware and software, allowing unrestricted innovation [70-73].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
AMD’s strategy of building hardware and software on open standards to avoid vendor lock-in is highlighted in the sources [S1][S3].
MAJOR DISCUSSION POINT
Open Software Ecosystem and Hardware‑Agnostic Support
AGREED WITH
Timothy Robson, Gilles Garcia
Argument 6
AMD provides lightweight, low‑power edge solutions alongside high‑performance GPUs
EXPLANATION
Thomas points out that AMD delivers both powerful GPU clusters for compute‑intensive tasks and low‑power accelerators for edge AI, supporting a full spectrum of deployment scenarios.
EVIDENCE
He mentions AMD’s open-standard approach that supports edge deployments and describes autonomous agent loops that can run on many GPUs while still requiring human validation, illustrating the blend of high-performance and edge capabilities [71-73][78-80].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
AMD’s portfolio includes lightweight, low-power edge accelerators alongside powerful GPU clusters, supporting diverse deployment scenarios [S3].
MAJOR DISCUSSION POINT
Edge / Physical AI and Specialized Accelerators
AGREED WITH
Gilles Garcia
Argument 7
Federated compute and data infrastructure with cloud‑enabled lab operations, secured by design, is essential for sovereign AI.
EXPLANATION
Thomas argues that to achieve sovereign AI capabilities, governments must integrate data from national labs, academia, and industry through a federated compute and data model. This requires cloud‑enabled laboratory operations and security‑by‑design governance to protect public‑private partnerships.
EVIDENCE
He explains that the U.S. Department of Energy operates large multi-billion-dollar light and neutron sources that need to be incorporated into a unified system, requiring federated compute and data, cloud-enabled lab operations, and security and governance by design, especially for public-private partnerships [45-48][47].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
A federated compute and data model that integrates national labs, academia and industry, secured by design, is discussed as essential for sovereign AI, aligning with the emphasis on data sovereignty and secure AI infrastructure in the sources [S15][S16][S12].
MAJOR DISCUSSION POINT
Sovereign AI and National Readiness
Argument 8
Investing in talent development, research enablement, and startup innovation labs is critical to sustain AI readiness and national competitiveness.
EXPLANATION
Thomas emphasizes that AI readiness rests on a strong talent pipeline, access to compute and models for researchers, and an ecosystem that encourages startups to experiment and innovate. Continuous questioning and validation are needed to keep the AI ecosystem vibrant.
EVIDENCE
He notes that AI readiness depends on talent, giving people access to compute and models, and that research enablement is key because it allows constant questioning and innovation, while startup innovation labs turn ideas into new companies [66-69][67-68].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The importance of talent pipelines, research enablement and startup innovation labs for AI readiness is echoed in the sources that stress community participation and talent development for AI ecosystems [S15][S3].
MAJOR DISCUSSION POINT
Role of Startups and Ecosystem in AI Adoption
Argument 9
AMD’s track record of delivering world‑leading supercomputing and exascale systems demonstrates its ability to provide energy‑efficient, high‑performance AI compute.
EXPLANATION
Thomas highlights his three‑decade experience in deploying top‑ranked supercomputers, including the first exascale system built under a strict power budget, showcasing AMD’s expertise in creating powerful yet energy‑conscious AI hardware.
EVIDENCE
He recounts having led dozens of supercomputing deployments, with the last four or five being number-one systems in the TOP500, and describes the Exascale program that delivered a sub-20 MW exascale machine (Frontier) after a decade of effort [50-52][85-91].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Thomas’s background leading world-leading supercomputers, including the first exascale system, is documented in the source describing his role at Oak Ridge and AMD’s high-performance computing leadership [S10].
MAJOR DISCUSSION POINT
Edge / Physical AI and Specialized Accelerators
Argument 10
Autonomous agentic AI loops can accelerate hypothesis‑driven experimentation while retaining a human‑in‑the‑loop to prevent unintended outcomes.
EXPLANATION
Thomas proposes an autonomous loop where thousands of AI agents run experiments rapidly, but the results are only released after human validation, ensuring safety and ethical oversight.
EVIDENCE
He describes an inner loop where 100,000 GPUs act as agents to conduct hypothesis-driven experiments, yet the outcomes are not updated until a human validates them to avoid unintended consequences [78-81].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The concept of autonomous agentic AI loops with human-in-the-loop safeguards is presented as ‘agentic AI’ with human validation in the sources [S1][S13][S17].
MAJOR DISCUSSION POINT
Human‑in‑the‑loop governance is essential for safe AI deployment
Argument 11
Projected scaling of AI compute to Yara, Zeta and a 10,000‑factor underscores the need for continued hardware investment to meet future demand.
EXPLANATION
Thomas forecasts that AI adoption will grow from the current Yara scale to Zeta and eventually a 10,000‑factor, implying that compute capacity must expand dramatically to support future problem‑solving.
EVIDENCE
He explains that AI adoption is moving toward the Yara scale, then Zeta, and ultimately a 10,000-factor, suggesting that compute will need to keep pace with this rapid growth [94-95].
MAJOR DISCUSSION POINT
Edge / Physical AI and Specialized Accelerators
Paneerselvam M
4 arguments | 156 words per minute | 534 words | 205 seconds
Argument 1
Indian sovereign AI model with five‑layer infrastructure to serve SMEs and society
EXPLANATION
Paneerselvam outlines India’s sovereign AI architecture, a five‑layer model designed to provide AI capabilities across the nation, especially for small and medium enterprises and broader societal needs.
EVIDENCE
He describes the transformation in AI readiness, the five-layer infrastructure, and the goal of improving the AI readiness quotient for SMEs and the wider economy [106-112].
MAJOR DISCUSSION POINT
Sovereign AI and National Readiness
AGREED WITH
Thomas Zacharia, Gilles Garcia, Timothy Robson
Argument 2
Startups act as AI‑native innovators that raise the AI readiness quotient for SMEs
EXPLANATION
He emphasizes that startups, being AI‑native, are critical for elevating AI adoption among small and medium enterprises, driving broader economic growth.
EVIDENCE
Paneerselvam notes that startups bring AI expertise, can demonstrate value quickly, and help improve the AI readiness quotient for SMEs [106-108].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Startups are highlighted as AI-native innovators that can quickly demonstrate value and raise the AI readiness quotient for SMEs, consistent with the discussion of startups driving AI adoption in the sources [S3][S15].
MAJOR DISCUSSION POINT
Role of Startups and Ecosystem in AI Adoption
AGREED WITH
Thomas Zacharia, Timothy Robson, Gilles Garcia
Argument 3
Indian infrastructure plans include edge layers to bring AI to all societal levels
EXPLANATION
He states that India’s AI strategy incorporates edge computing layers so that AI benefits reach every segment of society, not just large corporations.
EVIDENCE
He mentions that the sovereign model’s layers are being built to ensure AI permeates all societal levels, from large enterprises down to individual users [112-113].
MAJOR DISCUSSION POINT
Edge / Physical AI and Specialized Accelerators
Argument 4
The 267,000 summit registrations reflect strong public interest and validate the need for widespread AI readiness initiatives across India.
EXPLANATION
Paneerselvam points out that the massive number of registrations demonstrates a high level of curiosity and enthusiasm among Indian youth and professionals, confirming the demand for AI education and readiness programs.
EVIDENCE
He notes that the summit received 267,000 registrations in five days, generating pride and excitement among youngsters from across the country who want to understand AI’s impact and opportunities [110-112].
MAJOR DISCUSSION POINT
Sovereign AI and National Readiness
T
Timothy Robson
7 arguments · 167 words per minute · 2753 words · 986 seconds
Argument 1
Open‑ecosystem collaboration with governments (e.g., Finland) to build multilingual LLMs
EXPLANATION
Timothy explains a partnership with Finland’s Lumi supercomputer to create language‑specific LLMs for low‑resource languages, illustrating how open ecosystems enable multilingual AI development.
EVIDENCE
He details the collaboration with Lumi in Finland to adapt large language models for Finnish and other low-resource languages, including Indian languages, highlighting the need for multilingual LLMs and the role of public-private cooperation [134-142].
MAJOR DISCUSSION POINT
Sovereign AI and National Readiness
AGREED WITH
Thomas Zacharia, Paneerselvam M, Gilles Garcia
Argument 2
AMD Developer Cloud provides free compute, Docker containers and accelerator support to move startups from POC to production
EXPLANATION
Timothy promotes AMD’s Developer Cloud, offering free compute hours, ready‑to‑run Docker containers, and accelerator access to help startups transition from proof‑of‑concept to production.
EVIDENCE
He outlines the availability of 50-100 free compute hours, pre-built Docker images, and seamless integration with models from Hugging Face, enabling startups to focus on their applications rather than infrastructure setup [187-196].
MAJOR DISCUSSION POINT
Role of Startups and Ecosystem in AI Adoption
AGREED WITH
Thomas Zacharia, Paneerselvam M, Gilles Garcia
Argument 3
Primus open‑source stack and day‑zero model support enable immediate, vendor‑neutral AI workloads
EXPLANATION
Timothy describes AMD’s Primus open‑source ecosystem and day‑zero support, which allow new AI models to run on AMD hardware out‑of‑the‑box without vendor lock‑in.
EVIDENCE
He mentions the Primus toolchain that supports training for Indic languages and notes that models like Qwen3 Coder, Baidu's PaddlePaddle, and DeepSeek received day-zero support on AMD, demonstrating immediate, vendor-agnostic compatibility [156-162][206-211].
MAJOR DISCUSSION POINT
Open Software Ecosystem and Hardware‑Agnostic Support
AGREED WITH
Thomas Zacharia, Gilles Garcia
Argument 4
Availability of diverse accelerators (TPU, Inferentia, FPGA) through neo‑clouds expands compute options beyond traditional GPUs
EXPLANATION
Timothy highlights that Indian neo‑cloud providers offer a range of accelerators—including TPUs, Inferentia, and FPGAs—giving users flexibility beyond AMD GPUs.
EVIDENCE
He describes the neo-cloud ecosystem that supplies various accelerators, enabling users to select the most suitable compute for their workloads and pricing considerations [173-177].
MAJOR DISCUSSION POINT
Edge / Physical AI and Specialized Accelerators
Argument 5
The rapid evolution of AI after ChatGPT requires continuous learning and adaptation by all stakeholders.
EXPLANATION
Timothy observes that the launch of ChatGPT dramatically shifted perceptions of AI, and he predicts that everyone’s understanding will keep changing, highlighting the necessity for ongoing education and flexibility.
EVIDENCE
He states that since ChatGPT's launch on 30 November 2022 the world has changed, and he bets that everyone's view of AI has shifted multiple times since, emphasizing the fast-moving nature of the field [115-120].
MAJOR DISCUSSION POINT
Sovereign AI and National Readiness
AGREED WITH
Thomas Zacharia
Argument 6
AMD’s collaboration with hyperscalers and the Helios system demonstration illustrate its capacity to deliver large‑scale AI infrastructure for enterprises.
EXPLANATION
Timothy highlights that AMD GPUs are present in every hyperscaler and points attendees to the Helios rack on display, showing AMD’s ability to provide massive compute resources for enterprise AI workloads.
EVIDENCE
He mentions that AMD’s GPUs are in every hyperscaler globally, references the Helios system on display, and invites attendees to see it as an example of large-scale AI infrastructure [147-151].
MAJOR DISCUSSION POINT
Edge / Physical AI and Specialized Accelerators
Argument 7
Promoting open‑source frameworks such as PyTorch, JAX and Triton abstracts hardware, enabling developers to stay hardware‑agnostic and fostering innovation.
EXPLANATION
Timothy explains that most customers write code in PyTorch, a vendor‑agnostic open‑source framework, and that tools like JAX and Triton further decouple software from specific hardware, allowing flexible and optimized AI development.
EVIDENCE
He notes that 99% of customers write code in Python using PyTorch, points to the emergence of JAX, and explains that Triton provides a compiler-level language that lets developers run code on any hardware, keeping them hardware-agnostic [209-218].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The prevalence of PyTorch, JAX and Triton as open-source frameworks that abstract hardware and keep developers hardware-agnostic is noted in the sources [S3].
MAJOR DISCUSSION POINT
Open Software Ecosystem and Hardware‑Agnostic Support
G
Gilles Garcia
5 arguments · 177 words per minute · 624 words · 211 seconds
Argument 1
Physical AI at the edge is a key component of sovereign capability
EXPLANATION
Gilles asserts that moving AI to the edge and far‑edge requires dedicated, low‑power accelerators, forming an essential part of a nation’s sovereign AI capability.
EVIDENCE
He explains that edge AI demands specialized hardware that can act locally without cloud latency, emphasizing the need for dedicated low-power accelerators beyond traditional GPUs [230-236].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Physical AI at the edge is identified as a component of sovereign AI capability, aligning with the sources that treat AI as critical infrastructure and emphasize edge deployment for national readiness [S16][S15].
MAJOR DISCUSSION POINT
Sovereign AI and National Readiness
AGREED WITH
Thomas Zacharia, Paneerselvam M, Timothy Robson
Argument 2
Startup‑built physical AI examples (e.g., Gene01 humanoid) illustrate new market opportunities
EXPLANATION
Gilles cites the Gene01 humanoid, built by an Italian startup on AMD technology, as a showcase of how startups can create innovative physical AI products for emerging markets.
EVIDENCE
He references the Gene01 humanoid presented at CES, which integrates AMD accelerators to enable perception, visualization, and tactile interaction without relying on centralized cloud resources [239-240].
MAJOR DISCUSSION POINT
Role of Startups and Ecosystem in AI Adoption
AGREED WITH
Thomas Zacharia, Paneerselvam M, Timothy Robson
Argument 3
Full‑stack hardware/software with open‑source tools is required for reliable edge AI
EXPLANATION
Gilles emphasizes that delivering dependable edge AI solutions requires an integrated stack of hardware and open‑source software, ensuring performance and flexibility.
EVIDENCE
He mentions AMD’s full-stack approach that combines hardware accelerators with open-source software stacks, enabling robots and edge devices to act safely and reliably [233-239].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
A full-stack approach combining hardware accelerators with open-source software is described as necessary for reliable edge AI, matching the source’s emphasis on integrated hardware/software stacks and open tools [S3][S1].
MAJOR DISCUSSION POINT
Open Software Ecosystem and Hardware‑Agnostic Support
AGREED WITH
Thomas Zacharia, Timothy Robson
Argument 4
Edge and far‑edge AI demand dedicated low‑power accelerators, not just GPUs
EXPLANATION
Gilles reiterates that edge AI workloads need purpose‑built, low‑power accelerators rather than conventional high‑power GPUs to meet latency and power constraints.
EVIDENCE
He repeats the need for specialized, low-power accelerators for edge and far-edge AI, contrasting them with standard GPU solutions [230-236][238-244].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Edge and far-edge AI workloads require dedicated low-power accelerators rather than traditional GPUs, a point made in the sources discussing low-power edge solutions and AI as critical infrastructure [S3][S16].
MAJOR DISCUSSION POINT
Edge / Physical AI and Specialized Accelerators
AGREED WITH
Thomas Zacharia
Argument 5
AMD’s embedded portfolio provides a range of low‑power, high‑reliability accelerators that can support India’s industrial AI growth and edge applications.
EXPLANATION
Gilles notes that AMD’s embedded product line includes specialized accelerators designed for edge AI in sectors such as medical devices, autonomous vehicles, and industrial plants, enabling India to build a robust edge AI ecosystem.
EVIDENCE
He states that the embedded portfolio offers accelerators suitable for medical, autonomous networks, cars, plants, and industrial use cases, which can help India develop edge AI capabilities [242-245].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
AMD’s low-power edge accelerator portfolio is referenced in the discussion of AMD’s lightweight edge solutions, supporting the claim about the embedded portfolio’s relevance for industrial AI growth [S3].
MAJOR DISCUSSION POINT
Edge / Physical AI and Specialized Accelerators
M
Moderator
1 argument · 11 words per minute · 65 words · 337 seconds
Argument 1
The moderator emphasizes the MeitY Startup Hub’s role in connecting government, industry and entrepreneurs to accelerate AI innovation in India.
EXPLANATION
The moderator introduces Paneerselvam M, highlighting his leadership in advancing India’s startup ecosystem and fostering partnerships among the government, industry, and entrepreneurs, underscoring the hub’s importance for AI development.
EVIDENCE
He invites Paneerselvam M, describing him as a distinguished leader with two decades of experience in innovation, management, strategic growth, and market development, instrumental in advancing India’s startup ecosystem and fostering impactful partnerships between government, industry and entrepreneurs [102-105].
MAJOR DISCUSSION POINT
Role of Startups and Ecosystem in AI Adoption
Agreements
Agreement Points
Open ecosystem and open‑source platforms prevent vendor lock‑in and enable hardware‑agnostic AI development
Speakers: Thomas Zacharia, Timothy Robson, Gilles Garcia
AMD’s commitment to open standards and open‑source platforms prevents vendor lock‑in
Primus open‑source stack and day‑zero model support enable immediate, vendor‑neutral AI workloads
Full‑stack hardware/software with open‑source tools is required for reliable edge AI
All three speakers stress that AMD’s strategy and collaborations rely on open standards and open-source software (e.g., Primus, PyTorch, Triton) to avoid vendor lock-in and allow developers to run AI workloads on any hardware [70-73][156-162][206-211][233-239].
POLICY CONTEXT (KNOWLEDGE BASE)
This consensus aligns with calls for open, interoperable AI standards to avoid vendor lock-in, as emphasized in U.S. AI standards discussions and industry commitments to open standards [S33][S31][S32].
Start‑ups are critical AI‑native innovators and should be supported with tools and resources to move from proof‑of‑concept to production
Speakers: Thomas Zacharia, Paneerselvam M, Timothy Robson, Gilles Garcia
Encouragement for startups to stay curious and leverage powerful as well as lightweight AMD solutions
Startups act as AI‑native innovators that raise the AI readiness quotient for SMEs
AMD Developer Cloud provides free compute, Docker containers and accelerator support to move startups from POC to production
Startup‑built physical AI examples (e.g., Gene01 humanoid) illustrate new market opportunities
Thomas urges startups to stay curious and use AMD’s hardware; Paneerselvam highlights startups as AI-native drivers for SMEs; Timothy offers free compute, Docker images and accelerator access via AMD Developer Cloud; Gilles points to startup-built physical AI products as market opportunities, all underscoring the pivotal role of startups in AI adoption [247-249][106-108][187-196][239-240].
POLICY CONTEXT (KNOWLEDGE BASE)
The African Startup Policy Framework explicitly aims to foster AI startups through supportive policies and incentives, reflecting broader agreement on startup support [S34].
Edge and low‑power accelerators are essential alongside high‑performance GPUs for sovereign AI capability
Speakers: Thomas Zacharia, Gilles Garcia
AMD provides lightweight, low‑power edge solutions alongside high‑performance GPUs
Physical AI at the edge is a key component of sovereign capability
Edge and far‑edge AI demand dedicated low‑power accelerators, not just GPUs
Thomas notes AMD’s lightweight, low-power edge offerings complement its GPU clusters, while Gilles emphasizes that sovereign AI requires dedicated low-power edge accelerators rather than traditional GPUs [71-73][78-80][230-236][238-244].
POLICY CONTEXT (KNOWLEDGE BASE)
European strategies stress hybrid cloud-edge architectures and low-power edge AI to achieve digital sovereignty, and industry partnerships such as Eurotech-PNY target high-performance edge deployments [S38][S45].
Sovereign AI and national readiness require public‑private partnerships, federated compute/data, and multi‑layer infrastructure
Speakers: Thomas Zacharia, Paneerselvam M, Gilles Garcia, Timothy Robson
Genesis Initiative: AI to accelerate scientific discovery via public‑private partnership
Indian sovereign AI model with five‑layer infrastructure to serve SMEs and society
Physical AI at the edge is a key component of sovereign capability
Open‑ecosystem collaboration with governments (e.g., Finland) to build multilingual LLMs
Thomas describes the U.S. Genesis Initiative as a public-private AI effort; Paneerselvam outlines India’s five-layer sovereign AI architecture; Gilles links edge AI to sovereign capability; Timothy cites a Finland-AMD collaboration for multilingual LLMs, all illustrating the need for coordinated, multi-layer, public-private AI strategies [16-20][46-49][106-112][230-236][134-142].
POLICY CONTEXT (KNOWLEDGE BASE)
India’s sovereign AI roadmap highlights coordinated public-private action, federated compute, and data governance as core to national AI readiness [S39][S40].
AI is evolving rapidly; continuous learning and adaptation are required from all stakeholders
Speakers: Thomas Zacharia, Timothy Robson
If you think that AI is just like the Prime Minister said, it’s just the early stages
The rapid evolution of AI after ChatGPT requires continuous learning and adaptation by all stakeholders.
Thomas warns that AI is still in early stages and calls for openness; Timothy notes that the launch of ChatGPT dramatically shifted AI understanding and that ongoing learning is essential [59-60][115-120].
POLICY CONTEXT (KNOWLEDGE BASE)
Multiple reports note the speed of AI change demands continuous upskilling and learning across the workforce and society [S28][S29][S30].
Similar Viewpoints
Both speakers argue that achieving sovereign AI requires integrating data and compute across national labs, academia and industry through a federated, cloud‑enabled, secure infrastructure, whether described as a federated model (Thomas) or a five‑layer national architecture (Paneerselvam) [45-48][47][106-112].
Speakers: Thomas Zacharia, Paneerselvam M
Federated compute and data infrastructure with cloud‑enabled lab operations, secured by design, is essential for sovereign AI.
Indian sovereign AI model with five‑layer infrastructure to serve SMEs and society
Both emphasize that open‑source software stacks (e.g., Primus, PyTorch) allow AI developers to remain hardware‑agnostic and avoid vendor lock‑in, supporting a flexible ecosystem [70-73][156-162].
Speakers: Thomas Zacharia, Timothy Robson
AMD’s commitment to open standards and open‑source platforms prevents vendor lock‑in
Primus open‑source stack and day‑zero model support enable immediate, vendor‑neutral AI workloads
Unexpected Consensus
Agreement on the importance of low‑power edge accelerators between a high‑performance supercomputing champion and an edge‑AI specialist
Speakers: Thomas Zacharia, Gilles Garcia
AMD provides lightweight, low‑power edge solutions alongside high‑performance GPUs
Physical AI at the edge is a key component of sovereign capability
Edge and far‑edge AI demand dedicated low‑power accelerators, not just GPUs
Thomas, known for leading exascale supercomputing projects, and Gilles, focused on edge physical AI, both stress that low-power edge accelerators are essential for sovereign AI, which is surprising given their traditionally different hardware focus [71-73][78-80][230-236][238-244].
POLICY CONTEXT (KNOWLEDGE BASE)
Industry dialogues highlight consensus that low-power edge accelerators complement supercomputing, echoed in edge AI discussions and hybrid strategies [S35][S38].
Overall Assessment

The speakers show strong convergence on four major themes: (1) the necessity of open‑source, open‑standard ecosystems to avoid vendor lock‑in; (2) the pivotal role of startups, supported by AMD tools and public‑private initiatives; (3) the strategic importance of low‑power edge accelerators as part of sovereign AI; (4) the need for coordinated, multi‑layer public‑private frameworks (e.g., Genesis Initiative, Indian five‑layer model) to achieve national AI readiness. Additionally, there is shared recognition of AI’s rapid evolution demanding continuous learning.

High consensus across hardware, software, policy and startup perspectives, indicating a unified vision that AI readiness must combine open ecosystems, startup empowerment, edge capabilities, and sovereign, federated infrastructure. This broad agreement suggests that future policy and industry actions can build on a common foundation to advance AI for national development.

Differences
Different Viewpoints
Centralized large‑scale compute versus decentralized low‑power edge accelerators for sovereign AI
Speakers: Thomas Zacharia, Gilles Garcia
Thomas emphasizes massive centralized compute (American Science Cloud, exascale systems, scaling toward zetta- and yotta-scale) as the backbone of AI readiness
Gilles stresses that edge and far‑edge AI require dedicated low‑power accelerators, not high‑power GPUs, to act locally without cloud latency
Thomas describes a strategy built on huge centralized clusters such as the American Science Cloud (MI355) and future scaling toward zetta- and yotta-scale compute, a roughly 10,000-fold increase [48-49][94-95], whereas Gilles argues that true sovereign capability must rely on edge-focused, low-power accelerators that can operate without cloud dependence [230-236][238-244]. The two positions diverge on where the primary investment and architectural focus should lie.
POLICY CONTEXT (KNOWLEDGE BASE)
Debates on centralized vs distributed AI infrastructure appear in heterogeneous compute policy analyses and national AI sovereignty discussions, reflecting divergent priorities [S36][S48][S49].
Suitability of GPUs versus purpose‑built accelerators for AI workloads
Speakers: Thomas Zacharia, Gilles Garcia
Thomas notes an over‑indexing on GPUs and promotes AMD’s full AI stack that includes GPUs but also broader infrastructure
Gilles claims GPUs are not appropriate for edge AI and that low‑power, purpose‑built accelerators are essential
Thomas points out that AI discussions over-emphasize GPUs and that AMD provides a complete suite from PCs to edge [7-12], while Gilles repeatedly states that edge AI cannot rely on GPUs and needs specialized low-power accelerators [230-236][238-244], reflecting a tension over the role of GPUs in future AI deployments.
POLICY CONTEXT (KNOWLEDGE BASE)
Partnerships leveraging NVIDIA GPUs for edge (Eurotech-PNY) and national GPU scaling initiatives illustrate the trade-off between general-purpose GPUs and specialized accelerators [S45][S41].
Unexpected Differences
Scale‑first versus edge‑first AI strategy within the same company
Speakers: Thomas Zacharia, Gilles Garcia
Thomas projects a future dominated by ever‑larger centralized compute clusters (a roughly 10,000‑fold scaling) as the solution to AI challenges
Gilles argues that the critical future lies in low‑power edge accelerators that can act without cloud latency
It is surprising that two senior AMD representatives advocate opposite architectural priorities, one championing massive centralized exascale systems and the other lightweight edge hardware, despite sharing the same corporate background [94-95][230-236][238-244].
POLICY CONTEXT (KNOWLEDGE BASE)
Discussions on centralized vs edge-first approaches in AI strategy, such as hybrid cloud-edge models, underscore this strategic tension [S36][S37][S38].
Different national models for sovereign AI
Speakers: Thomas Zacharia, Paneerselvam M
Thomas describes the U.S. DOE Genesis Initiative with a public‑private partnership centred on a national cloud (American Science Cloud)
Paneerselvam outlines India’s five‑layer sovereign AI architecture aimed at SMEs and societal reach
Both present sovereign AI roadmaps but propose distinct structures: Thomas’s model relies on a single national cloud and federated compute across labs, while Paneerselvam’s stresses a multi-layered infrastructure that permeates all societal levels, showing an unexpected divergence in how sovereign AI should be organized [48-49][106-112].
POLICY CONTEXT (KNOWLEDGE BASE)
Comparative analyses of India’s AI sovereignty roadmap, African startup policies, and defense-focused AI strategies reveal varied national models for achieving AI sovereignty [S39][S34][S49].
Overall Assessment

The discussion reveals limited outright conflict; most speakers align on the importance of open ecosystems, startup involvement, and talent development. The principal disagreements centre on architectural focus—whether AI readiness should be driven by massive centralized compute resources or by decentralized low‑power edge accelerators—and on the design of sovereign AI frameworks (U.S. cloud‑centric vs. India’s layered approach).

Low to moderate. The disagreements are technical and strategic rather than ideological, suggesting that while consensus exists on broad goals (open, inclusive AI), divergent views on implementation could affect coordination of policy and investment across regions.

Partial Agreements
Both agree that an open ecosystem is essential for AI innovation, but Thomas focuses on hardware‑level openness while Timothy emphasizes software toolchains and model support [70-73][156-162][209-218]
Speakers: Thomas Zacharia, Timothy Robson
Thomas highlights AMD’s commitment to open standards and open‑source platforms to avoid vendor lock‑in
Timothy describes the Primus open‑source stack, day‑zero model support and the use of open frameworks (PyTorch, JAX, Triton) to keep developers hardware‑agnostic
All three recognise startups as a catalyst for AI adoption, yet they differ on the mechanisms: Thomas talks about talent pipelines and innovation labs, Paneerselvam on improving the SME readiness quotient, and Timothy on providing cloud resources and tooling [66-69][106-108][187-196]
Speakers: Thomas Zacharia, Paneerselvam M, Timothy Robson
Thomas stresses talent development, research enablement and startup innovation labs as core to AI readiness
Paneerselvam stresses AI‑native startups raising the AI readiness quotient for SMEs and the broader economy
Timothy promotes the AMD Developer Cloud, free compute and Docker containers to help startups move from POC to production
Takeaways
Key takeaways
AI readiness is a holistic challenge that goes far beyond just GPUs; it requires end‑to‑end compute, data, software, governance and edge capabilities.
The U.S. Genesis Initiative illustrates a sovereign AI model that uses AI to accelerate scientific discovery, energy, and national‑security missions through public‑private partnerships.
Human‑in‑the‑loop governance is essential for safe, trustworthy AI, especially in high‑impact government functions.
India is developing its own sovereign AI stack with a five‑layer infrastructure aimed at serving SMEs, academia, and broader society.
Start‑ups are positioned as AI‑native innovators that can raise the AI‑readiness quotient for SMEs and drive new business models.
An open, standards‑based ecosystem (open‑source software, open hardware specifications, day‑zero model support) is critical to avoid vendor lock‑in and to accelerate adoption.
Edge/physical AI—low‑power, dedicated accelerators for real‑time decision making—will be a key component of sovereign AI deployments.
AMD is providing concrete resources: the Helios AI rack, the American Science Cloud, and the AMD Developer Cloud with free compute hours, Docker containers, and day‑zero support for new models.
Resolutions and action items
AMD will continue to supply and showcase the Helios rack and related AI infrastructure at the summit and future events.
AMD Developer Cloud will be made available to startups with free compute credits and pre‑packaged Docker containers to accelerate POC‑to‑production workflows.
AMD and the MeitY Startup Hub (India) will deepen collaboration on the Indian sovereign AI five‑layer model, including support for SMEs and multilingual LLM development.
AMD commits to maintaining open‑source toolchains (Primus, Triton, etc.) and day‑zero support for emerging models to ensure hardware‑agnostic deployment.
Partnerships with neo‑cloud providers and other accelerator vendors (TPU, Inferentia, FPGA) will be expanded to give Indian developers a broader compute palette.
Unresolved issues
Specific mechanisms for federating compute and data across national labs, academia and industry in the U.S. and Indian contexts remain undefined.
Details of the governance framework (policies, standards, certification processes) for human‑in‑the‑loop AI have not been finalized.
How the five‑layer Indian sovereign AI architecture will be funded, governed and scaled to reach all SMEs is still an open question.
Standardization of edge‑accelerator interfaces and certification for safety‑critical applications needs further work.
The roadmap for multilingual LLM development for low‑resource Indian languages, including data collection and model training responsibilities, was not fully detailed.
Suggested compromises
Adopt an open‑ecosystem approach that balances government‑driven security and sovereignty requirements with industry‑led innovation to avoid lock‑in.
Implement human‑in‑the‑loop governance that allows autonomous AI execution while retaining final human validation for safety‑critical outcomes.
Leverage public‑private partnerships (e.g., Genesis Initiative, Indian sovereign AI program) to share funding risk and accelerate deployment without compromising national interests.
Thought Provoking Comments
In AI, there seems to be an over-indexing of AI and GPUs. When in reality, AI is much broader. GPU is obviously a significant part, but we provide a full suite of AI capability from AI PCs to core infrastructure to the edge.
Challenges the common industry narrative that equates AI progress solely with GPU horsepower, expanding the conversation to include software, data, edge devices, and governance.
Set the thematic foundation for the whole panel, prompting other speakers to discuss broader aspects such as sovereign AI, multilingual models, and edge deployments rather than focusing only on raw compute.
Speaker: Thomas Zacharia
The Genesis Initiative aims to use AI to accelerate scientific discovery, reduce R&D cost, and federate compute and data across national labs, academia, and industry while maintaining security and governance by design.
Introduces a concrete government‑backed program that frames AI as a national‑scale, cross‑sector capability, moving the dialogue from commercial hype to public‑policy and research infrastructure.
Shifted the discussion toward sovereign AI and public‑private partnerships; later speakers (Paneerselvam and Timothy) referenced national initiatives and the need for open ecosystems to support such programs.
Speaker: Thomas Zacharia
Governance does not mean regulation; it means ensuring a human in the loop before AI outputs are used, especially for agentic systems driving innovation.
Adds a nuanced view of responsible AI that separates technical safeguards from regulatory frameworks, highlighting ethical considerations in large‑scale AI deployment.
Prompted later speakers (Timothy and Gilles) to stress open ecosystems and transparent tooling, reinforcing the idea that responsible AI requires both technical and organizational measures.
Speaker: Thomas Zacharia
We delivered the first exascale system for under 20 megawatts, proving that audacious power‑efficiency goals can rally people and achieve breakthroughs that were once thought impossible.
Provides a powerful example of how setting bold, measurable targets can drive innovation, illustrating the practical side of the earlier strategic points about compute capability.
Inspired confidence among the audience and other panelists, leading Timothy to reference AMD’s early work on large models (Bloom 176B) and reinforcing the narrative of AMD’s pioneering role.
Speaker: Thomas Zacharia
How do we get a language spoken by only 5 million people—like Finnish or many Indian languages—into a large language model? This is the challenge we tackled with Lumi in Finland and want to address for India’s 22 official languages.
Highlights the often‑overlooked multilingual dimension of AI, turning the conversation toward inclusivity, data scarcity, and the need for localized models.
Created a turning point toward discussion of language diversity, prompting references to India’s sovereign AI agenda and the importance of open‑source tooling to support low‑resource languages.
Speaker: Timothy Robson
Day‑zero support means a new model runs on AMD hardware out of the box, fully optimized and guaranteed—eliminating the myth that AMD is only for inference or that you must be tied to a single vendor.
Introduces a concrete technical promise that addresses developer friction and vendor lock‑in concerns, reinforcing the open‑ecosystem theme.
Reinforced the open‑source narrative, leading the audience to see AMD’s software stack (PyTorch, Triton, Primus) as a differentiator; later Gilles and Thomas echoed the need for flexible, vendor‑agnostic solutions.
Speaker: Timothy Robson
Physical AI is moving to the far edge—robots, autonomous cars, industrial plants—requiring dedicated accelerators that can act without relying on the cloud; AMD’s Gene01 humanoid is a proof point.
Shifts focus from data‑center AI to edge AI, introducing the concept of ultra‑low‑latency, on‑device decision making and the hardware challenges it brings.
Expanded the scope of the discussion to include edge deployments, prompting Thomas’s closing remarks about lightweight low‑power GPUs and reinforcing the panel’s message that AI solutions must be fit‑for‑purpose.
Speaker: Gilles Garcia
Start‑ups are AI natives and can drive the readiness quotient for SMEs across India; the massive registration (267,000) shows a hunger for AI knowledge and a need to democratize access beyond large corporates.
Emphasizes the ecosystem dimension—how startups and SMEs, not just large labs or hyperscalers, are crucial for national AI readiness and inclusive growth.
Balanced the high‑level policy and hardware talks with a grassroots perspective, encouraging the panel to stress accessibility, open tools, and support programs (e.g., AMD Developer Cloud) for smaller players.
Speaker: Paneerselvam M
Overall Assessment

The discussion was shaped by a series of pivot points that broadened the view of AI from a GPU‑centric, cloud‑only narrative to a holistic ecosystem encompassing sovereign research initiatives, multilingual inclusivity, responsible governance, edge deployments, and startup empowerment. Thomas Zacharia’s opening framing and the Genesis Initiative set the strategic backdrop, while Timothy’s multilingual challenge and day‑zero support concept injected concrete technical and societal challenges. Gilles’s edge‑AI example and Paneerselvam’s focus on SMEs added depth to the hardware and ecosystem dimensions. Together, these comments redirected the conversation toward an open, inclusive, and purpose‑driven AI future, influencing subsequent speakers to reinforce the themes of openness, accessibility, and responsible innovation.

Follow-up Questions
How can data from national labs, academia, and private sector be integrated to accelerate scientific discovery using AI?
Thomas highlighted the need to federate compute and data across diverse research institutions and asked how to integrate these disparate sources effectively.
Speaker: Thomas Zacharia
What is the best way to build a federated compute and data infrastructure with cloud‑enabled lab operations, security, and governance for public‑private partnerships?
He emphasized the challenges of security, governance by design, and composable standards when creating a national AI cloud.
Speaker: Thomas Zacharia
How can low‑resource Indian languages (e.g., Bodo, Konkani, Dogri, Sindhi, Nepali) be incorporated into large language models to create an inclusive Indian LLM?
Tim discussed the difficulty of adapting existing LLMs to languages spoken by fewer than 5 million people and sought solutions for multilingual model training.
Speaker: Timothy Robson
What processes and tools are needed to move AI research prototypes into enterprise‑ready solutions that can be used by employees within corporations?
He asked how to transition from research or accelerator environments to production use cases inside companies, highlighting the gap between proof‑of‑concept and operational deployment.
Speaker: Timothy Robson
What are the trade‑offs between different Kubernetes service offerings (hyperscalers vs. Neo clouds) for AI workloads in terms of cost, latency, and manageability?
Tim suggested evaluating various Kubernetes‑based cloud options to help startups choose the right compute platform.
Speaker: Timothy Robson
How can the lessons learned from cloud‑based AI be transferred to physical AI at the edge, such as robotics, autonomous vehicles, and industrial systems?
Gilles raised the challenge of adapting cloud AI capabilities to low‑power, low‑latency edge accelerators for physical AI applications.
Speaker: Gilles Garcia
What governance frameworks ensure a human‑in‑the‑loop while scaling agentic AI systems for scientific and national security missions?
He stressed the importance of governance (distinct from regulation) to keep humans overseeing AI‑driven decisions, especially in high‑stakes domains.
Speaker: Thomas Zacharia
How can day‑zero support for new AI models be reliably provided across AMD hardware to avoid vendor lock‑in?
Tim highlighted the need for immediate, out‑of‑the‑box compatibility of emerging models with AMD GPUs, suggesting research into standardized support pipelines.
Speaker: Timothy Robson
What strategies can help startups move from proof‑of‑concept to production (POC‑to‑PO) while managing compute costs and leveraging AMD’s developer cloud resources?
He pointed out the importance of affordable compute, Docker containers, and accelerator programs for early‑stage companies to scale their AI products.
Speaker: Timothy Robson
How can open‑source ecosystems and standards be expanded to support diverse AI workloads (training, inference, agentic workflows) without creating new lock‑in risks?
Tim mentioned the need for continued research into open tools (e.g., Primus, Triton, PyTorch) that keep the AI stack vendor‑agnostic.
Speaker: Timothy Robson

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

AI for Bharat’s Health: Addressing a Billion Clinical Realities

Session at a glance: summary, keypoints, and speakers overview

Summary

The panel examined how artificial intelligence is being integrated into Indian hospitals, with Abhay Soi describing Max Healthcare’s early digital strategy that built a 15-year patient data lake and now yields a 10-15% occupancy advantage over competitors [14][5]. He noted that AI is intended to improve outcomes invisibly, creating a closed-loop system for doctors and patients, though early attempts were hampered by language-specific search failures and limited technology [16-18]. Current AI applications at Max focus on task-level efficiencies such as predictive bed-availability analytics, safety monitoring, and automated clinical data capture that frees clinicians to spend more time on patient care [22-27].


Vikalp Sahni asked whether AI adoption has become a CEO priority comparable to earlier digitisation mandates, prompting Soi to acknowledge widespread enthusiasm but also frequent setbacks, especially with ICD-11 tagging where most commercial solutions are costly and unreliable [34-38][40-44]. He emphasized that healthcare AI demands rigorous supervision because errors can jeopardise patient safety and data privacy, making failures a necessary step toward robust implementation [52-58]. Soi illustrated the safety value of AI with an ECG-interpretation example, arguing that an assistive tool could prevent missed heart attacks by prompting earlier admissions even if it does not replace clinicians [113-120].


He further argued that India’s youthful demographic will soon create a massive demand for medical services that existing infrastructure cannot meet, making predictive, AI-driven care essential for future capacity [128-133][139-144]. Dr Gupta highlighted the national ABDM framework that now assigns 860 million ABHA IDs and calls for policies that treat private and public sectors uniformly, thereby providing the digital backbone for AI deployment [184-190][312-314]. Nikhil Dhongari and Jigar Halani stressed the need for context-specific AI, such as voice-first, multilingual models that respect regional variations and can operate on edge devices where connectivity is limited [203-210][218-230][376-379].


Padmini Vishwanath added that building readiness frameworks for remote settings improves provider trust and equity, suggesting a slower, standards-driven rollout rather than layering new systems on legacy ones [318-324]. Tanvi Lall argued that successful AI adoption requires a transformation beyond technology, including education, trust-building, and workflow integration, especially in fragmented primary-care environments [272-306]. Across the discussion, participants agreed that while AI offers low-hanging efficiency gains, its long-term impact depends on institutional culture, rigorous oversight, and coordinated policy support [80-85][52-58].


The consensus was that AI is no longer a peripheral buzzword but an emerging necessity that must be introduced cautiously, with leadership-driven process changes and a focus on safety, data quality, and equitable access [67-71][128-133]. Ultimately, the panel concluded that India’s health system can leverage AI to bridge upcoming capacity gaps, provided that trust, regulatory frameworks, and robust data ecosystems are established first [318-324][128-133][55-57].


Keypoints


Major discussion points


Building a digital foundation for AI – Max Healthcare created a patient data lake spanning 15 years and a “closed-loop” system to feed real-time analytics, enabling predictive bed-availability, safety alerts and automated data capture that frees clinicians to focus on care [14-15][22-24][34-44].


Technical and regulatory hurdles – Early AI projects faced language-search failures, costly ICD-11 tagging tools, and limited off-the-shelf solutions; the team stresses the need for rigorous supervision, patient-safety safeguards and strict data-privacy compliance before wider rollout [34-44][52-58][90-98].


Concrete clinical benefits – AI is positioned as an assistive safety net (e.g., flagging high-risk ECGs to prevent missed heart attacks) and as a workflow enhancer through predictive discharge planning and history-taking automation, directly improving efficiency and outcomes [113-118][85-88][22-24].


Strategic drivers and future outlook – Demographic shifts and a looming shortage of infrastructure and clinicians make AI adoption a national necessity; the speakers anticipate dramatic scaling of predictive-health tools over the next three to five years to alleviate systemic pressure [128-145].


Trust, equity and sector-wide adoption – Trust in physicians remains paramount; integrating AI requires culturally and linguistically tailored solutions, robust multilingual voice interfaces, and coordinated policy that bridges public and private sectors while ensuring equity for underserved populations [105-112][218-224][277-284][318-324].


Overall purpose / goal


The panel aimed to share practical experiences and lessons from Max Healthcare’s AI journey, examine the broader challenges of scaling AI in Indian healthcare, and articulate a vision for how digital tools, grounded in trustworthy, equitable, and policy-aligned frameworks, can become essential to meeting the country’s future healthcare demand.


Overall tone


The conversation began with an upbeat, showcase-style tone highlighting successes and innovations. It then shifted to a more cautious, realistic tone as speakers discussed failures, regulatory constraints, and the need for rigorous oversight. By the latter part, the tone became forward-looking and collaborative, emphasizing strategic urgency, the importance of trust, and a collective commitment to responsibly scale AI across the health ecosystem.


Speakers

Abhay Soi – Representative of Max Healthcare (discussing AI adoption and digital transformation)


Vikalp Sahni – Moderator / Panelist (poses questions to speakers)


Deepak Tuli – Moderator / Panel discussion host


Dr. Rajendra Pratap Gupta – Advisor to the Health Minister; instrumental in defining the ABDM white paper; involved in National Health Policy development and Mayo Clinic strategy in India


Jigar Halani – Director, Enterprise Solutions Architecture and Engineering, NVIDIA South Asia


Padmini Vishwanath – Researcher, WHO South-East Asia Regional Office (WHO SEARO)


Tanvi Lall – Analyst, PeoplePlus (initiative of Aikstep) – discusses AI transformation trends in health, education, agriculture


Nikhil Dhongari – Speaker on ABDM implementation (no specific title mentioned in transcript)


Announcer – Director IT, National Health Authority (leads technical architecture for Ayushman Bharat initiatives)


Audience member 1 – (no title/role provided)


Audience member 2 – (no title/role provided)


Additional speakers:


(none)


Full session report: comprehensive analysis and detailed insights

The session opened with Abhay Soi thanking the organisers and noting that the venue felt like “a microcosm of the globe” [1-3]. He began by emphasizing that “Max safety is very much important,” linking the hospital’s strong safety culture to its performance advantage, which he attributed to patient trust and a 10-15% higher occupancy than the nearest competitor [5], a benefit he said stems from a long-standing digital strategy rather than a fleeting AI buzzword [6-8].


Opening remarks and early digital foundation


Max Healthcare’s digital journey started five to six years ago with a common-size data lake that aggregates fifteen years of patient records and is refreshed in real time [14-15]. The goal was to create a closed-loop digital ecosystem for clinicians and patients, loosely modelled on Google’s architecture. Early attempts were hampered by language-specific search failures and a lack of native-language processing capabilities [16-18]. Despite these setbacks, the hospital now leverages the data lake for task-level AI applications: predictive analytics that forecast vacant beds, safety-monitoring tools, and automated clinical data capture through mobile forms, which reduces documentation time and frees clinicians to add value elsewhere [22-27][24-27].


Culture of learning from failure


Soi was candid about the numerous failures that have accompanied the AI journey. He described a culture that welcomes failure as a learning mechanism, comparing it to Edison’s iterative process [34-38][47-48]. He noted that the difficulty of implementing ICD-11 tagging forced Max to develop in-house capabilities because most commercial solutions were ineffective or prohibitively expensive [40-44][45-46]. He stressed that, unlike education where misinformation can be corrected, healthcare errors have life-threatening consequences; therefore, rigorous supervision, patient-safety safeguards and strict data-privacy controls are non-negotiable [52-58][90-98].


Data-privacy legislation


Nikhil Dhongari highlighted the relevance of India’s Digital Personal Data Protection (DPDP) Act, emphasizing that data anonymisation is essential for training reliable AI models and for complying with the new legal framework [210-212].


Clinical illustration of safety-first AI


Soi recounted a scenario in which a patient presenting with chest pain had a normal-looking ECG; the attending cardiologist discharged the patient, who later suffered a heart attack. He argued that an AI-driven decision-support tool could have flagged the subtle ECG changes, prompting admission and potentially saving the patient’s life. While he stopped short of claiming AI will replace clinicians, he positioned it as an essential assistive layer that should be deployed first for safety before efficiency gains are pursued [113-120].


Demographic pressure and AI necessity


The speakers linked these technological imperatives to India’s demographic trajectory. Soi warned that the country’s median age of around 29 years will converge with European levels within fifteen years, creating a massive surge in demand for medical services that existing infrastructure and workforce cannot meet. He therefore framed AI-enabled predictive health, identifying at-risk individuals before they become patients, extending home care, and replicating clinician expertise, as an absolute necessity for the nation’s future health security [128-145].


Policy context and partnerships


Dr Rajendra Pratap Gupta traced the evolution of the Ayushman Bharat Digital Mission (ABDM) from a 2014 manifesto idea to a national framework that now assigns 860 million ABHA IDs to citizens [184-190]. He highlighted that the latest National Health Policy explicitly includes both public and private sectors, breaking a historic barrier and creating a unified digital backbone for AI deployment [312-314][178-186]. Gupta also mentioned a partnership with the Mayo Clinic platform to share anonymised data sets, illustrating how international collaborations can enrich India’s AI ecosystem [250-252].


Trust, equity, and multilingual challenges


Vikalp Sahni raised the issue of trust, noting that patients traditionally place their trust solely in doctors and that the introduction of AI can perturb this relationship [105-112]. Jigar Halani discussed trust in terms of accuracy and language relevance, emphasizing that AI must work reliably across India’s 22 official languages; he illustrated this with a voice-translation example that could convert a Tamil doctor’s speech into Hindi for a patient in Gujarat, thereby solving a long-standing communication gap [218-224][327-332]. Tanvi Lall added that personalisation, particularly context-specific language adaptation, is essential, and that successful adoption requires a transformation that includes education, continuous feedback, and workflow integration rather than isolated pilots [277-285][288-306].


Broader clinical and ethical considerations


Padmini Vishwanath highlighted emerging discussions on AI in palliative care, stressing the need to preserve empathy, dignity, and the quality of caregiver-patient interaction as AI tools become more prevalent [260-263]. Dr Gupta stressed that unethical prescribing practices, not technology, are the main barrier to wider AI adoption and called for stronger regulation of prescription ethics [270-272]. Deepak cited China’s real-time prescription-error feedback system as a regulatory model that instantly flags prescribing mistakes, illustrating a possible pathway for India [280-283].


Startup ecosystem and actionable tools


Nikhil praised Eka Scribe and other startups that are developing clinical documentation and data-anonymisation solutions, noting their role in accelerating AI readiness [290-292]. He also described the free “e-shift/flight” language-capture solution that enables small hospitals to record and translate patient interactions, positioning it as an actionable initiative for underserved facilities [300-302].


Institutional readiness and cultural change


Soi warned against a “first-mover” mentality as part of his broader point on institutional culture: AI rollout must be circumspect, driven by learning and supervision, because the stakes in health are far higher than in other sectors [52-58][90-98]. Vikalp echoed the gap between rapid AI evolution and the slower pace at which hospitals can safely adopt it [89-90]. Both Jigar and Tanvi agreed that the primary barrier is a mindset shift: clinicians need to see AI as a trusted aide, and organisations must embed AI into daily processes through sustained education and trust-building activities [333-337][291-298].


Technical infrastructure debate


Halani explained that most multilingual voice services run in the cloud, but for remote, low-bandwidth use cases, edge deployment may be necessary; he advocated for India-hosted servers to address cost and data-sovereignty concerns [365-371][376-380][387-389]. An audience question about whether a 22-language solution should be edge, cloud, or hybrid prompted him to stress that connectivity is essential for voice AI and that a hybrid model may be required depending on the scenario [381-384].


Low-hanging-fruit AI applications


Predictive bed-availability analytics and automated real-time clinical forms have already delivered efficiency gains by improving patient flow and reducing documentation burden for clinicians, providing a foundation for broader ecosystem-wide AI integration [22-27][24-27][277-283].


Points of disagreement


Soi’s safety-first, cautious rollout contrasted with Tanvi’s call for a holistic transformation that moves beyond pilots to full workflow integration [90-98][288-306]. While Soi highlighted supervision and data-privacy as primary barriers, Vikalp placed trust between patients, doctors and AI at the forefront, illustrating a subtle shift in priority [52-58][105-112]. Soi portrayed the data lake as a mature, real-time resource, whereas Nikhil and Dr Gupta argued that a culture of data sharing is still lacking, with many clinicians still reliant on paper records, a point reinforced by the audience question on Indian versus global data reliance [14-15][418-424][398-399]. The optimal deployment architecture for multilingual voice AI (edge versus cloud) remained unresolved, reflecting differing views on cost, latency and privacy [365-371][376-380].


Conclusion and agreements


The panel converged on six core agreements:


1. A robust, India-centric, real-time data infrastructure is indispensable.


2. Patient safety, privacy (including compliance with the DPDP Act), and trust are non-negotiable.


3. AI rollout must be cautious, culturally ready, and embedded beyond pilots.


4. Early, task-level AI use cases can deliver immediate efficiency gains.


5. Coherent policy and regulatory frameworks (ABDM, National Health Policy, prescription-practice regulation) are essential to steer adoption.


6. Multilingual, voice-first solutions are critical to bridge digital divides.


Action items include continuing Max’s in-house AI development (e.g., affordable ICD-11 tagging, predictive bed tools), expanding the free e-shift/flight language-capture solution for small hospitals, adopting hybrid cloud-edge architectures for voice AI, and embedding education and feedback loops to move pilots into sustained practice. Unresolved challenges-finalising regulatory pathways for AI-driven clinical decision support, achieving a nationwide culture of data sharing, and standardising deployment architectures-were highlighted as priorities for the next year’s agenda.


Key take-aways


– Build an India-centric, real-time data lake that respects DPDP-mandated anonymisation.


– Deploy AI first as a supervised safety layer, with strict privacy and trust safeguards.


– Strengthen regulation of prescribing ethics and data-privacy to unlock wider AI adoption.


– Prioritise multilingual, voice-first solutions and hybrid infrastructure for remote settings.


– Institutionalise continuous education, feedback, and workflow integration to move beyond pilots.


Session transcript: complete transcript of the session
Abhay Soi

Thank you very much for having me here at this very, very prestigious event. I just came in from Mumbai in the morning, and what I see over here is, I mean, I think it seems to be the microcosm of the globe, in fact. So thank you very much. Yes, I think, you know, I take all these compliments on behalf of Max, and I think it starts and ends with the trust which is sort of reposed by patients at our hospital system. Today, our occupancy is at least 10% to 15% better than the next best player in the hospital system. And, you know, one of the things that I want to point out is, you know, AI seems to be sort of the buzzword, of course, today.

But five or six years ago, when we started our journey, we started bringing digital technology at that point in time to the core. And what you see today, what you experience, and, you know, you mentioned better outcomes, and perhaps… patient services. But that is what you experience. What you don’t see is the technology behind it. And I think that is the true test of technology, and that will be the true test of AI as well. When you don’t interface with technology, but the experiences are improved. Having said that, I think, you know, like I said, we started this journey a few years ago. We started by creating a common-size data lake for all the patients which have been through our doors over the last 15 years, and which are doing so on a real-time basis today.

Having said that, you know, these were our attempts. We tried to sort of create a closed-loop system, like Google, so to say, for our doctors and our patients. But, you know, we, like many people, faced very early, very big setbacks because we didn’t have the technology. Because when we used to do search results, we used to get zero results in the search engine because it wasn’t sort of native to the language, and that’s stuff that we’ve been playing with. But having said that, I think the early days of AI are going to impact tasks rather than, although one is moving towards institutional, adopting it from an ecosystem standpoint, from inculcating it within the institution, so it becomes an intrinsic part of the institution.

But I think today it’s affecting our tasks. It’s affecting tasks of efficiency. You know, we’ve already started doing predictive analysis of beds which are vacant and available and so on. It’s working on safety measures. I think though the early sort of wins that we have, especially with respect to patient satisfaction of the risks and so on, I think clinical support, you know, it’s data collection, a lot of… time by clinicians was being spent in the past to collect data. Now a lot of that data is being collected through forms which are in our apps today. And you can speak to them. It kind of collates in a particular manner. So the clinician actually spends less time in perhaps gathering history than in providing a little more value of the value chain.

Vikalp Sahni

Great. And I think, Max, I mean, as you mentioned, there is this data lake that you have created is quite ahead in terms of digital adoption. And I’m sure when you would be starting, and this is a term that we use and see quite a lot, that adoption of digital in a hospital. So is that what is also happening on top of this large data lake and the EMR solutions that you have created on, for Max, is there like a AI adoption wave that is happening? And do you think, like, when the digital adoption happened, things such as NABH, ABDM, many of these things started coming up, talking about policy, talking about regulation. In AI adoption, are there any challenges, any things that you see that can help in this adoption to be much more faster?

Or you see that people are just going all crazy on getting AI adopted in the hospital settings?

Abhay Soi

I think, you know, there is a desire all across, you know. But having said that, desire does meet reality. I would say more than occasionally. And that comes in the form of failures, which are welcome, sort of. I mean, we quite welcome it, actually, because the more you try, the more you will fail, and the more you will sort of have better outcomes coming into the future as well. So we’ve had a lot of failures, I can tell you. You know, whether it is the longitudinal data of patient, or looking at, you know, ICD-11 norms, you know, tagging our data with respect to the WHO. I mean, we’ve been failing left, right, and center.

We’ve been reasonably successful as far as ICD-10 is concerned, but I think 11, you know, most sort of layers that are available in the market don’t work. The ones which work are very, very expensive. You know, so we’ve started, you know, we’ve been in-housing a lot of this. So we’re toying around with it. I have no doubt that the speed at which we are failing and the amount of failures that we have, shortly we will run out of all excuses and failures, and it will be like Edison, right? You would have found out every way to fail, and I think perhaps the only way to succeed will be in front of us. So, yes, there is a lot of enthusiasm, you know, towards adopting it.

We see this as a future. I think everybody does. There is, we, of course, have to be very, very careful. Because unlike, you know, let’s say something like education, where if you’re imparting perhaps incorrect information, you know, it can be resolved. But this is healthcare. I think patient safety, data privacy, these things are right up there. We have very, very little, you know, standard deviation possible in what we do. And so it requires a large extent of supervision, I would say. And perhaps it will continue to for years to come. Although it makes life easy for most, but, you know, at least from a clinical prescription outcome, it will require a lot more supervision to come as well.

Vikalp Sahni

Sorry. So is it now the priority for hospitals? For example, when the digitization adoption happened, it used to be a priority for CEOs that, okay, you have to make sure that all the billings are online. You have to have all the JCIA. You have to have all the discussions and UHID created and so on and so forth. Is this a priority today, adopting AI at hospitals and as KRAs for your CEOs or operators? Is that what, has AI reached that level today at hospitals?

Abhay Soi

No, I think clearly, clearly it has. And it’s really out of, I think, two sort of drivers that I find at least, and you know, this is at a very, very, at the outset level. I think one is the way the world, the consumer behavior itself is changing. I mean, a lot of the searches used to happen on Google, a lot of the searches now are happening through different platforms altogether. And the way they sort of seek, whether it’s their thought about your website or, you know, when they’re looking at, if you simply ask which is the best cardiologist in Delhi, you know, there’s a different way of people reading into that early and there’s a different way of people.

So you have to, whether it’s your collateral, whether it’s your digital assets and so on and so forth. You have to make those changes. Information, I mean, if I look at ESG, if I look at investor ESG, how do I improve my ESG score? I mean, it’s like an encyclopedia out there, right? You just ask it any question, it tells you to do so. How should I present my annual report? How should I present myself? I think, you know, pretty much, you know, it is intrinsic to now everything that we do from that standpoint. Second is how can I use it as a tool to improve efficiency? And these are low-hanging fruits. I’m talking about the low-hanging fruits before we even, I mean, kind of, you know, absorb the entire ecosystem or create that ecosystem or participate in that ecosystem.

Make it a part of institutional habits, right? I think even prior to that, when we’re looking at it as a task stage, you know, how is it that I can improve? Now, you know, if I have a particular waiting for my patients, okay, how can I do predictive analysis of room availability? When I’m looking at discharge, how can I sort of this thing? When it comes to patient summary, how do I get, how do I unlock the time that my doctor spent on patient sort of history and so on and so forth. That all improves efficiency, all improves outcomes. See, eventually the lens we are looking at it from is efficiency, is accessibility, is safety, is clinical support, and finally the experience. I mean, it’s quite a bit of breadth that you’re looking at AI.

Vikalp Sahni

But in a little bit more generic terms: everybody says that technology is moving very fast, things are changing so fast, AI is also changing so fast, and we also keep doing that, like we want our businesses, our operations, our sales to also run very fast. Is it more your internal feeling, or are health institutions moving as fast as the technology is moving? Even people in the organization, be it doctors, be it nurse staff, all of them are looking at it. And a lot of this is about the India AI Summit as well, because government is looking for educating people on how fast things are changing and we should all be ready for it. So what are your views on your institutional readiness, people in your institute, and in general on this whole AI moving fast?

Abhay Soi

So I think, first and foremost, it also depends on, you know, the institutional culture. We are very clear about one thing: that we have to be more sort of circumspect about it than anything else. We must go up the learning curve as far as AI is concerned. Things are changing very, very fast. We have close to 43,000 healthcare workers who provide healthcare. You know, that means there are thousands, if not hundreds of thousands, of work processes. For us to adopt AI in any task means, you know, you have to change a huge amount of attendant work processes, even if this layer sits on top. And having said that, okay, is there something else which is better out there?

Is there something which will disrupt this further and so on? Should we wait for something to be adopted by and large to see, sorry, to see what the efficacy of that is and see what the, you know, see once it’s sort of established before adopting it. To me, look, having the first-mover advantage in this is not going to do anything. But getting it right is, I think, because we can’t afford to get it wrong. These are human lives, these are people. So I think there’s a huge amount of learning within the organization which is happening, and I meet, you know, phenomenal people across the board, okay, for various aspects. I think since the morning of today, if I look at my sort of schedule, 30% of my meetings would be people, you know, from a technology background, pitching various sort of applications where our lives can be improved and outcomes can be improved and efficiency can be improved and so on and so forth.

But, you know, at the very least, you have to be very, very circumspect about what you’re going to adopt and what you’re going to roll out.

Vikalp Sahni

No, and I think you touched a very important point that we learned at Eka. We earlier did a travel startup, me and Deepak. What we realized is that in health, there is obviously innovation that people are looking for, but trust is the most important thing. I can bring a cool idea, or there could be a cool way of doing a diagnosis at a clinic, but I as a person would trust only the doctor that I have spoken to or who has been talked about to me. So there is this, and that’s the reality that we learned when we started doing health: that yes, innovation is definitely important, but trust is key. And I think Max has been trusted over the years.

And to be very honest, we also don't know how to balance that out: the trust that has been created for institutions and doctors, and now these technologies coming in, which ask questions of the patients and give relevant next suggestions. This trust factor is shifting a little. Any views you have, especially on patients moving their trust from doctors and institutions to AI? And do doctors looking at AI solutions feel that it is not yet good enough?

Abhay Soi

So, you know, I'll give you one example. At most hospitals, at least once every couple of months, you will have a patient who comes in with pain in the chest. You do the ECG, and the ECG seems normal to the doctor. He speaks to the cardiologist, and the cardiologist says, okay, I don't see anything wrong with it. The patient is sent home, and he has a heart attack, right? Because ECGs, although they're extremely common, can be very, very nuanced. An expert cardiologist may be able to catch a particular movement there while somebody else may not. And even the expert cardiologist who catches it on a good day may not catch it on a bad day.

Now, I'm not saying AI is complete in this regard, but when a patient comes to the ER, I think it's absolutely necessary to use that tool, because the tool says "requires admission" whether or not the doctor sees it. Look, by the end of the day you may admit 150 instead of 100 actual patients, but don't let that one go. I think that's the important thing: if you're able to use this as an assistive tool to augment your capabilities. And I think that is what is emerging today. It's a little too far out to say whether it will replace the clinician or not, but right now this is clearly a very, very essential tool that you can use. And let's start with safety before we go to efficiency or anything else.

A very simple example like this starts with leadership and moves to institutional habits. To adopt something like that, you have to change your work processes, because umpteen work processes have to change. Doctor, when a patient comes with this sort of ECG report, where do you move him? You move him to the cath lab, which takes thirteen minutes, but that's also preparatory time; you're doing it within the golden hour. You move him into the ICU. How do you interact with the doctor? You have to call him. Let's say it happens at three o'clock at night: the cardiologist has to come from his home, and so on. So the entire dance starts. But you have to make sure you can use this tool to err on the side of caution. At the very least, that's what you need to do.
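Abhay's "err on the side of caution" logic can be sketched as a simple admission rule: flag the patient whenever a model's risk score crosses a deliberately low threshold, accepting extra admissions rather than missing one event. This is an illustrative sketch only; the score, threshold, and function name are hypothetical, not from any real ECG product.

```python
def triage_decision(risk_score: float, threshold: float = 0.15) -> str:
    """Conservative ER triage: admit whenever the model's risk score
    crosses a deliberately low threshold, even if the clinician's read
    of the ECG is normal. A low threshold trades extra admissions
    (false positives) for fewer missed cardiac events."""
    return "admit" if risk_score >= threshold else "discharge"

# With a 0.15 threshold, even a subtle 20% risk triggers admission.
patients = {"subtle_ecg": 0.20, "clearly_normal": 0.03, "obvious_mi": 0.95}
decisions = {name: triage_decision(score) for name, score in patients.items()}
print(decisions)
# {'subtle_ecg': 'admit', 'clearly_normal': 'discharge', 'obvious_mi': 'admit'}
```

This is why he expects 150 admissions instead of 100: the rule is tuned for sensitivity, not for minimizing bed usage.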

Vikalp Sahni

And I think you touched upon these complex healthcare processes. When we look at it from a technology perspective, this is what AI can solve for: extremely complex processes that today have multiple human touchpoints. Something as simple as making an emergency call to a specific doctor, with all the respective context, can be optimized today, and that can save lives. So that's the sort of thing we keep discussing during our board meetings. But across health and non-health, what's your view on the coming years? Yesterday there was this conversation with Sam about AGI being here by 2028.

What is your view on the next three to five years? We can't even talk in decades anymore, right? It seems we don't know what all will happen in a decade. But how do you see the next three to five years changing your hospitals, or healthcare in general?

Abhay Soi

I think dramatically. Adoption is accelerating, and it's not because hospitals or healthcare providers desire it; I think it's becoming an absolute necessity for the country. One of the major things that propels our country forward is the demographic dividend: the average age is 28 or 29. But make no mistake, 15 years down the line it will be very close to the European average, and that's when people will require medical intervention. There just isn't enough infrastructure and there aren't enough doctors in the country. It's actually not even enough for the population today, and I can certainly tell you that 15 years down the line, there isn't enough infrastructure that can possibly be built.

There isn't enough money here either; we're just a little too far behind the curve, right? And if we have to solve this equation as far as healthcare is concerned, we have no choice: it has to be about predictive health. Before a patient even comes to the hospital, before he falls sick, we must be able to predict that he is going to fall sick and make amends there. Reaching out to people, unclogging the hospital infrastructure, home care and so on. And being able to replicate the capabilities and skill sets of doctors so we can take them to patients.

I think all of that is a necessity. Without it, we will fail the future generation. So there's no question about it. This is here; the future is here today.

Vikalp Sahni

And especially the whole vision of making India a developed country, we have to leapfrog. And many of these technologies can help us in leapfrogging the way you were explaining. But thank you so very much, Abhay, for your deep insights. I think we all love Max and the kind of work that you are doing. And we see more and more AI coming together at Max to solve for doctors, patients, and all of us. Thank you very much.

Abhay Soi

The pleasure is entirely mine. Thank you so much.

Announcer

as Director IT at the National Health Authority, where he leads the technical architecture and implementation of flagship national initiatives including the Ayushman Bharat Digital Mission and Ayushman Bharat PM-JAY. We welcome you, sir. We have with us Ms. Padmini Vishwanath, Researcher at WHO SEARO, the South-East Asia Regional Office, bringing a regional lens to health equity, digital health policy and evidence-based transformation in low- and middle-income countries. We welcome you, Ms. Padmini. And last but not the least, we have Mr. Jigar Halani, Director, Enterprise Solutions Architecture and Engineering at NVIDIA South Asia, a 20-year technology veteran driving innovation in supercomputing, big data and AI infrastructure, and a trusted advisor to government and industry on AI strategy.

I now hand over to Deepak to lead the panel discussion. I think we are short of space, so I'll manage standing here.

Deepak Tuli

Thank you very much. It was a great session; Vikalp and Abhay just left. We are short of time, so I will try to leave 5-10 minutes at the end for everyone's questions. I would like to start this session with Dr. Gupta. Dr. Gupta, we were talking last night. You were instrumental in defining the first white paper around ABDM, how it all started. There has obviously been a lot of progress from when you conceptualized it back in 2019-2020 to today. What do you think has really worked in making it a reality, and what are the challenges? And going forward, how will this whole movement of documentation and interoperability between patients and providers start impacting clinical decision-making for the physician?

Dr. Rajendra Pratap Gupta

Thank you, Deepak, and thank you, Vikalp, for this wonderful session and for giving me the opportunity. So it started actually in 2014. It was in the BJP's manifesto, where I wrote it, then in the National Health Policy in 2016, and eventually when I was advisor to the Health Minister. Firstly, we should compliment the ABDM team. There is no precedent, no precedent, for creating records for a billion people. How do you go about doing it? But every time people like Vikalp and you take a bold step, there are naysayers who will say, "Bijli nahi aati, aap kaise karoge?" ("There's no electricity, how will you do it?"). Today we have 860 million ABHA IDs. So if I look at the reality today, and I know I am sitting to the right of the Director IT.

We have created the digital infrastructure. Now we have to leverage it to empower the people who are going to use it. I see a future where we will not have people juggling multiple schemes. That was our biggest problem internally; I can tell you why this got created, and there are more reasons too. But eventually technology will allow us to use resources optimally, to be clinically precise in treating people, and to remove redundancies.

My boss, who is still the Union Health Minister, and I agreed on something fundamental: it will be tough to send doctors to rural areas. They study for 12 years to make their lives better. Of course we want them there, but it will take time to build the infrastructure where they can stay in rural areas. We believe that digital health solutions will be able to leverage this backbone we have created to serve people in the areas where they need them most. From the golden hour to platinum minutes: what I believe will become the digital health standard is that within a minute you could get to what you need, at least for primary care. So I am very optimistic. And Vikalp was right: we used to talk in decades back in 2013-14; now we talk three years at most, and a few months is better. It's a time to be really optimistic about the vision we were able to build, thanks to the people who implemented it.

You know, we had COVID on our hands, and we saw 2.2 billion vaccine doses administered, with people not calling anyone up, just going to the app and getting it done. So the creators are in the room, the implementers are in the room, so we ideators don't need to worry much. Thank you.

Deepak Tuli

Thank you very much, that's very nice. So moving to your left, Nikhil. Nikhil, you have done a phenomenal job in deploying ABDM in the public sector, but we definitely see a lag in the private sector. What have you been learning? You have deployed ABDM at large-scale public institutions where there is obviously a massive load of patients walking in and very limited physicians and staff to support them, and digitizing appointments has gone a long way. How do you see it going forward, moving into the private sector and getting even deeper into the workflows, which will really help outcomes?

Nikhil Dhongari

Models can be developed by Eka and other health startups, because ABDM created the federated architecture where models can go and be tried. Simply making an algorithm doesn't make a solution in the health sector; as Max just said, safety is very important. What is missing in the foreign models is that they are not tried on Indian data, and we can't neglect the rural population and the small hospitals where most people go. ABDM created HMIS solutions where we have access to the longitudinal records of patients, where our Indian models can be tried and where we can get success. The ground is fertile enough right now for Indian startups to come in and try.

Try your models, especially because with the federated architecture you don't need LLMs; you just need SLMs and some smaller models, so that the model cannot be biased. And I can't look at bias only from the technical angle: you have to keep both clinicians and technologists in the loop, so that context data is available from across India and across the population, where the subject is a billion clinical realities. The AI models should be not only transactional but conversational, because the literacy rate is very low. So now is a fertile ground for Indian startups to come in and show their brand value. That's where I see it.

Thank you.

Deepak Tuli

Thank you very much, Nikhil, insightful. You touched upon cloud infrastructure, and we have Jigar here. So Jigar, cloud infrastructure has made AI scale; everyone is using ChatGPT today. Infrastructure, sovereignty and trust are hot topics; we've been hearing these words countless times over the last five days, and they become super relevant for health, as we heard about trust in the last session. How do you think the models or the companies building in India can bring that trust factor, so that physicians and operators like Abhay would trust the solutions and start implementing them, which will really help people like Nikhil in building those models for the country?

Jigar Halani

So I think it's a deep question. Trust has many aspects, if you ask me honestly. Trust in my language could be the most accurate results, and I'm happy with that, because I'm a fast-moving IT professional; we are known for it. The event gets over today, on Monday we are back to work, and we know we are going to slog again for the next five days to make something better and bigger. Trust for my mom could be a very different storyline, because for her everything on the priority line is health, nothing else. For me, plus or minus 1%, 3%, 5%, even 10% is okay; for her, nothing is. And trust for a mother with a newborn in her hands is going to be completely different again. So I personally feel trust has many layers.

But at the fundamental layer, what model builders are trying to do is still accumulate the knowledge available on the web. What we haven't done is go back to our own data. I wasn't aware India is achieving these numbers, though I know ABHA pretty well; I have an ID myself, although I have not used it yet. I won't enrol myself into every scheme the government comes up with, but I am a registered user, just to understand where all the connectivity is possible. Once we have this data, how do we make better use of it? So that I bring in not just the context of India, which is so important. What a couple of companies are trying to do on the language side we all understand; language is so important to us. But imagine the environmental changes I experience from place to place, the corresponding changes in my body, and which medicine therefore helps me better, and so on; it has its own chain of consequences. How do I bring that data into the ecosystem, and thereby make those models more efficient, in the lingo that India understands, not just in language but also in the lingo of health? For example, I come from Gujarat but I stay in Bangalore, and I know for sure that environment is not suitable for me, right?

And I keep sneezing, for the pollution, of course. The moment I go back to Gujarat, I'm absolutely normal. Whether it is extreme cold, extreme heat or rain, it doesn't matter at all; I never sneeze. Things work for me in Gujarat. I go and come regularly and I don't get sneezing at all. That's just one example, right? So, number one, how do I bring that data into the ecosystem? And number two, how do I train those models more efficiently and serve them back to the users? That's one aspect. The second aspect is that, unlike language, in healthcare we need a very large mass of citizens to participate and give us a rich feedback loop on what they are getting from these models at inference.

Like, for example, in your solution, whose demo I've now seen a number of times at the demo booth: if a patient is talking, and later going through the recordings he or she has just made, for him or her it's the most important thing. For the doctor, it's on to the next patient. But the patient will definitely go back and check the recording. The patient will definitely, as we all do, and for rightful reasons, check for a second opinion with another doctor. But that information stays only with me: what did the second doctor tell me?

I check with you as a doctor, and you say, all right, it's a big operation. I should take a second opinion, so I go to her, take another opinion, and they both say the same thing. Then I still Google it (it's free consulting) and conclude that I need to get operated on, but we'll wait. Four or five days later I come back to the doctor with four questions. So the user also needs to put feedback back into the ecosystem by using these models, and then it gets democratized. I think that's how the trust layer grows.

This is at a very high level. At the policy level, things are going to be very, very different, and I'm sure that's a topic by itself; we'll work on it some other day.

Deepak Tuli

Thank you very much. This Google doctor has been very, very popular in clinics. When we meet doctors, they hate it; I have seen a board many times outside the physician's cabin: "No Google doctors, please." Okay. So, next question, Tanvi. We've been talking about private hospital infrastructure: there is a mass of high-quality infrastructure available in the country with really great physicians. On the other side, we have the public sector: massive pressure, fewer physicians. How should builders think when building solutions for both of these contexts? Should they think of a single solution or two different solutions? How do you see it going forward?

Tanvi Lall

Yeah, so at People+ai, which is an initiative of EkStep, we do a lot of analysis on adoption trends for high-need populations. Basically, for people who are building in healthcare, education, agriculture: what's the uptake? Who's building what? Who's not taking third-party solutions and trying to build internally? And a couple of points have emerged in that thesis. The first is that because AI is meant to be personalized and context-specific, and can deal with multilinguality and voice, there is a lot of opportunity to bridge some of the inequity gaps that exist. So today, as a builder, you can imagine solutions in some very, very regional, low-resource languages for the different beneficiaries.

And you can design them to be voice-first, which in a way induces trust, because now people are speaking to someone rather than just reading an answer from a solution when they don't know who is behind it. So the first aspect is that AI is meant to be personalized. When you're building solutions, and I'm going to go a step further and say it's beyond a solution, it's a transformation, you can create very customized transformations. That's number one. The second thing here is that it's a very fragmented value chain: in the case of healthcare, someone is paying, someone is using the technology, and someone else is ultimately benefiting from it.

What we've realized is that when you're designing these transformations, a big part of a builder's journey is not just making the tech stack but spending time with the people who will be adopting it, educating them at different levels to explain how this tech could get consumed or improve their lives. There are 700-plus healthcare startups in India doing all kinds of pilots and demos right now, and what we've realized is that the demo phase goes really well, like three months, six months, because adopters, who could be hospitals or other institutions, sometimes play from a place of either fear or hype: "I want to be aware of what's going on, so I'll do the demo."

But after three months it's just going to be a side window in my browser that I never go back to, because it was never thought of as a solution I would embed into my workflow. So you have to think of this as a journey from the start, not a one-time switch, where I get that one-time contract or one-time demo and imagine it will convert into some kind of impact. Now, building that trust is very different in a private hospital, which is maybe much more urban and much more aware of what's going on, versus a PHC and the people in the PHC. So the tech stack and the solution are one piece of it.

But you're also designing the transformation, which comes with education, awareness, trust-building activities, creating safety, and maybe feedback and evals that make sense for a PHC versus a hospital, which might be very different. So the transformation stack has to be very different, and transformation is about much more than tech. I think that's where builders should be spending a lot more time: not just cracking that first pilot or first deployment, but asking what it will take to go from pilot to population scale. Because that is a very different journey; it's a systems journey, not always a tech journey.

Deepak Tuli

No, that's super helpful. Continuing the same discussion, Dr. Gupta: when you look at policymaking, do you look at these two segments very differently? Do you think of health as one single sector, or do you start defining how it will work in the public sector and how it will work in the private sector?

Dr. Rajendra Pratap Gupta

So if you look at the National Health Policy, that is the first time I actually wrote the line for both the private and the public sector. In 2002 it was mostly written, and even implied, that it was only meant for the public sector. I think if you really want to deliver care, you have to break that barrier between private and public; that's how you will deliver care. When a patient has a problem, he doesn't check whether the first hospital is private or public; he gets care wherever he reaches first. That was the thinking behind it, and that's what the policy is like.

Deepak Tuli

Oh, that's great, I learned something. I'll move on to Padmini. From your regional vantage point, how should AI systems be designed differently to reflect the diversity of contexts, capabilities and care realities across countries?

Padmini Vishwanath

Yeah, thank you. First of all, thank you so much for having us today; the WHO is very glad to be representing the work that we do. And listening to my co-panelists, it's interesting to hear about the importance of tailoring, and a little, how do I say, anxiety-inducing for me, because the work that we do is on the other end of the spectrum, which is: how do we create norms? How do we create norms and normative guidance to ensure that AI is equitable and moving in the right direction? So I'll talk from the regional perspective. We work with eight countries across SEARO, and all of these countries and systems are at very, very varying levels of digital maturity, right?

But what we often find is that AI tools are developed for the most advanced, most connected tertiary institutions, and then adapted later for more remote settings, right? We are finding that pilots in some of the countries are looking at reversing this logic: start by developing readiness frameworks for the most remote settings, understanding frontline capabilities, device availability, and all the factors that matter in AI readiness, build a framework for that level of remoteness, and then scale it. And in contexts where we do that, we see higher provider trust and more equity. So from our experience, we feel that maybe we need to slow down a little bit and look at how we can modernize existing legacy systems rather than just adding new systems on top.

Yeah, I’ll stop there for now.

Deepak Tuli

Please, anyone. Maybe starting from here.

Jigar Halani

I'll go first. I think voice. It's a common factor; it's horizontal, not vertical, and it's very, very important for the country. If I can understand what a Tamil doctor is saying to a patient, convert it into Hindi, and have that deployed in Delhi and Gujarat, I'm home, essentially. I'm solving many problems that have been prolonging in the country for years together. That by itself is a reward to the country, and we should be fully liberating it. One thing I'm very happy about is the mindset change; that's going to be the biggest thing. It's not a technology problem, it's a mindset problem. And that's what I've seen: every single person I've been talking to has started believing that the time has arrived.

Nikhil Dhongari

I will say two things. First, the thought process. I am very happy that a lot of discussion is going on about AI. For any technology, to bring the public along, the thought process is very important, and this summit created that impact: now everyone discusses AI, from a rickshaw puller to a CEO, and that discussion is very important for building systems. Second, I visited a few of the startups here, and I'm very happy to see some doing really great work, like Eka Scribe, which small clinics can use to reduce the burden of non-clinical work on clinicians.

And there is one company doing very good work on data anonymization, because many people have models to train, and after the advent of the DPDP Act, data privacy and patient consent are very important. They are working really well in India, so I'm very happy such companies are there and doing wonderful work.
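As a rough illustration of the kind of de-identification Nikhil alludes to (the field names and masking rules below are hypothetical, not taken from any DPDP-compliant product), a record can be stripped of direct identifiers before it is used for model training, while keeping records linkable for longitudinal analysis:

```python
import hashlib

# Direct identifiers that should never reach a training set.
DIRECT_IDENTIFIERS = {"name", "phone", "address", "abha_id"}

def anonymize(record: dict, salt: str = "per-dataset-secret") -> dict:
    """Drop direct identifiers and replace the patient identity with a
    salted one-way hash, so the same patient's records stay linkable
    without exposing who the patient is."""
    clean = {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}
    pseudo = hashlib.sha256((salt + record["abha_id"]).encode()).hexdigest()[:12]
    clean["patient_key"] = pseudo
    return clean

record = {"abha_id": "12-3456-7890-0001", "name": "A. Patient",
          "phone": "+91-98xxxxxxxx", "diagnosis": "type 2 diabetes"}
print(anonymize(record))
# keeps 'diagnosis', drops name/phone/abha_id, adds an opaque 'patient_key'
```

Real de-identification under the DPDP Act involves far more (quasi-identifiers, consent artefacts, re-identification risk), but the salted-hash pattern above is the basic building block.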

Tanvi Lall

I think for me it would be the emphasis on AI-ready data systems, because across sectors everyone is realizing that AI is only as good as the data the model and application layers have access to. And I really want to give you guys credit for that, because you are pioneers in putting out data and making it available; that MCP server that came out, in fact, we cite as an example. We are working very closely with MoSPI right now: they want to make their statistical data sets available to the world, and they put out their first MCP about a week ago. Just the fact that institutions are not being extractive with data but want to give it back, so that others can build on top of it, is very important. In health it's crucial that this happens, because otherwise there is no personalization happening.

Padmini Vishwanath

So I would say, so far we have looked at a lot of quantitative measures of adopting AI in health: diagnostic accuracy, number of patient visits, et cetera. But this time around we are seeing more discussion of the qualitative dimensions: empathy, dignity, care. And it's interesting, because in one of the pilots we are conducting on palliative care, we hadn't even thought about it, but for a caregiver and a palliative care patient visiting a nurse, that's their only source of human connection during that week. So how does AI change that dynamic of caregiving, in those little moments they spend together in the clinic?

So I think the increased conversation around this, and the acknowledgement of not just the quantitative but also the qualitative dimensions, is something I'm personally really looking forward to.

Deepak Tuli

Thank you. Just a disclaimer: the objective of this question was not to get a promotion for Eka. No, but thank you, this was super insightful. Audience, any questions?

Audience member 1

Sir, I just had a question. You said voice and language solutions are not new, and mostly 90% of them are on the cloud. So do they need to be on the edge only, or on the cloud, or hybrid?

Jigar Halani

No, no, of course. I think... Do you use ChatGPT? Yes. Not one of its servers is hosted in India. No, I'm just saying the cost factor is also there, and data privacy is also there, so... The moment you add cost, as long as it is in India, I think we are home. I don't think it could ever be cheaper.

Audience member 1

So I was just asking for a suggestion from you: for someone creating such a solution for voice and translation, multilingual, let's say targeting 22 languages, where should the MCP or the inference server be hosted? On the edge, on a gadget like a mobile phone or an audio recorder, or hybrid?

Jigar Halani

I would say it depends on the use case. If you have a very particular use case, a very tiny one in a remote place, edge would be the solution; you don't have a choice, because you will be lacking connectivity and a few other things as well.

Audience member 1

It will synchronize once a month or once a week or once a day?

Jigar Halani

No, voice is something you need to have connectivity in play.

Audience member 1

Okay.

Jigar Halani

You can't be having things fully offline; that's my view at least. People are trying, and I think someone had something on-device. But for 90% of cases we should go for...

Audience member 1

Connecting with the cloud or the server?

Jigar Halani

Yes.

Audience member 1

Even if it’s a local India hosted server?

Jigar Halani

That’s correct.
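Jigar's rule of thumb in this exchange (on-device only when connectivity forces it, otherwise an India-hosted cloud endpoint) can be sketched as a simple routing decision. The endpoint URLs, threshold, and function names here are hypothetical illustrations, not a real deployment recipe:

```python
from dataclasses import dataclass

@dataclass
class Deployment:
    target: str    # "edge" or "cloud"
    endpoint: str  # where inference requests are sent

def choose_deployment(has_connectivity: bool, latency_ms: float) -> Deployment:
    """Route voice inference: fall back to an on-device SLM only when
    there is no usable link; otherwise prefer an India-hosted cloud
    endpoint (hypothetical URL) for data-residency and cost reasons."""
    if not has_connectivity or latency_ms > 1000:
        return Deployment("edge", "local://on-device-slm")
    return Deployment("cloud", "https://inference.example.in/v1/transcribe")

print(choose_deployment(has_connectivity=False, latency_ms=0.0).target)   # edge
print(choose_deployment(has_connectivity=True, latency_ms=120.0).target)  # cloud
```

This matches the panel's conclusion: voice generally needs connectivity, so the edge path is the exception for remote sites, with the cloud side kept on locally hosted servers.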

Audience member 2

Hello everyone. We have seen a lot of stalls in the expo showing AI-powered documentation and diagnosis. I am a dentist, currently pursuing an MBA in analytics. So I am curious: how far are these Indian AI tools relying on Indian data rather than global data sets?

Dr. Rajendra Pratap Gupta

It depends on what they are claiming; that's first. On the other side, I also represent the Mayo Clinic's strategy in India. As Mayo Clinic Platform, we are opening up partnerships around some of our data sets, and also collaborating with hospitals here to leverage each other's anonymized data sets. The important point to note is that the culture of data is missing. We still have to build that culture of data to have AI systems based on the Indian population, and I think that is still far away. With the ABDM team sitting next to me: we have 860 million ABHA IDs, but the number of records on ABDM, if you check, is not what we want it to be.

So I think we're still not there. If someone makes a claim, be careful. Thank you.

Deepak Tuli

That's great. We have talked about what we really like, and what we have seen change fundamentally. But do you also think there are still a few areas where we are lagging behind as a country, as a health system, where we should have been further along already? Or do you think we are on the right path? And if we are, what do you think the great outcomes will be in the next year?

Dr. Rajendra Pratap Gupta

My answer is very frank, even at the cost of sounding blunt. See, the issue is not about the usefulness of the technology or the use case; it is about ethics and doing the right things. Most people are not using it, and not because of the UX, the UI, the technology, or the outcomes; everyone knows all that. How many doctors would actually want to disclose what they charge for a prescription, how many prescriptions they write, and why they write three antibiotics for one case? So I think it is about regulating that unethical part; the day we are able to crack it, you will see mass adoption. The challenge lies in medical practices and medical ethics, not in the solutions per se. Otherwise, we would be the most adopted nation in terms of digital technologies.

Deepak Tuli

That's great. Last night we were having this conversation about China, and I was surprised to hear that there, in real time, when a physician is writing a prescription, the data goes back; if there are errors, feedback comes back, and doctors get flagged if they keep making them. That is one way of controlling what you just described. And think of us: we in the metros are literate, but think of people in tier-two and tier-three cities getting three antibiotics at the same time. I have seen a chemist in Bombay just popping out pills. So I think it is an issue of practice: good medical practices, good pharmacy practices, good prescription practices to follow. I mean, you could have given a cold-and-cough syrup; that would have made him money too.

Nikhil Dhongari

I just want to add to that point. She asked one question, how many models are trained on Indian data, and you asked where we are lagging behind. I want to say that behavioural change is very important, because we do have solutions. With CDAC we gave e-Sushrut, which is almost free for small hospitals, and all the government hospitals, including AIIMS, have the HMIS solution with which they can create longitudinal records. But some of the doctors are not ready to use it, because they say they are accustomed to writing on paper only. So they still do that, and we accept it.

That is where we are losing the context data from the major public hospitals, and where we need a tough stance. I am now working in National Tadati, but before that I was in Railways. The Railways have totally stopped physical prescriptions; they took a decision that there would be no more physical prescriptions. They now issue only online prescriptions, with everything, even lab records, integrated. They took that one decision and it held. So we need some tough decisions, and we also need behavioural change, so that we start creating longitudinal records. Only then can we give context data to Indian startups on which our models can be deployed and trained, and then we can close this gap.

Deepak Tuli

Thank you very much; you have been a great panel, and thank you for all your insights. I am sorry that, in the interest of time, we will have to wrap up. Before we close this session, a sincere thank-you to all our panelists. I request Deepak to present a memento on our behalf. Thank you.

Related Resources: Knowledge base sources related to the discussion topics (17)
Factual Notes: Claims verified against the Diplo knowledge base (3)
Confirmed (high)

“Max Healthcare enjoys a 10‑15 % higher occupancy than its nearest competitor, attributed to patient trust.”

The knowledge base states that patient trust is the foundation of healthcare success and that occupancy rates are 10-15 % better due to trust [S1].

Confirmed (medium)

“The organization embraces failure as a learning mechanism, likening it to Edison’s iterative process.”

The source emphasizes embracing failure as a learning opportunity, matching the reported cultural stance [S108].

Confirmed (medium)

“India’s demographic dividend makes AI adoption in healthcare an absolute necessity.”

The knowledge base describes AI adoption as becoming an absolute necessity for the country, driven by the demographic dividend [S2] and [S24].

External Sources (108)
S1
AI for Bharat’s Health_ Addressing a Billion Clinical Realities — – Dr. Rajendra Pratap Gupta- Nikhil Dhongari
S2
AI for Bharat’s Health_ Addressing a Billion Clinical Realities — Speakers: Abhay Soi, Dr. Rajendra Pratap Gupta, Nikhil Dhongari, Tanvi Lall; Speakers: Dr. Rajendra Pratap Gupta, Nikhil D…
S3
AI Transformation in Practice_ Insights from India’s Consulting Leaders — -Audience member 1- Founder of Corral Inc -Audience member 6- Role/title not mentioned
S4
Day 0 Event #82 Inclusive multistakeholderism: tackling Internet shutdowns — – Nikki Muscati: Audience member who asked questions (role/affiliation not specified)
S5
Global Perspectives on Openness and Trust in AI — – Karen Hao- Audience member 1- Audience member 5
S6
Global Perspectives on Openness and Trust in AI — -Audience member 2- Part of a group from Germany
S7
Day 0 Event #82 Inclusive multistakeholderism: tackling Internet shutdowns — – Nikki Muscati: Audience member who asked questions (role/affiliation not specified)
S8
The Arc of Progress in the 21st Century / DAVOS 2025 — – Paula Escobar Chavez: Audience member asking a question (specific role/title not mentioned)
S10
Transforming Health Systems with AI From Lab to Last Mile — – Vikalp Sahni- Richard Rukwata
S11
AI for Bharat’s Health_ Addressing a Billion Clinical Realities — – Abhay Soi- Padmini Vishwanath – Vikalp Sahni- Abhay Soi
S12
AI for Bharat’s Health_ Addressing a Billion Clinical Realities — – Tanvi Lall- Padmini Vishwanath Tanvi Lall argues for different transformation approaches for different settings (priv…
S13
AI for Bharat’s Health_ Addressing a Billion Clinical Realities — Deepak Tuli acknowledges Dr. Gupta’s instrumental role in defining the first white paper around ABDM, noting the journey…
S14
AI for Bharat’s Health_ Addressing a Billion Clinical Realities — -Deepak Tuli- Panel discussion moderator
S16
From KW to GW Scaling the Infrastructure of the Global AI Economy — – Ankush Sabharwal- Jigar Halani- Nitin Gupta – Peter Panfil- Jigar Halani- Sanjay Kumar Sainani
S17
AI for Bharat’s Health_ Addressing a Billion Clinical Realities — -Dr. Rajendra Pratap Gupta- Advisor to Health Minister, instrumental in defining ABDM white paper, involved in National …
S18
DC-DH: Health Digital Health & Selfcare – Can we replace Doctors in PHCs — – Rajendra Pratap Gupta: Chairman of the board for HIMSS India, moderator of the discussion Rajendra Pratap Gupta: Fan…
S19
Conversational AI in low income & resource settings | IGF 2023 — Rajendra Pratap Gupta: Thanks, Ashish. And I think the point that you raise is very important. The Dynamic Coalition for …
S20
Building Trusted AI at Scale Cities Startups &amp; Digital Sovereignty – Keynote Giordano Albertazzi — -Announcer: Role/Title: Event announcer/moderator; Area of expertise: Not mentioned
S21
Building Trusted AI at Scale Cities Startups &amp; Digital Sovereignty – Keynote Takahito Tokita Fujitsu — -Announcer: Role as event announcer/host, expertise/title not mentioned
S22
Building Trusted AI at Scale Cities Startups &amp; Digital Sovereignty – Keynote Cristiano Amon — -Announcer: Role/Title: Event announcer/moderator; Areas of expertise: Not mentioned
S23
AI for Bharat’s Health_ Addressing a Billion Clinical Realities — – Abhay Soi- Dr. Rajendra Pratap Gupta- Padmini Vishwanath – Abhay Soi- Jigar Halani- Padmini Vishwanath
S24
https://app.faicon.ai/ai-impact-summit-2026/ai-for-bharats-health_-addressing-a-billion-clinical-realities — But I think today it’s affecting our tasks. It’s affecting tasks of efficiency. You know, we’ve already started doing pr…
S25
Responsible AI for Children Safe Playful and Empowering Learning — “safety, privacy, these are absolutely foundational and non‑negotiable as we’ve seen on the LEGO education side and simi…
S26
WS #162 Overregulation: Balance Policy and Innovation in Technology — Natalie Tercova: Of course, I’ll try to be very brief. So I very much agree that it very depends on the specific case…
S27
Keynote by Sangita Reddy Joint Managing Director Apollo Hospitals India AI Impact Summit — The throughput optimization platform addresses operational efficiency using ambient systems for automatic data capture, en…
S28
Keynote by Sangita Reddy Joint Managing Director Apollo Hospitals India AI Impact Summit — Dr. Reddy emphasized extending AI healthcare solutions beyond urban hospitals to rural communities through mobile van pr…
S29
Future Network System as Open Platform in Beyond 5G/6G Era | IGF 2023 Day 0 Event #201 — Abhimanyu Gosain: I get the easy question here. So it’s artificial intelligence and machine learning, right? So that’s so…
S30
How the Global South Is Accelerating AI Adoption_ Finance Sector Insights — See, under the remit of the mandate given to the Reserve Bank of India, under the Reserve Bank of India Act or the Banki…
S31
Building the Next Wave of AI_ Responsible Frameworks & Standards — This comment addresses a fundamental tension in AI deployment – the mismatch between probabilistic AI behavior and deter…
S32
Agentic AI in Focus Opportunities Risks and Governance — I was just going to say ditto to everything that Danielle said because that’s basically what I was going to say, and she…
S33
Transforming Health Systems with AI From Lab to Last Mile — Vikalp Sahni identified key technical challenges including building systems that work across multiple languages and gene…
S34
https://dig.watch/event/india-ai-impact-summit-2026/building-trusted-ai-at-scale-cities-startups-digital-sovereignty-panel-discussion — And then making rules of the road that are applicable to that specific context. I think that’s really crucial. The other…
S35
WSIS Action Line C7: E-health – Fostering foundations for digital health transformation in the age of AI — ## Background and Context ### Capacity Building and International Support ### Technical Architecture and Building Bloc…
S36
Equi-Tech-ity: Close the gap with digital health literacy | IGF 2023 — Geralyn Miller: Yeah, thank you very much for the question. So I want to respond to in this context to some of the commen…
S37
Rethinking Africa’s digital trade: Entrepreneurship, innovation, & value creation in the age of Generative AI (depHub) — On a positive note, it is argued that ethical considerations and human rights protections should be prioritized in the a…
S38
From principles to practice: Governing advanced AI in action — – **Implementation Challenges Across Jurisdictions**: Participants highlighted the tension between rapid technological a…
S39
What is it about AI that we need to regulate? — What is it about AI that we need to regulate?The discussions across the Internet Governance Forum 2025 sessions revealed…
S40
Advancing Scientific AI with Safety Ethics and Responsibility — The panel ultimately argued for a “web of prevention” approach where multiple complementary measures work together rathe…
S41
Setting the Rules_ Global AI Standards for Growth and Governance — Hello. Jocelyn, Google DeepMind, where I also work on issues of AI standards, governance, and policy. building on what’s…
S42
Panel Discussion AI in Healthcare India AI Impact Summit — Chris Ciauri provided concrete examples of AI applications already showing results. Banner Health’s use of Claude to sum…
S43
Practical Toolkits for AI Risk Mitigation for Businesses — Nusrat Khan: Thanks, Sarayu. Good morning, everyone, and thank you for being with us. My name is Nusrat Khan, and I work …
S44
Ethical principles for the use of AI in cybersecurity | IGF 2023 WS #33 — Amal El Fallah Seghrouchini: Hello, everybody. I am very happy to talk about AI in cybersecurity. And I think that there …
S45
Scaling AI for Billions_ Building Digital Public Infrastructure — Discussion point: Future Outlook and National Implications
S46
Shaping the Future AI Strategies for Jobs and Economic Development — Discussion point: Infrastructure Challenges and Energy Requirements
S47
WS #288 An AI Policy Research Roadmap for Evidence-Based AI Policy — Tatjana Titareva: Thank you, Alex. I would like to say indeed that part of the roadmap is the need for capacity building…
S48
Building Inclusive Societies with AI — Kumar advocates for strong partnerships between public and private sectors to drive national development. He emphasizes …
S49
Ateliers : rapports restitution et séance de clôture [Workshops: restitution reports and closing session] — Aurélien Macé: Apparently I get 6.6 minutes, twice as much as the others, or so I have been told. The theme of selling…
S50
AI as critical infrastructure for continuity in public services — Human factors & adoption barriers: Human factors such as fear of replacement and communication style are major barriers …
S51
AI for Bharat’s Health_ Addressing a Billion Clinical Realities — Soi emphasizes that healthcare AI implementation must be extremely cautious because unlike other sectors like education …
S52
Safe and Responsible AI at Scale Practical Pathways — The panel revealed that making data AI-ready is fundamentally a governance challenge rather than merely technical. The a…
S53
AI for Bharat’s Health_ Addressing a Billion Clinical Realities — Voice technology and multilingual capabilities were highlighted as crucial horizontal solutions for healthcare AI in Ind…
S54
Panel Discussion AI in Healthcare India AI Impact Summit — “One of the big barriers is multilingual.”[1]. “Maybe use cases, and I briefly hit on this before, but I think certainly…
S55
Panel Discussion AI in Healthcare India AI Impact Summit — Maybe I’ll do the risk first, and then I’ll talk about a few use cases. And by the way, thank you for the comments that …
S56
WS #162 Overregulation: Balance Policy and Innovation in Technology — Tercova emphasizes that patient privacy, data protection, and minimizing bias in algorithms are non-negotiable aspects o…
S57
How Trust and Safety Drive Innovation and Sustainable Growth — Summary: All speakers agreed that trust is the foundational requirement for AI adoption. Without trust, people simply won…
S58
How the EU’s GPAI Code Shapes Safe and Trustworthy AI Governance India AI Impact Summit 2026 — Aparna emphasizes that basic privacy and security requirements are non-negotiable foundations for any AI system. She arg…
S59
The mismatch between public fear of AI and its measured impact — In medicine and science, AI has shown promise in pattern recognition and data analysis. Deployment is cautious, as clinic…
S60
Adoption of agentic AI slowed by data readiness and governance gaps — Agentic AI is emerging as a new stage ofenterprise automation, enabling systems to reason, plan, and act across workflow…
S61
How AI Drives Innovation and Economic Growth — Rodrigues emphasizes that while early AI discussions were dominated by fear about job displacement and technological thr…
S62
From Human Potential to Global Impact_ Qualcomm’s AI for All Workshop — Arguments: Local processing is preferred for enterprise security and compliance requirements Breaking down problems into …
S63
Trusted Connections_ Ethical AI in Telecom & 6G Networks — Arguments: Edge vs cloud decision-making should prioritize data privacy, user privacy, and responsiveness for edge proces…
S64
Tariffs and AI top the agenda for US CEOs over the next three years — US CEOs prioritise cost reduction and AI integration amid global economic uncertainty. According toKPMG’s 2025 CEO Outlo…
S65
Building a Digital Society, from Vision to Implementation — Stacey Hines, joining from Vancouver at 4 AM Kingston time, cited research from Web Summit where AI expert Gary Marcus p…
S66
AI as critical infrastructure for continuity in public services — Lidia observes that regardless of whether discussions focus on infrastructure, standards, or other technical aspects, hu…
S67
Thinking through Augmentation — AI is prevalent and beneficial, with 11,000 people using it daily at Cineph and achieving incredible results. However, c…
S68
Day 0 Event #171 Legalization of data governance — He Bo: Thank you. Good afternoon, everyone. I’m He Bo from China Academy. Academy of Information and Communication T…
S69
AI for Bharat’s Health_ Addressing a Billion Clinical Realities — Soi explains that while the long-term goal is institutional adoption where AI becomes intrinsic to the organization, cur…
S70
WSIS Action Line C7: E-health – Fostering foundations for digital health transformation in the age of AI — In May 2025, the 78th session of the World Health Assembly (WHA) endorsed the extension of the Global Strategy on Digita…
S71
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Panel Discussion Moderator Sidharth Madaan — I think healthcare is slightly different from a lot of other industries. I think it is highly regulated, number one. So …
S72
From principles to practice: Governing advanced AI in action — – **Implementation Challenges Across Jurisdictions**: Participants highlighted the tension between rapid technological a…
S73
AI for Bharat’s Health_ Addressing a Billion Clinical Realities — Max Healthcare’s AI initiatives focused on practical applications including predictive bed analysis, safety measures, an…
S74
Cybersecurity regulation in the age of AI | IGF 2023 Open Forum #81 — Hiroshi Honjo: Yes. So pretty much close to what Dr. Balushi said. So as a private company, we kind of state the AI gover…
S75
Advancing Scientific AI with Safety Ethics and Responsibility — The panel ultimately argued for a “web of prevention” approach where multiple complementary measures work together rathe…
S76
Panel Discussion AI in Healthcare India AI Impact Summit — Chris Ciauri provided concrete examples of AI applications already showing results. Banner Health’s use of Claude to sum…
S77
Scaling AI Beyond Pilots: A World Economic Forum Panel Discussion — Roy Jakobs argues that AI provides clinicians with fast and accurate data to support daily work, improving diagnostic ca…
S78
Practical Toolkits for AI Risk Mitigation for Businesses — Nusrat Khan: Thanks, Sarayu. Good morning, everyone, and thank you for being with us. My name is Nusrat Khan, and I work …
S79
Shaping AI to ensure Respect for Human Rights and Democracy | IGF 2023 Day 0 Event #51 — Björn Berge: Thank you very much, Ambassador Schneider, and a very good afternoon to all of you. It’s really great to be …
S80
Scaling AI for Billions_ Building Digital Public Infrastructure — Discussion point: Future Outlook and National Implications
S81
Shaping the Future AI Strategies for Jobs and Economic Development — Discussion point: Infrastructure Challenges and Energy Requirements
S82
AI Infrastructure and Future Development: A Panel Discussion — Order-of-magnitude efficiency improvements are inevitable within 5 years, but will accelerate rather than replace the ne…
S83
How Trust and Safety Drive Innovation and Sustainable Growth — Summary: All speakers agreed that trust is the foundational requirement for AI adoption. Without trust, people simply won…
S84
Informal Stakeholder Consultation Session — Emerging technologies like AI, 5G, and quantum computing are reshaping our world, yet they remain concentrated in few ha…
S85
Conversational AI in low income & resource settings | IGF 2023 — Dino Cataldo Dell’Accio: Thank you very much for that question and also for that call to action. So I think the previous …
S86
Opening of the session — The tone began very positively and constructively, with the Chair commending delegations for focused, specific intervent…
S87
https://app.faicon.ai/ai-impact-summit-2026/ai-collaboration-across-borders_-indiaisrael-innovation-roundtable — Thank you. Firstly, it’s been one of a kind of an experience to be part of this AI impactor. In fact, I’ve been around t…
S88
S89
https://dig.watch/event/india-ai-impact-summit-2026/panel-discussion-ai-in-healthcare-india-ai-impact-summit — I think that shouldn’t be so, right? And coming back, that is where I think it would be great to introduce Dr. Aditya Ya…
S90
DPI+H – health for all through digital public infrastructure — Garrett Mehl:Great, I just wanna thank PATH for helping to organize this session and for also inviting WHO to this impor…
S91
Business Engagement Session: From Infrastructure to Innovation – Norway’s Digital Journey — David Norheim: Thank you, Harald. Good afternoon, everyone. My name is David Nordheim, and like Harald said, I’m the dir…
S92
Cracking the Code of Digital Health / DAVOS 2025 — Gianrico Farrugia: Well, thank you for moderating this panel. And I do want to thank WEF for doing it too, because my…
S93
REDUCED MORTALITY — – In many cases, the type of healthcare a person receives is still too often a matter of chance. If more digitally ass…
S94
Creating digital public infrastructure that empowers people | IGF 2023 Open Forum #168 — Pramod Varma: I would now request Pramod Verma to respond to the question. Yeah, I think many of these best practices and…
S95
Capacity Building in Digital Health — Can I make one health care worker serve more than 5 times, 10 times more using the technologies management of the… sys…
S96
Google Cloud urges regulatory intervention in response to Microsoft’s Cloud practices — In a recent development, Google Cloud has intensified its criticism of Microsoft’s cloud computing practices, expressing…
S97
Assessing the Promise and Efficacy of Digital Health Tool | IGF 2023 WS #83 — Deborah Rogers: I think one of the most interesting examples of how mobile network operators have really had a big impact…
S98
Large Language Models on the Web: Anticipating the challenge | IGF 2023 WS #217 — In the discussion, several concerns were raised regarding web data, large language models, chat-based search engines, an…
S99
Digital Inclusion Through a Multilingual Internet | IGF 2023 WS #297 — Dawit Bekele: Thanks, Susan. Unfortunately, many barriers contribute to the challenges people face in using the internet …
S100
Policy Network on Meaningful Access: Meaningful access to include and connect | IGF 2023 — Keisuke Kamimura: Hi, my name is Keisuke Kamimura. Thank you very much for inviting me on this panel. I am Professor of L…
S101
WS #145 Revitalizing Trust: Harnessing AI for Responsible Governance — Pellerin Matis: I think government can really learn from the private sector because there is lots of technologies and …
S102
Table des matières [Table of contents] — – Support healthcare professionals in the task of care by making it possible to better diagnose, prevent and predict…
S103
Rethinking learning: Hope, solutions, and wisdom with AI in the classroom — The same process is happening now with AI, but we’re in the early, messy phase where more questions than answers exist. …
S104
The Future of Innovation and Entrepreneurship in the AI Era: A World Economic Forum Panel Discussion — Bär highlighted the creation of a Sprint company with sandbox laws allowing faster experimentation and seed funding, dem…
S105
Building Sovereign and Responsible AI Beyond Proof of Concepts — And I think that’s the key thing. But the important thing is that if the trust is lost in terms of the sovereignty, the …
S106
Europe’s rush to innovate — Countries that have successfully combined fundamental and applied sciences earlier have been more successful in innovati…
S107
Europe’s rush to innovate | World Economic Forum 2024 — In a session titled ‘Europe’s Pursuit of Innovation,’ a diverse panel of speakers from around the world convened to illu…
S108
IN CONVERSATION WITH BIRAME SOCK — – Embrace failure as a learning opportunity 4. Start small, iterate, and don’t fear failure – view it as a learning o…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
A
Abhay Soi
8 arguments · 149 words per minute · 2428 words · 975 seconds
Argument 1
Data lake built over 15 years for patient records (Abhay Soi)
EXPLANATION
Abhay explains that Max Healthcare created a unified data lake that aggregates patient information spanning the last fifteen years, enabling real‑time access to historical health data. This foundational digital asset supports AI‑driven analytics and improves clinical workflows.
EVIDENCE
He states that they “started by creating a common-size data lake for all the patients which have been through our doors over the last 15 years, and which are doing so on a real-time basis today” [14].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The need for a culture of data sharing and longitudinal patient records, despite existing infrastructure, is highlighted in [S1], providing context for the 15-year data lake effort.
MAJOR DISCUSSION POINT
Creation of a long‑term patient data lake
AGREED WITH
Nikhil Dhongari, Tanvi Lall, Jigar Halani, Padmini Vishwanath
Argument 2
Safety, supervision, and data‑privacy as non‑negotiables (Abhay Soi)
EXPLANATION
Abhay emphasizes that in healthcare AI, patient safety, rigorous supervision, and strict data‑privacy must be upheld, as errors can have life‑threatening consequences. He warns that unlike education, mistakes in health cannot be easily corrected.
EVIDENCE
He notes that “we have very, very little, you know, standard deviation possible in what we do. And so it requires a large extent of supervision” and that “patient safety, data privacy, these things are right up there” [52-57].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Safety and privacy are described as foundational and non-negotiable in healthcare AI in [S2], and reinforced by the emphasis on safety and privacy in AI for children [S25] and ethical considerations in ICTs [S26].
MAJOR DISCUSSION POINT
Non‑negotiable safety and privacy standards
AGREED WITH
Vikalp Sahni, Jigar Halani, Tanvi Lall, Dr. Rajendra Pratap Gupta
DISAGREED WITH
Vikalp Sahni
Argument 3
Institutional culture demands circumspect AI rollout, not just first‑mover advantage (Abhay Soi)
EXPLANATION
Abhay argues that hospitals should adopt AI cautiously, focusing on getting it right rather than rushing to be the first mover. He stresses learning from failures and ensuring robust supervision before wide deployment.
EVIDENCE
He says, “Institutional culture … we have to be very, very circumspect about what you’re going to adopt and what you’re going to roll out” after describing many failures and the need to avoid excuses [90-98].
MAJOR DISCUSSION POINT
Cautious, quality‑first AI adoption
AGREED WITH
Vikalp Sahni, Jigar Halani, Tanvi Lall
DISAGREED WITH
Tanvi Lall
Argument 4
Demographic dividend makes AI a necessity for future healthcare capacity (Abhay Soi)
EXPLANATION
Abhay points out that India’s young population will age over the next decade, creating a massive demand for medical services that existing infrastructure cannot meet. AI is presented as essential to extend capacity through predictive health and remote care.
EVIDENCE
He describes the demographic shift, noting “the average age is 29… 15 years down the line it will be very, very close to the European age” and that “there just isn’t enough infrastructure” and that AI-enabled predictive health is required [128-144].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The demographic dividend as a major driver for AI adoption in health is discussed in [S2].
MAJOR DISCUSSION POINT
AI as a solution to future healthcare demand
Argument 5
Predictive bed‑occupancy and AI tool deployment improving operational efficiency (Abhay Soi)
EXPLANATION
Abhay mentions that Max Healthcare uses AI to forecast vacant beds and improve safety measures, thereby streamlining patient flow and enhancing operational efficiency.
EVIDENCE
He states, “We’ve already started doing predictive analysis of beds which are vacant and available” and that it is “working on safety measures” [22-23].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Predictive analysis of vacant beds and related safety measures are directly mentioned in [S24].
MAJOR DISCUSSION POINT
AI‑driven bed occupancy prediction
AGREED WITH
Tanvi Lall, Jigar Halani
Argument 6
AI as an assistive tool for ECG interpretation to improve patient safety
EXPLANATION
Abhay presented a concrete example where AI analysis of ECGs can flag high‑risk patients, prompting admission and preventing missed heart attacks, positioning AI as a safety‑first assistive technology.
EVIDENCE
He described a scenario where a normal-looking ECG missed a heart attack and suggested using AI to err on the side of caution, potentially admitting more patients to avoid missed diagnoses [113-118][119-124].
MAJOR DISCUSSION POINT
AI‑driven safety in acute cardiac care
Argument 7
Automation of clinical data capture reduces clinician time and improves efficiency
EXPLANATION
Abhay explained that earlier clinicians spent considerable time collecting data manually, but now app‑based forms automatically gather information, allowing clinicians to focus on higher‑value clinical tasks.
EVIDENCE
He noted that data collection is now done through forms in their apps, which collate information and reduce the time clinicians spend gathering patient history [24-27].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Throughput-optimization platforms that automate data capture and free up physician time are described in [S27].
MAJOR DISCUSSION POINT
Digital tools streamlining clinical documentation
Argument 8
AI as low‑hanging fruit for efficiency before full ecosystem integration
EXPLANATION
Abhay emphasized that hospitals can achieve immediate gains by applying AI to simple, task‑level problems such as predictive bed occupancy and workflow automation, before attempting broader institutional integration.
EVIDENCE
He described using AI for predictive bed analysis and safety measures as early wins, labeling them low-hanging fruits prior to full ecosystem adoption [21-23][80-84].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The concept of low-hanging fruit AI use cases preceding broader ecosystem integration is highlighted in [S29].
MAJOR DISCUSSION POINT
Early‑stage AI use cases for operational gains
V
Vikalp Sahni
6 arguments · 138 words per minute · 877 words · 378 seconds
Argument 1
AI adoption wave and regulatory challenges (Vikalp Sahni)
EXPLANATION
Vikalp asks whether an AI adoption wave is occurring in hospitals and how emerging regulations such as NABH and ABDM affect this uptake. He probes the speed of adoption and potential regulatory bottlenecks.
EVIDENCE
He questions, “…is there like a AI adoption wave that is happening? And do you think, like, when the digital adoption happened, things such as NABH, ABDM, many of these things started coming up, talking about policy, talking about regulation” [30-33].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The rapid, necessity-driven AI adoption wave and the need to balance regulation are discussed in [S2] and [S31].
MAJOR DISCUSSION POINT
AI adoption amid regulatory frameworks
AGREED WITH
Deepak Tuli, Dr. Rajendra Pratap Gupta, Padmini Vishwanath
Argument 2
Need for policy and regulatory alignment (NABH, ABDM) (Vikalp Sahni)
EXPLANATION
Vikalp highlights the importance of aligning AI initiatives with existing healthcare policies and standards like NABH and ABDM, suggesting that coherent regulation is essential for smooth digital transformation.
EVIDENCE
He explicitly references “NABH, ABDM, many of these things started coming up, talking about policy, talking about regulation” [31-33].
MAJOR DISCUSSION POINT
Policy alignment for AI in health
Argument 3
Trust dynamics between doctors, patients, and AI systems (Vikalp Sahni)
EXPLANATION
Vikalp observes that while innovation is crucial, trust remains the cornerstone; patients traditionally trust doctors, and integrating AI must preserve or enhance that trust relationship.
EVIDENCE
He remarks, “trust is the most important thing… I as a person would trust only the doctor that I have spoken to… The trust factor is kind of getting a little sort of changed” [105-112].
MAJOR DISCUSSION POINT
Balancing trust among stakeholders
AGREED WITH
Abhay Soi, Jigar Halani, Tanvi Lall, Dr. Rajendra Pratap Gupta
DISAGREED WITH
Abhay Soi
Argument 4
Gap between technology speed and organizational readiness (Vikalp Sahni)
EXPLANATION
Vikalp notes a mismatch between the rapid evolution of AI technologies and the slower pace at which health institutions, staff, and leadership can adapt, raising concerns about readiness.
EVIDENCE
He asks, “…are health institutions moving as fast as the technology is moving, even people in the organization…” [89-90].
MAJOR DISCUSSION POINT
Speed vs. readiness gap
AGREED WITH
Abhay Soi, Jigar Halani, Tanvi Lall
Argument 5
Vision for AI impact over the next 3‑5 years in Indian hospitals (Vikalp Sahni)
EXPLANATION
Vikalp asks Abhay to project how AI will transform Indian hospitals over the next three to five years, seeking insight into expected advancements and adoption trajectories.
EVIDENCE
He asks, “What is your view on how next three to five years changing in your hospitals or in general health care?” after noting AGI predictions [123-126].
MAJOR DISCUSSION POINT
Future AI timeline for hospitals
Argument 6
AI adoption as a strategic priority (KRA) for hospital CEOs
EXPLANATION
Vikalp questioned whether AI has become a key result area for CEOs, comparing it to earlier digitisation priorities such as online billing and accreditation, indicating a shift in executive focus toward AI.
EVIDENCE
He asked if AI adoption is now a priority for CEOs and part of their KRAs, referencing past priorities like billing, JCIA, and UHS compliance [60-66].
MAJOR DISCUSSION POINT
AI as executive performance metric
Nikhil Dhongari
6 arguments · 151 words per minute · 707 words · 279 seconds
Argument 1
Federated architecture enabling Indian AI models (Nikhil Dhongari)
EXPLANATION
Nikhil explains that the ABDM’s federated architecture provides a platform where Indian health data can be used to train AI models tailored to local contexts, avoiding reliance on foreign datasets.
EVIDENCE
He states, “Where ABDM created the federated architecture, where the model can go there and they can be tried… we have access to the longitudinal records of the patients… our Indian models can be tried” [204-209].
MAJOR DISCUSSION POINT
Federated framework for domestic AI
AGREED WITH
Abhay Soi, Tanvi Lall, Jigar Halani, Padmini Vishwanath
Argument 2
Tough policy decisions required for data capture and usage (Nikhil Dhongari)
EXPLANATION
Nikhil argues that decisive policy actions are needed to standardize data capture across hospitals, ensuring that AI pipelines receive high‑quality, consistent inputs.
EVIDENCE
He asks, “Should we wait for something to be adopted by and large to see…? To me, having the first mover advantage … we can’t afford to get it wrong” [418-426].
MAJOR DISCUSSION POINT
Policy mandates for data collection
Argument 3
Behavioral change needed for systematic data capture in hospitals (Nikhil Dhongari)
EXPLANATION
Nikhil highlights that clinicians must shift from paper‑based practices to digital entry, requiring cultural and behavioral adjustments to feed AI systems with reliable data.
EVIDENCE
He notes, “some of the docs are not ready to do, because they said that we are very much accustomed to writing on the paper only… we are accepting” and that “we need some tough stance” to enforce digital capture [418-424].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The shift toward automated clinical data capture, implying required behavioral change, is covered in [S27]; the broader cultural gap in data sharing is noted in [S1].
MAJOR DISCUSSION POINT
Changing clinician habits for digital data
DISAGREED WITH
Abhay Soi, Dr. Rajendra Pratap Gupta
Argument 4
Indian data essential to avoid bias and ensure relevance of AI models (Nikhil Dhongari)
EXPLANATION
Nikhil stresses that training AI on Indian health data is crucial to prevent bias and to produce models that reflect the country’s diverse population and disease patterns.
EVIDENCE
He points to the need for Indian-origin datasets so that models reflect local populations and disease patterns rather than imported assumptions [207-209].
MAJOR DISCUSSION POINT
Avoiding bias through local data
Argument 5
HMI solutions for language‑record capture in hospitals supporting AI pipelines (Nikhil Dhongari)
EXPLANATION
Nikhil describes Human‑Machine Interface (HMI) tools that capture patient interactions in multiple languages, feeding structured data into AI pipelines for better model training and service delivery.
EVIDENCE
He mentions, “Where ABDM created HMI solutions, where we have access to the longitudinal records of the patients, where our Indian models can be tried…” [207-209].
MAJOR DISCUSSION POINT
Multilingual HMI for AI data pipelines
Argument 6
Need for conversational AI models to serve low‑literacy, multilingual users
EXPLANATION
Nikhil argued that AI solutions must go beyond transactional interactions and be conversational, supporting users with low literacy through voice‑first designs, ensuring relevance across India’s diverse population.
EVIDENCE
He stated that models should be conversational, not just transactional, to serve low-literacy users and handle regional languages, emphasizing voice-first approaches for inclusivity [208-210][282-285].
MAJOR DISCUSSION POINT
Conversational, voice‑first AI for inclusive access
Jigar Halani
5 arguments · 190 words per minute · 1209 words · 380 seconds
Argument 1
Voice translation across regional languages as core infrastructure (Jigar Halani)
EXPLANATION
Jigar argues that a robust voice‑translation layer linking regional languages (e.g., Tamil to Hindi) is a foundational AI service that can bridge linguistic gaps in Indian healthcare.
EVIDENCE
He explains, “If I understand what Tamil doctor is speaking with the patient and convert it into Hindi and have that deployed in Delhi and Gujarat, I think I’m home essentially” and that this solves many problems [327-332].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Technical challenges of building multilingual AI systems, including voice translation, are identified in [S10] and reiterated in [S33].
MAJOR DISCUSSION POINT
Regional language voice translation
AGREED WITH
Nikhil Dhongari, Tanvi Lall
Argument 2
Accuracy and personalization as key trust factors (Jigar Halani)
EXPLANATION
Jigar states that trust in AI hinges on delivering highly accurate, personalized results, which aligns with user expectations for health outcomes.
EVIDENCE
He says, “Trust in my language could be the most accurate results…” indicating accuracy as a trust pillar [219-222].
MAJOR DISCUSSION POINT
Accuracy‑driven trust
AGREED WITH
Abhay Soi, Vikalp Sahni, Tanvi Lall, Dr. Rajendra Pratap Gupta
Argument 3
Preference for India‑hosted servers to address privacy and cost (Jigar Halani)
EXPLANATION
Jigar recommends hosting AI services on Indian servers to reduce latency, lower costs, and comply with data‑privacy regulations, emphasizing a locally controlled infrastructure.
EVIDENCE
He notes, “Do you use ChatGPT? … One of the servers I hosted over in India… the cost factor is also there and they have data privacy also there” [365-371] and adds that edge solutions may be needed for remote use [376-380].
MAJOR DISCUSSION POINT
Local server hosting for privacy & cost
AGREED WITH
Abhay Soi, Nikhil Dhongari, Tanvi Lall, Padmini Vishwanath
Argument 4
Cloud vs. edge decisions for multilingual voice AI, cost and privacy considerations (Jigar Halani)
EXPLANATION
Jigar discusses the trade‑offs between cloud and edge deployment for multilingual voice AI, noting that edge may be required for low‑connectivity settings while cloud offers scalability, with both impacting cost and data privacy.
EVIDENCE
He outlines, “If you have a very particular use case, very tiny one in a remote place, edge would be the solution… you don’t have a choice” and earlier describes the voice-translation cloud approach [327-332] and [376-380].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Considerations of deployment models (cloud vs. edge) for multilingual voice AI are discussed in [S10].
MAJOR DISCUSSION POINT
Choosing cloud vs. edge for voice AI
AGREED WITH
Abhay Soi, Tanvi Lall
DISAGREED WITH
Audience member 1
Argument 5
Mindset change, not technology, is the primary barrier to AI adoption
EXPLANATION
Jigar emphasized that the biggest challenge to AI uptake is shifting stakeholder mindsets to trust AI, rather than technical limitations, highlighting cultural acceptance as essential for deployment.
EVIDENCE
He remarked, “It’s not a technology problem. It’s a mindset problem,” and noted that trust varies among users such as his mother versus doctors [333-337].
MAJOR DISCUSSION POINT
Cultural acceptance as key enabler
Tanvi Lall
5 arguments · 190 words per minute · 810 words · 254 seconds
Argument 1
Personalized AI solutions require education and transformation (Tanvi Lall)
EXPLANATION
Tanvi argues that AI in health must be personalized and context‑specific, which demands extensive user education, stakeholder engagement, and a transformation of existing workflows.
EVIDENCE
She notes that “AI is meant to be personalized and context-specific… there is a lot of opportunity to bridge inequity gaps… you have to spend time with people who will be adopting it, educating them” [277-306].
MAJOR DISCUSSION POINT
Education‑driven personalized AI
AGREED WITH
Abhay Soi, Jigar Halani
Argument 2
Building trust through education, awareness, and continuous feedback (Tanvi Lall)
EXPLANATION
Tanvi emphasizes that trust in AI emerges from ongoing education, awareness‑raising, and feedback loops that involve clinicians and patients throughout the solution lifecycle.
EVIDENCE
She describes how “the demo phase goes really well… but after three months it becomes a side window… you have to think from the start of this as a journey… building trust… education, awareness, trust-building activities” [288-306].
MAJOR DISCUSSION POINT
Trust via continuous education
AGREED WITH
Abhay Soi, Vikalp Sahni, Jigar Halani, Dr. Rajendra Pratap Gupta
Argument 3
Transformation journey must go beyond pilots to embed AI in workflows (Tanvi Lall)
EXPLANATION
Tanvi warns that pilot projects alone are insufficient; sustainable impact requires integrating AI into daily clinical workflows and scaling from pilot to population level.
EVIDENCE
She observes, “the demo phase goes really well… after three months this is just a side window… you have to think from the start of this as a journey… not just a one-time contract” [291-298].
MAJOR DISCUSSION POINT
From pilot to systemic integration
AGREED WITH
Abhay Soi, Vikalp Sahni, Jigar Halani
Argument 4
Importance of AI‑ready data systems and open data sharing for model training (Tanvi Lall)
EXPLANATION
Tanvi highlights that high‑quality, AI‑ready datasets and open data sharing are critical for building effective models, praising institutions that contribute data back to the community.
EVIDENCE
She says, “I think AI-ready data systems… we cite that as an example we are working closely with Mosby… institutions are not extractive when it comes to data but they want to give it back” [351-357].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The lack of a data-sharing culture despite infrastructure [S1] and the role of automated data capture in creating AI-ready datasets [S27] provide supporting context.
MAJOR DISCUSSION POINT
Open, AI‑ready data ecosystems
AGREED WITH
Abhay Soi, Nikhil Dhongari, Jigar Halani, Padmini Vishwanath
Argument 5
AI can bridge inequity gaps through regional‑language, voice‑first solutions
EXPLANATION
Tanvi highlighted that AI designed for low‑resource regional languages and voice interfaces can reduce health inequities by reaching underserved populations and building trust.
EVIDENCE
She described building AI solutions in regional, low-resource languages and voice-first designs to foster trust and inclusion for disadvantaged groups [277-283].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Multilingual AI for inclusive health services and outreach to underserved communities is highlighted in [S10] and the extension of AI to rural settings via mobile vans in [S28].
MAJOR DISCUSSION POINT
Voice‑first, multilingual AI for inclusive health
Deepak Tuli
2 arguments · 141 words per minute · 947 words · 402 seconds
Argument 1
Moderator’s call for policy‑driven outcomes and future roadmap (Deepak Tuli)
EXPLANATION
Deepak, as moderator, asks Dr. Gupta to comment on how policy should shape AI outcomes and outlines the need for a clear roadmap linking public and private health sectors.
EVIDENCE
He asks, “Do you look at policy… private sector, PSC… do you think health is one single sector or do you start defining…?” prompting a policy-focused response [310-312].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The need for responsible AI frameworks and regulatory harmonisation to guide outcomes is discussed in [S31] and [S34].
MAJOR DISCUSSION POINT
Policy‑guided AI roadmap
AGREED WITH
Vikalp Sahni, Dr. Rajendra Pratap Gupta, Padmini Vishwanath
Argument 2
Need for a unified policy roadmap linking public and private health sectors for AI implementation
EXPLANATION
Deepak asked how policy should address both private and public sectors, indicating the necessity of a cohesive regulatory framework to guide AI deployment across the entire health system.
EVIDENCE
He inquired whether health should be treated as a single sector or differentiated between private and public, seeking guidance on policy design [310-314].
MAJOR DISCUSSION POINT
Integrated policy framework for AI across health sectors
Dr. Rajendra Pratap Gupta
4 arguments · 192 words per minute · 849 words · 264 seconds
Argument 1
ABDM history, 860 million digital IDs and policy integration (Dr. Rajendra Pratap Gupta)
EXPLANATION
Dr. Gupta outlines the evolution of the Ayushman Bharat Digital Mission (ABDM) from its inception in 2014, culminating in the creation of 860 million ABHA IDs, and stresses its role in building a national digital health infrastructure.
EVIDENCE
He recounts, “It started actually in 2014… we have 860 million ABHA IDs… we have created the digital infrastructure” [178-186].
MAJOR DISCUSSION POINT
ABDM’s scale and policy foundation
AGREED WITH
Vikalp Sahni, Deepak Tuli, Padmini Vishwanath
Argument 2
Ethical prescribing practices and regulation to safeguard trust (Dr. Rajendra Pratap Gupta)
EXPLANATION
Dr. Gupta argues that unethical prescribing habits undermine trust and that regulation, not technology alone, is needed to enforce ethical standards in clinical practice.
EVIDENCE
He states, “It’s about ethics and doing the right things… how many doctors would actually want to tell what they charge… we need to regulate that unethical part” [409-414].
MAJOR DISCUSSION POINT
Regulating prescribing ethics
AGREED WITH
Abhay Soi, Vikalp Sahni, Jigar Halani, Tanvi Lall
Argument 3
COVID‑driven acceleration of digital health and rapid implementation (Dr. Rajendra Pratap Gupta)
EXPLANATION
Dr. Gupta cites the COVID‑19 pandemic as a catalyst that accelerated digital health adoption, enabling rapid vaccination tracking and showcasing the potential of digital tools in crisis response.
EVIDENCE
He mentions, “We had COVID… 2.2 billion people… getting vaccinated… just going to the app and getting it done” [191-192].
MAJOR DISCUSSION POINT
Pandemic as a digital health catalyst
Argument 4
Optimism about rapid digital health implementation within a three‑year horizon
EXPLANATION
Dr. Gupta expressed confidence that digital health initiatives can be realized within three years, reflecting a shift from decade‑long expectations to much shorter implementation cycles.
EVIDENCE
He noted that what was once discussed as a decade-long effort is now talked about as three years at most, indicating accelerated implementation expectations [190-194].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Accelerated timelines for digital health rollout, moving from decade-long expectations to a three-year horizon, are mentioned in [S2].
MAJOR DISCUSSION POINT
Accelerated timeline for digital health rollout
Padmini Vishwanath
3 arguments · 143 words per minute · 451 words · 188 seconds
Argument 1
Normative guidance for equitable AI deployment (Padmini Vishwanath)
EXPLANATION
Padmini stresses the need for normative frameworks that ensure AI tools are equitable, especially when deployed in low‑resource settings, advocating for standards that guide ethical AI use.
EVIDENCE
She says, “We work on creating norms and normative guidance to ensure that AI is equitable… pilots are looking at reversing this logic… developing readiness frameworks for the most remote settings” [316-325].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Normative frameworks and regulatory harmonisation for equitable AI deployment are addressed in [S31] and [S34].
MAJOR DISCUSSION POINT
Equitable AI through normative standards
AGREED WITH
Vikalp Sahni, Deepak Tuli, Dr. Rajendra Pratap Gupta
Argument 2
Emerging focus on empathy, dignity, and qualitative care dimensions (Padmini Vishwanath)
EXPLANATION
Padmini notes a shift from purely quantitative AI metrics toward qualitative aspects such as empathy, dignity, and patient‑caregiver relationships, especially in palliative care pilots.
EVIDENCE
She observes, “We are seeing more discussions around the qualitative dimensions… empathy, dignity, care… in a pilot on palliative care” [352-356].
MAJOR DISCUSSION POINT
Qualitative care in AI evaluation
Argument 3
Prioritizing modernization of legacy systems over building new ones for AI readiness
EXPLANATION
Padmini suggested that health systems should first modernize existing legacy infrastructure before adding new solutions, to create a solid foundation for equitable AI deployment.
EVIDENCE
She recommended “slow down a little… modernize existing legacy systems rather than building new ones” to ensure equitable AI adoption [324-325].
MAJOR DISCUSSION POINT
Legacy system modernization as prerequisite for AI
Audience member 1
2 arguments · 186 words per minute · 135 words · 43 seconds
Argument 1
Audience query on optimal hosting architecture for 22‑language voice translation (Audience member 1)
EXPLANATION
The audience member asks whether a multilingual voice‑translation solution should be hosted on the edge, cloud, or a hybrid architecture, seeking guidance on cost, connectivity, and privacy considerations.
EVIDENCE
He asks, “what model should… be hosted? On the edge, on a gadget, or hybrid?” after describing the 22-language target [372-375].
MAJOR DISCUSSION POINT
Hosting architecture for multilingual voice AI
DISAGREED WITH
Jigar Halani
Argument 2
Voice AI solutions require continuous connectivity; offline operation is limited
EXPLANATION
The audience member asked about the feasibility of offline voice translation, and Jigar clarified that voice AI needs an active connection and cannot function fully offline.
EVIDENCE
Jigar responded that “You can’t be having offline things… voice is something you need to have connectivity in play” when asked about offline capability [381-384].
MAJOR DISCUSSION POINT
Connectivity requirements for voice‑based AI
Audience member 2
1 argument · 121 words per minute · 52 words · 25 seconds
Argument 1
Concern over reliance on global datasets versus Indian‑origin data (Audience member 2)
EXPLANATION
The audience member wonders how much Indian AI tools depend on foreign data versus indigenous datasets, highlighting concerns about relevance and bias.
EVIDENCE
He says, “I am curious how far this AI, Indian based AI tools are relying on Indian data rather than global data sets” [390-393].
MAJOR DISCUSSION POINT
Local vs. global data reliance
Announcer
1 argument · 134 words per minute · 146 words · 65 seconds
Argument 1
Emphasis on multi‑stakeholder collaboration for AI in health
EXPLANATION
The announcer introduced representatives from the National Health Authority, WHO, and NVIDIA, underscoring the importance of cross‑sector partnership to drive AI‑enabled health initiatives in India.
EVIDENCE
The announcer introduced the Director IT at the National Health Authority, a WHO researcher, and NVIDIA’s Director of Enterprise Solutions Architecture, highlighting their roles in national digital health architecture and AI strategy [157-162].
MAJOR DISCUSSION POINT
Cross‑sector collaboration for AI in health
Agreements
Agreement Points
Robust, locally sourced data infrastructure is a prerequisite for effective AI in health
Speakers: Abhay Soi, Nikhil Dhongari, Tanvi Lall, Jigar Halani, Padmini Vishwanath
Data lake built over 15 years for patient records (Abhay Soi) Federated architecture enabling Indian AI models (Nikhil Dhongari) Importance of AI‑ready data systems and open data sharing for model training (Tanvi Lall) Preference for India‑hosted servers to address privacy and cost (Jigar Halani) Prioritizing modernization of legacy systems over building new ones for AI readiness (Padmini Vishwanath)
All speakers stress that a solid, interoperable data foundation (whether a long-term data lake, a federated ABDM architecture, AI-ready datasets, or modernised legacy systems) and keeping that data within India are essential before AI can deliver value. [14][204-209][351-357][365-371][324-325]
POLICY CONTEXT (KNOWLEDGE BASE)
Governance discussions stress that making data AI-ready requires coordinated standards and often local (edge) processing to meet security and compliance needs [S52][S62], and a cultural shift toward data sharing is essential [S67].
Trust, safety and privacy are non‑negotiable for AI adoption in healthcare
Speakers: Abhay Soi, Vikalp Sahni, Jigar Halani, Tanvi Lall, Dr. Rajendra Pratap Gupta
Safety, supervision, and data‑privacy as non‑negotiables (Abhay Soi) Trust dynamics between doctors, patients, and AI systems (Vikalp Sahni) Accuracy and personalization as key trust factors (Jigar Halani) Building trust through education, awareness, and continuous feedback (Tanvi Lall) Ethical prescribing practices and regulation to safeguard trust (Dr. Rajendra Pratap Gupta)
The panel agrees that AI can only be deployed when it demonstrably protects patients, respects privacy, delivers accurate results and is embedded in ethical clinical practice; trust must be earned through supervision, education and regulation. [52-57][105-112][219-222][288-306][409-414]
POLICY CONTEXT (KNOWLEDGE BASE)
Patient privacy, data protection and bias mitigation are repeatedly described as non-negotiable, e.g., Tercova’s emphasis on privacy and bias [S56] and the EU GPAI code’s focus on security foundations [S58]; safety is highlighted as paramount in healthcare AI [S51].
AI rollout should be cautious, culturally ready and embedded beyond pilots
Speakers: Abhay Soi, Vikalp Sahni, Jigar Halani, Tanvi Lall
Institutional culture demands circumspect AI rollout, not just first‑mover advantage (Abhay Soi) Gap between technology speed and organizational readiness (Vikalp Sahni) Mindset change, not technology, is the primary barrier to AI adoption (Jigar Halani) Transformation journey must go beyond pilots to embed AI in workflows (Tanvi Lall)
All participants highlight that successful AI adoption requires a deliberate, supervised approach, addressing organisational mindset, training and ensuring solutions move from demo to routine practice. [90-98][89-90][333-337][291-298]
POLICY CONTEXT (KNOWLEDGE BASE)
Cautious implementation is urged because of patient safety and cultural readiness concerns in India [S51], and scaling beyond pilots remains limited, indicating a need for broader integration [S60]; balanced policy pathways are advocated [S61].
Early, low‑hanging‑fruit AI use cases can deliver immediate efficiency gains
Speakers: Abhay Soi, Tanvi Lall, Jigar Halani
Predictive bed‑occupancy and AI tool deployment improving operational efficiency (Abhay Soi) Automation of clinical data capture reduces clinician time and improves efficiency (Abhay Soi) Personalized AI solutions require education and transformation (Tanvi Lall) Cloud vs. edge decisions for multilingual voice AI, cost and privacy considerations (Jigar Halani)
The group agrees that applying AI to concrete operational problems (predicting bed availability, automating data entry, and deploying scalable infrastructure) yields quick wins and builds a foundation for broader integration. [22-27][24-27][277-283][376-380]
POLICY CONTEXT (KNOWLEDGE BASE)
Reducing administrative burden and paperwork was identified as a pervasive low-hanging-fruit use case for Indian healthcare AI, offering quick efficiency gains [S54].
Coherent policy, standards and regulatory frameworks are essential to steer AI adoption
Speakers: Vikalp Sahni, Deepak Tuli, Dr. Rajendra Pratap Gupta, Padmini Vishwanath
AI adoption wave and regulatory challenges (Vikalp Sahni) Moderator’s call for policy‑driven outcomes and future roadmap (Deepak Tuli) ABDM history, 860 million digital IDs and policy integration (Dr. Rajendra Pratap Gupta) Normative guidance for equitable AI deployment (Padmini Vishwanath)
All agree that AI must be guided by clear, harmonised policies (ABDM, NABH, normative frameworks) and regulatory mechanisms to ensure safety, equity and scalability across public and private health sectors. [30-33][310-314][178-186][316-325]
POLICY CONTEXT (KNOWLEDGE BASE)
The need for coordination, standardisation, and institutional alignment is highlighted as a core governance challenge [S52]; practical policy pathways are called for to guide adoption [S61], and privacy/security frameworks are deemed foundational [S58].
Multilingual and voice‑first AI solutions are critical to bridge digital divides in Indian healthcare
Speakers: Jigar Halani, Nikhil Dhongari, Tanvi Lall
Voice translation across regional languages as core infrastructure (Jigar Halani) Need for conversational AI models to serve low‑literacy, multilingual users (Nikhil Dhongari) AI can bridge inequity gaps through regional‑language, voice‑first solutions (Tanvi Lall)
The speakers converge on the need for AI that understands and speaks multiple Indian languages, often via voice-first designs, to reach underserved populations and reduce health inequities. [327-332][208-210][282-285][277-283]
POLICY CONTEXT (KNOWLEDGE BASE)
Voice technology and multilingual capabilities are flagged as crucial horizontal solutions for India’s linguistic diversity [S53], and multilingual barriers are repeatedly cited as a major challenge [S54].
Similar Viewpoints
Both stress that the rapid evolution of AI technologies outpaces the ability of hospitals to adopt them safely, calling for measured, readiness‑based implementation. [90-98][89-90]
Speakers: Abhay Soi, Vikalp Sahni
Institutional culture demands circumspect AI rollout, not just first‑mover advantage (Abhay Soi) Gap between technology speed and organizational readiness (Vikalp Sahni)
Both identify cultural acceptance and continuous education as the key levers to build trust in AI, rather than purely technical solutions. [333-337][288-306]
Speakers: Jigar Halani, Tanvi Lall
Mindset change, not technology, is the primary barrier to AI adoption (Jigar Halani) Building trust through education, awareness, and continuous feedback (Tanvi Lall)
Both argue that clinicians and users must change habits and receive training to feed AI pipelines with quality data and to realise personalized AI benefits. [418-424][277-283]
Speakers: Nikhil Dhongari, Tanvi Lall
Behavioral change needed for systematic data capture in hospitals (Nikhil Dhongari) Personalized AI solutions require education and transformation (Tanvi Lall)
Unexpected Consensus
Mindset and trust are viewed as the primary barrier to AI adoption across both public‑sector policy makers and private‑sector technologists
Speakers: Dr. Rajendra Pratap Gupta, Jigar Halani
Ethical prescribing practices and regulation to safeguard trust (Dr. Rajendra Pratap Gupta) Mindset change, not technology, is the primary barrier to AI adoption (Jigar Halani)
It is unexpected that a senior public-sector policy figure (Dr. Gupta) and a private-sector technology leader (Jigar) converge on the idea that cultural mindset, rather than technical or regulatory constraints, is the decisive factor for AI uptake. [409-414][333-337]
POLICY CONTEXT (KNOWLEDGE BASE)
Human factors such as fear of replacement and the need for clear communication are major barriers [S50]; trust is described as the foundational requirement for AI uptake [S57]; overall acceptance is seen as the underlying challenge [S66].
Overall Assessment

The panel shows strong convergence on six core themes: (1) the necessity of a robust, Indian‑centric data foundation; (2) trust, safety and privacy as non‑negotiable; (3) the need for cautious, culturally ready rollout; (4) leveraging low‑hanging‑fruit efficiency gains; (5) aligning AI with coherent policy and regulatory frameworks; (6) delivering multilingual, voice‑first solutions to bridge digital divides.

High consensus across speakers and stakeholder groups, indicating a shared understanding that technical readiness, ethical safeguards, policy alignment and inclusive design are all essential for scaling AI in Indian healthcare. This consensus suggests that future initiatives are likely to prioritize data infrastructure, trust‑building, and policy support before pursuing large‑scale AI deployments.

Differences
Different Viewpoints
Approach to AI adoption – cautious, low‑hanging‑fruit rollout versus holistic transformation that requires education, trust‑building and integration beyond pilots
Speakers: Abhay Soi, Tanvi Lall
Institutional culture demands circumspect AI rollout, not just first‑mover advantage (Abhay Soi) Transformation journey must go beyond pilots to embed AI in workflows; need education, awareness and continuous feedback (Tanvi Lall)
Abhay stresses that hospitals should adopt AI cautiously, focusing on early, low-hanging-fruit use cases and ensuring safety before broader integration [90-98]. Tanvi argues that pilots often stall after three months and that lasting impact requires a full transformation with education, awareness and feedback loops to build trust and embed AI into daily workflows [288-306]. The two speakers share the goal of effective AI use but disagree on the primary pathway to achieve it.
POLICY CONTEXT (KNOWLEDGE BASE)
Adoption beyond pilots remains uneven, reflecting tension between incremental and holistic strategies [S60]; a balanced view that acknowledges both risks and opportunities is advocated [S61].
Primary barrier to AI uptake – safety and supervision versus trust between patients, doctors and AI systems
Speakers: Abhay Soi, Vikalp Sahni
Safety, supervision, and data‑privacy as non‑negotiables (Abhay Soi) Trust dynamics between doctors, patients, and AI systems (Vikalp Sahni)
Abhay highlights that patient safety, strict supervision and data-privacy are the non-negotiable foundations for AI in healthcare, warning that errors can be life-threatening and require extensive oversight [52-57]. Vikalp, on the other hand, stresses that trust is the most important factor for patients and clinicians, and that the introduction of AI may shift the traditional trust relationship with doctors [105-112]. Both see trust and safety as crucial, but they prioritize different aspects as the main barrier.
POLICY CONTEXT (KNOWLEDGE BASE)
Safety and patient protection are emphasized as non-negotiable in healthcare AI deployments [S51]; trust is equally highlighted as essential for adoption across stakeholders [S57].
Readiness of data sharing culture – existence of a comprehensive data lake versus a missing culture of data sharing and need for behavioral change
Speakers: Abhay Soi, Nikhil Dhongari, Dr. Rajendra Pratap Gupta
Creation of a long‑term patient data lake (Abhay Soi) Behavioral change needed for systematic data capture in hospitals (Nikhil Dhongari) Culture of data is missing (Dr. Rajendra Pratap Gupta)
Abhay describes a unified data lake covering 15 years of patient records that is updated in real time [14]. Nikhil points out that many clinicians still rely on paper, requiring cultural and behavioral shifts to feed AI pipelines with reliable digital data [418-424]. Dr. Gupta adds that despite the digital infrastructure, a culture of data sharing is still lacking, limiting the usefulness of the ABHA IDs [398-399]. Thus, there is disagreement on how mature the data-sharing environment actually is.
POLICY CONTEXT (KNOWLEDGE BASE)
Data readiness is framed as a governance gap requiring cultural change toward sharing, as noted in AI governance discussions [S52] and explicit calls for a data-sharing culture shift [S67].
Optimal deployment architecture for multilingual voice AI – edge versus cloud (and cost/privacy considerations)
Speakers: Jigar Halani, Audience member 1
Cloud vs. edge decisions for multilingual voice AI, cost and privacy considerations (Jigar Halani) Audience query on optimal hosting architecture for 22‑language voice translation (Audience member 1)
Jigar explains that most voice-translation services run in the cloud, but for very remote or low-connectivity use-cases edge deployment may be required, also noting cost and data-privacy benefits of India-hosted servers [365-371][376-380]. The audience asks whether a 22-language solution should be hosted on edge, cloud or hybrid, seeking concrete guidance [372-375]. The differing viewpoints illustrate a lack of consensus on the best architectural approach.
POLICY CONTEXT (KNOWLEDGE BASE)
Debates on edge versus cloud focus on privacy, security, and compliance for local data processing versus cloud orchestration for scalability [S62][S63].
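The edge-versus-cloud decision Jigar describes can be expressed as a simple decision rule. The sketch below is illustrative only: the `SiteProfile` fields and the numeric thresholds are assumptions for demonstration, not guidance from the panel, which offered no concrete figures.

```python
from dataclasses import dataclass

@dataclass
class SiteProfile:
    """Operating conditions at a deployment site (illustrative fields)."""
    bandwidth_kbps: int          # sustained uplink bandwidth
    connectivity_uptime: float   # fraction of time the site is online, 0..1
    data_residency_required: bool  # must audio/transcripts stay in-country?

def choose_hosting(site: SiteProfile) -> str:
    """Pick a hosting tier for voice-translation inference.

    Encodes the panel's rule of thumb: edge only where connectivity is
    too poor for cloud round-trips; a hybrid tier for intermittent links;
    otherwise centralised cloud, preferring an India-hosted region when
    data residency matters. Thresholds are placeholders.
    """
    if site.bandwidth_kbps < 64 or site.connectivity_uptime < 0.5:
        return "edge"        # run compact on-device models
    if site.connectivity_uptime < 0.95:
        return "hybrid"      # edge fallback, cloud when reachable
    # Plenty of connectivity: centralise for scale and cost control.
    return "cloud-india" if site.data_residency_required else "cloud"
```

A hybrid tier is deliberately separate from pure edge: it lets a site serve requests locally during outages while still benefiting from centralised model updates when the link is up.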
Unexpected Differences
AI as a strategic priority for CEOs – Vikalp’s doubt versus Abhay’s affirmation
Speakers: Vikalp Sahni, Abhay Soi
AI adoption as a strategic priority (KRA) for hospital CEOs (Vikalp Sahni) AI has clearly become a priority for hospitals (Abhay Soi)
Vikalp questions whether AI has risen to the level of a CEO KRA, comparing it to earlier digitisation priorities [60-66]. Abhay responds, "No, I think clearly, clearly it has," indicating that AI is already a priority [67]. The contrast between questioning and affirmation was not anticipated given the overall consensus on AI importance.
POLICY CONTEXT (KNOWLEDGE BASE)
Surveys show CEOs rank AI integration as a top strategic priority despite cost concerns [S64]; however, many leaders report challenges achieving ROI, reflecting divergent executive views [S65].
Overall Assessment

The panel shows broad consensus that AI is vital for India’s health sector, but there are notable divergences on how to implement it: (1) cautious, low‑hanging‑fruit roll‑out versus a deep, education‑driven transformation; (2) whether safety/supervision or trust is the primary barrier; (3) the maturity of data‑sharing culture; and (4) the optimal technical architecture for multilingual voice AI. These disagreements reflect differing institutional perspectives (hospital operator vs. policy makers vs. technology providers) and point to the need for coordinated strategies that address safety, trust, data governance, and infrastructure choices together.

Moderate – while all participants agree on AI’s importance, the lack of alignment on rollout strategy, trust vs. safety emphasis, data culture, and deployment architecture could slow coherent national progress unless reconciled.

Partial Agreements
All speakers concur that AI will play a pivotal role in India’s health system and that policy, infrastructure and trust‑building measures are needed, but they differ on the sequencing, governance mechanisms and technical focus (e.g., low‑hanging‑fruit vs. federated architecture vs. normative frameworks) [67-71][30-33][277-283][204-209][327-332][178-186][316-325].
Speakers: Abhay Soi, Vikalp Sahni, Tanvi Lall, Nikhil Dhongari, Jigar Halani, Dr. Rajendra Pratap Gupta, Padmini Vishwanath
AI is essential for future healthcare capacity (Abhay Soi) AI adoption wave and regulatory challenges (Vikalp Sahni) Personalized AI solutions require education and transformation (Tanvi Lall) Federated architecture enabling Indian AI models (Nikhil Dhongari) Voice translation across regional languages as core infrastructure (Jigar Halani) ABDM history, 860 million digital IDs and policy integration (Dr. Rajendra Pratap Gupta) Normative guidance for equitable AI deployment (Padmini Vishwanath)
Takeaways
Key takeaways
Robust digital infrastructure (e.g., a 15‑year patient data lake) is the foundation for AI in Indian hospitals. AI adoption is progressing but is constrained by regulatory, safety, data‑privacy and ethical considerations; supervision remains essential. Trust—between patients, clinicians and AI systems—is a non‑negotiable factor; AI must augment, not replace, clinicians, especially for safety‑critical decisions. Institutional culture and behavioral change are critical; AI rollout must be circumspect, education‑driven and embedded beyond pilot projects. Demographic pressure makes AI a necessity for future healthcare capacity; predictive health, bed‑occupancy forecasting and home‑care are priority use‑cases for the next 3‑5 years. Indian‑specific data and models are required to avoid bias; current Indian health data is still fragmented and needs better capture and sharing. Technical architecture decisions (cloud vs. edge, India‑hosted servers) must balance latency, cost, connectivity and privacy, especially for multilingual voice translation. Policy evolution (ABDM, NABH, inclusion of private sector) is creating a unified digital health backbone, but tougher policy mandates are needed for data capture and ethical prescribing.
Resolutions and action items
Max Healthcare will continue building in‑house AI capabilities (ICD‑11 tagging, predictive bed‑occupancy, clinician‑support forms) with a safety‑first approach. Nikhil Dhongari’s team will provide free e‑shift/flight solutions to small hospitals to enable language‑record capture and promote digital prescriptions. Jigar Halani recommends adopting a use‑case‑driven hosting model: edge for remote low‑bandwidth scenarios, cloud (preferably India‑hosted) for most other multilingual voice AI workloads. Tanvi Lall urges builders to design transformation journeys that include education, feedback loops and long‑term workflow integration rather than one‑off pilots. Dr. Rajendra Pratap Gupta emphasizes leveraging the ABDM digital backbone (860 M ABHA IDs) to create a unified public‑private health data ecosystem. All participants agreed to prioritize AI‑enabled safety checks (e.g., ECG triage) before scaling efficiency‑focused applications.
Unresolved issues
Exact regulatory framework and timelines for AI approval, especially concerning ICD‑11 tagging and AI‑driven clinical decision support, remain undefined. How to achieve sufficient, high‑quality Indian health data for training robust AI models; current ABDM records are still limited. Optimal architecture for large‑scale multilingual voice translation (cloud vs. edge, synchronization frequency) was discussed but no consensus reached. Mechanisms to systematically shift clinician behavior toward digital documentation and away from paper‑based processes are still lacking. Standardized methods for continuous patient feedback into AI models to build trust were mentioned but not concretized. Enforcement of ethical prescribing practices and integration of real‑time prescription audit systems remain open challenges.
Suggested compromises
Adopt AI as an assistive safety tool first, keeping human oversight, before expanding to efficiency‑driven functions. Balance first‑mover enthusiasm with a cautious, supervised rollout to avoid patient‑safety risks. Implement a hybrid cloud‑edge strategy: use edge devices where connectivity is poor, otherwise leverage centralized India‑hosted cloud for scalability and cost control. Include both private and public sectors in health policy mandates to ensure uniform data standards and interoperability. Run pilot projects with built‑in feedback and education components, then scale only after demonstrated workflow integration and trust metrics. Encourage data sharing by institutions while maintaining privacy through in‑house anonymization tools, enabling Indian‑specific model training.
Thought Provoking Comments
“The true test of technology, and that will be the true test of AI as well, is when you don’t interface with technology, but the experiences are improved.”
Highlights that successful AI integration should be seamless and invisible to end‑users, shifting focus from flashy tools to real patient outcomes.
Set the tone for the discussion on AI’s role in healthcare, prompting later speakers to consider usability and patient‑centric metrics rather than just technical capabilities.
Speaker: Abhay Soi
“We have a lot of failures… like Edison, you would have found out every way to fail, and I think perhaps the only way to succeed will be in front of us.”
Frames failure as an essential part of innovation, encouraging a culture of experimentation rather than fearing setbacks.
Encouraged Vikalp and other panelists to acknowledge the numerous pilot projects and the need for iterative learning, leading to deeper discussion on why many AI pilots stall after initial demos.
Speaker: Abhay Soi
“When a patient comes to ER with chest pain, an AI tool can flag admission risk even if the ECG looks normal to the doctor, potentially preventing a missed heart attack.”
Provides a concrete, high‑stakes clinical scenario where AI acts as a safety net, illustrating the immediate value of AI in critical care.
Shifted the conversation from abstract efficiency gains to patient safety, prompting Jigar and others to discuss trust, supervision, and the ethical imperative of AI in life‑saving decisions.
Speaker: Abhay Soi
“Innovation is definitely important, but trust is key. Patients will only trust the doctor they have spoken to, and introducing AI changes that trust dynamic.”
Challenges the assumption that technology adoption alone drives progress, emphasizing the central role of patient‑doctor trust in healthcare adoption.
Prompted Abhay to discuss how AI must be positioned as an assistive tool rather than a replacement, and led to later remarks about building trust through transparency and regulatory frameworks.
Speaker: Vikalp Sahni
“For the first time the national health policy explicitly mentions both private and public sectors, breaking the barrier between them to deliver care.”
Signals a major policy shift that could harmonize standards and incentives across the entire health ecosystem, affecting AI deployment strategies.
Redirected the dialogue toward systemic integration, influencing Tanvi and Nikhil to talk about scaling solutions across diverse settings and the need for unified data standards.
Speaker: Dr. Rajendra Pratap Gupta
“Instead of building AI for the most advanced tertiary centers and then adapting it, we should develop readiness frameworks for the most remote settings first; this reverses the usual logic and improves equity.”
Introduces a novel, equity‑focused design principle that challenges the top‑down approach to AI deployment in low‑resource environments.
Inspired discussion on contextualizing AI for multilingual, low‑resource contexts (Jigar’s language example) and reinforced Tanvi’s point about personalized, region‑specific transformations.
Speaker: Padmini Vishwanath
“AI is personalized and context‑specific; building a solution is not just a tech stack but a transformation that includes education, trust‑building, and moving from pilot to population scale.”
Broadens the conversation from technology implementation to systemic change management, emphasizing the socio‑technical ecosystem needed for sustainable AI adoption.
Guided the panel to consider long‑term adoption challenges, leading Nikhil and Jigar to discuss data sharing, model trust, and the necessity of behavioral change in clinicians.
Speaker: Tanvi Lall
Overall Assessment

The discussion was shaped by a series of pivotal insights that moved it from a surface‑level overview of AI hype to a nuanced exploration of practical, ethical, and systemic challenges. Abhay’s emphasis on invisible technology and learning from failure set a foundation that framed AI as a silent enabler rather than a headline feature. Vikalp’s reminder of trust and the policy shift highlighted by Dr. Gupta reframed the conversation around the broader ecosystem—patients, regulators, and both public and private providers. Padmini’s equity‑first design principle and Tanvi’s transformation‑focused roadmap deepened the analysis, steering participants toward concrete strategies for scaling AI responsibly. Collectively, these comments redirected the dialogue toward patient safety, trust, data readiness, and systemic integration, establishing a more grounded and forward‑looking perspective on AI in Indian healthcare.

Follow-up Questions
What are the specific challenges hindering faster AI adoption in hospitals, and what measures can accelerate adoption?
Identifying barriers and enablers is essential to scale AI solutions across healthcare institutions.
Speaker: Vikalp Sahni
Has AI adoption become a priority KPI/KRA for hospital CEOs and operators, similar to earlier digitization priorities?
Understanding strategic priority informs resource allocation and leadership focus.
Speaker: Vikalp Sahni
How ready are health institutions, including clinicians and staff, to keep pace with rapid AI advancements?
Institutional readiness determines the speed and success of AI integration.
Speaker: Vikalp Sahni
What are the expected developments in AI for hospitals over the next three to five years?
A forward‑looking view helps planners and investors align with emerging opportunities.
Speaker: Vikalp Sahni
How is patient trust evolving regarding AI tools, and how do doctors currently perceive AI solutions?
Trust is a critical factor for adoption and for ensuring patient safety.
Speaker: Vikalp Sahni
What worked and what challenges remain in implementing ABDM, especially regarding interoperability and its impact on clinical decision‑making?
Lessons from ABDM rollout can guide future policy and technical improvements.
Speaker: Deepak Tuli (to Dr. Rajendra Pratap Gupta)
How can the lessons from ABDM deployment in the public sector be applied to the private sector to deepen workflow integration and improve outcomes?
Transferring successful public‑sector practices to private hospitals could accelerate nationwide AI impact.
Speaker: Deepak Tuli (to Nikhil Dhongari)
What strategies can Indian AI model builders use to build trust among physicians and operators so solutions are adopted?
Trust‑building mechanisms are needed for clinicians to rely on AI recommendations.
Speaker: Deepak Tuli (to Jigar Halani)
For multilingual voice and translation AI solutions targeting many languages, should the inference engine be hosted on edge devices, the cloud, or a hybrid architecture?
Architecture choice affects latency, cost, and data‑privacy compliance.
Speaker: Audience member 1 (addressed by Jigar Halani)
To what extent do Indian AI healthcare tools rely on Indian patient data versus global datasets?
Data locality influences model relevance, bias, and regulatory acceptance.
Speaker: Audience member 2 (addressed by Dr. Rajendra Pratap Gupta)
What critical gaps still exist in India’s health‑AI ecosystem, and what outcomes can be expected in the coming year?
Pinpointing gaps directs policy focus and investment for near‑term impact.
Speaker: Deepak Tuli
How many AI models are currently being trained on Indian health data, and what is needed to increase this number?
Quantifying locally‑trained models highlights the need for data infrastructure and talent.
Speaker: Nikhil Dhongari
Area for further research: Develop affordable, accurate ICD‑11 tagging solutions for Indian healthcare data.
Current ICD‑11 tools are expensive and ineffective, limiting coding compliance and analytics.
Speaker: Abhay Soi
Area for further research: Evaluate the effectiveness of predictive bed‑availability analytics on patient flow and clinical outcomes.
Early wins need rigorous validation to justify broader deployment.
Speaker: Abhay Soi
Area for further research: Create frameworks for AI readiness in remote, low‑resource healthcare settings, focusing on device availability and workflow integration.
Ensuring equity requires tailored readiness models for underserved regions.
Speaker: Padmini Vishwanath
Area for further research: Study qualitative impacts of AI on empathy, dignity, and caregiver‑patient interaction, especially in palliative care.
Balancing quantitative metrics with human‑centred outcomes is essential for ethical AI use.
Speaker: Padmini Vishwanath
Area for further research: Design feedback loops that allow patients to feed second‑opinion outcomes back into AI models to improve trust and accuracy.
Patient‑generated feedback can enhance model learning and credibility.
Speaker: Jigar Halani
Area for further research: Investigate policy mechanisms to enforce medical ethics and prescription practices using AI‑driven monitoring.
Addressing unethical prescribing is crucial for safe, effective AI integration.
Speaker: Dr. Rajendra Pratap Gupta
Area for further research: Develop strategies for transitioning from pilot AI projects to population‑scale deployments in healthcare.
Sustainable scaling beyond demos is needed for lasting impact.
Speaker: Tanvi Lall
Area for further research: Assess the impact of data‑sharing initiatives (e.g., MCP server) on AI model development and personalization in health care.
Understanding how open data accelerates model performance informs data‑policy decisions.
Speaker: Tanvi Lall
Area for further research: Examine the role of edge versus cloud computing for AI in healthcare, considering cost, latency, and data privacy in the Indian context.
Infrastructure choices affect feasibility, especially in low‑bandwidth or privacy‑sensitive environments.
Speaker: Jigar Halani

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

AI Automation in Telecom: Ensuring Accountability and Public Trust (India AI Impact Summit 2026)

Session at a glance: summary, keypoints, and speakers overview

Summary

The panel examined how AI-driven operations can build customer trust in telecom services, emphasizing that while users may not interact directly with AI models, the outcomes of decisions such as outage management and grievance handling affect them and require clear, proactive communication[13-15]. Speakers highlighted the need for a “human-in-the-loop” approach to prevent autonomous systems from making unchecked decisions and to balance spam and fraud reduction with privacy and regulatory compliance[19-21]. Julian Gorman described the “scam economy” in which regulators lag behind technically sophisticated, well-funded scammers and explained that GSMA has created a Cross-Sector Anti-Scam Task Force involving 39 organisations from 17 countries to coordinate industry-wide initiatives[26-34][35-40]. He argued that innovation must be outcome-focused rather than rule-based, and that global collaboration, especially for 5G deployment and cross-border data sharing, is essential for India to assume a leadership role in the telecom ecosystem[44-53].


Dr Rajkumar Upadhyay presented CDOT’s AI portfolio, noting that the “Fraud Pro” system de-duplicates SIM registrations by matching images and demographics, has disconnected 70 lakh fraudulent connections, and is being extended to other government databases[65-71][101-103]. He also cited AI applications that identified dead bodies in a train accident, provided a financial-risk indicator used by banks, and powered the crowdsourced “Chakshu” platform and the widely downloaded “Sanchar Saathi” app for real-time call blocking[76-84][86-92]. In disaster management, Upadhyay’s AI-enabled platform aggregates alerts from IMD, CWC and other agencies, uses geo-targeted cell-broadcast messages, and has reduced cyclone-related deaths in Odisha to zero, a solution now being promoted internationally[250-266][271-276].


Mathan Babu Kasilingam outlined the service-provider perspective, stating that AI adoption is guided by pillars of reasonability, reliability, trust and privacy, and that his organization is ISO 27701 certified and has implemented privacy-by-design for five years[111-119]. He described the “quick-win” strategy of deploying AI in isolated functions such as fraud detection and network self-healing, but warned that siloed data repositories and massive GPU infrastructure create inefficiencies that are now being addressed through a unified AI platform and central LLMs[134-152][162-170]. Kasilingam emphasized that consolidating data enables easier compliance with DPDP regulations and reduces duplication, while also supporting the scaling and refinement of AI models across the telecom sector[184-188].


Syed Abbas introduced a voluntary AI-incident reporting schema that classifies incidents by type, severity and affected subsystem, providing a standardized database that can help service providers analyse failures and assist regulators in shaping AI policy[193-199][201-203]. He argued that such a framework, though not mandatory, offers value by creating a common taxonomy for incident data, facilitating cross-industry learning and enabling more effective mitigation of AI-related errors[206-210][218-219]. The discussion concluded with consensus that responsible AI, supported by collaborative standards, data-sharing mechanisms and regulatory sandboxes, is crucial for safeguarding customers and advancing the digital ecosystem, a view echoed in the moderator’s closing remarks[328-330].


Keypoints


Major discussion points


AI as a tool for building customer trust in telecom – The panel stressed that AI-driven services (outage management, fraud and spam prevention, grievance handling) can improve efficiency, but the responsibility for decision integrity remains with the service provider, requiring clear communication and human-in-the-loop controls [13-21]. Concrete implementations were highlighted, such as CDOT’s “Fraud Pro” system that identified and disconnected fraudulent SIM connections, and the Sanchar Saathi app that enabled users to discover and block unauthorized numbers, resulting in 70 lakh connections being disconnected [65-71][101-103].


Cross-sector collaboration and data sharing to combat scams – Julian Gorman described the GSMA’s Cross-Sector Anti-Scam Task Force, which brings together more than 39 organisations from 17 countries (including Meta, Google, TikTok, AWS) to share case studies and develop industry-wide anti-scam strategies [26-35][36-40]. He later emphasized the need for standardized APIs and privacy-enhanced data sharing across operators, ecosystems and regulators to create “four pillars” of scam mitigation [282-298][306-307].


Emerging voluntary standards for AI-incident reporting – Syed Tausif Abbas presented the TEC AI-incident schema and taxonomy, outlining fields for incident description, severity, affected components and mitigation steps. Although adoption is voluntary, the standard aims to give service providers a structured way to record and analyse unintended AI outcomes, helping both operators and regulators refine AI models [193-199][201-203][210-219].


Operational challenges of AI adoption: data silos, infrastructure cost, and privacy – Mathan Babu Kasilingam highlighted that early AI pilots often created isolated data repositories and required massive GPU-heavy infrastructure, driving up costs (≈ 80-90 % of AI spend on compute) [149-166][225-230]. He advocated for a consolidated AI platform with centralized data, privacy-by-design certification (ISO 27701) and reusable LLMs to reduce duplication, improve security and lower total cost of ownership [111-119][164-170][184-187].


AI-enabled disaster management and public-safety services – Rajkumar Upadhyay described a unified AI-driven platform that ingests alerts from IMD, CWC, FRI, etc., fuses them, and issues geo-targeted cell-broadcast warnings (e.g., for Cyclone Montha). The system has reduced cyclone-related fatalities in Odisha from thousands to zero and is being promoted internationally as a UN-endorsed early-warning solution [241-276].
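The multi-agency alert pipeline described in the last point can be sketched as two steps: fuse per-agency alerts into one alert per region, then fan the result out only to cell areas inside affected regions. This is a minimal illustration; the dict keys, severity scale and region naming are assumptions, not the platform's actual data model.

```python
def fuse_alerts(alerts: list[dict]) -> dict[str, dict]:
    """Merge per-agency alerts (IMD, CWC, ...) into one alert per region,
    keeping the highest severity seen. Alert records use illustrative
    keys: {"agency", "region", "severity", "text"}.
    """
    fused: dict[str, dict] = {}
    for a in alerts:
        region = a["region"]
        if region not in fused or a["severity"] > fused[region]["severity"]:
            fused[region] = a
    return fused

def cell_broadcast_messages(fused: dict[str, dict],
                            cells_by_region: dict[str, list[str]]) -> list[tuple[str, str]]:
    """Target only the cell areas in affected regions, as geo-targeted
    cell broadcast does, instead of messaging every subscriber."""
    msgs = []
    for region, alert in fused.items():
        for cell in cells_by_region.get(region, []):
            msgs.append((cell, alert["text"]))
    return msgs
```

Keeping fusion separate from delivery mirrors the platform's described design: new alert sources can be ingested without touching the cell-broadcast fan-out.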


Overall purpose / goal of the discussion


The session aimed to explore how AI can be responsibly deployed in telecom operations to strengthen customer trust, enhance fraud-prevention and disaster-response capabilities, and establish industry-wide best practices, including standards and collaborative frameworks, while balancing privacy, regulatory compliance, and cost considerations.


Tone of the discussion


The conversation maintained a professional and collaborative tone throughout, with panelists emphasizing optimism about AI’s benefits and the importance of joint innovation. When addressing scams and regulatory constraints, the tone became more urgent and cautionary, underscoring the need for swift, coordinated action. Overall, the dialogue remained constructive, focusing on solutions and shared responsibility rather than conflict.


Speakers

Moderator


Role/Title: Technology Security and Data Privacy Officer, Vodafone India Limited


Area of Expertise: Cyber security, data privacy, governance


Dr. M P Tangirala


Role/Title: Chair / Session Moderator (Panel on Building Customer Trust Through AI‑Driven Operations)


Area of Expertise: AI applications in telecom, customer trust, data privacy


Mr. Julian Gorman


Role/Title: Head of APAC, GSMA [S6][S7]


Area of Expertise: Telecom industry collaboration, anti‑scam initiatives, regulatory policy, AI‑driven solutions


Dr. Rajkumar Upadhyay


Role/Title: CEO, Centre for Development of Telematics (CDOT) [S3][S4]


Area of Expertise: Telecom AI solutions, fraud detection, disaster‑management systems, quantum‑AI


Syed Tausif Abbas


Role/Title: Senior Deputy Director General (DDG) and Head, Telecom Engineering Centre (TEC); also holding additional charge as CMD, TCIL [S1][S2]


Area of Expertise: Telecom standards, AI incident‑reporting schema, regulatory frameworks


Mathan Babu Kasilingam


Role/Title: Senior executive, telecom service provider (representing service providers on the panel)


Area of Expertise: AI adoption in telecom operations, privacy‑by‑design, fraud & cyber‑security, AI infrastructure consolidation


Anil Kumar Jha


Role/Title: Principal Advisor, Telecom Regulatory Authority of India (TRAI) [S8]


Area of Expertise: Telecom regulation, policy alignment, anti‑fraud strategies


Additional speakers:


– None identified beyond the listed participants.


Full session report: comprehensive analysis and detailed insights

The session opened with the moderator introducing the panel’s senior experts – the Technology Security and Data Privacy Officer of Vodafone India and the Senior DDG and Head of TEC, Mr S T Abbas – before handing over to Dr M P Tangirala to launch the discussion on “Building Customer Trust Through AI-Driven Operations” [1-8]. Tangirala framed trust as the cornerstone of any AI deployment in telecoms, noting that customers rarely interact with AI models directly but experience the outcomes in outage management, service continuity and grievance handling [13-15][15-16]. He emphasized that responsibility for decision integrity remains with the service provider and must be backed by clear, proactive communication, and called for a “human-in-the-loop” to prevent unchecked autonomous decisions [19-21].


Julian Gorman of the GSMA then outlined the newly formed Cross-Sector Anti-Scam Task Force, created about 12 months ago, comprising more than 39 organisations across 17 countries, including Meta, Google, TikTok and AWS, to identify and prioritise anti-scam initiatives [26-35][36-40]. He highlighted GSMA’s proof-of-concept data-sharing work with Virginia Tech and a “foundry” pilot that demonstrates secure cross-industry data exchange [41-44]. Gorman argued for outcome-focused regulation rather than prescriptive rule-making, proposing regulatory sandboxes that enable privacy-enhanced data sharing. He distilled the anti-scam approach into four pillars – securing the network, exposing risk data through open APIs, offering protective “hard-hat” services to customers, and continuously building customers’ digital skills [298-307].


Dr Rajkumar Upadhyay of CDOT presented several AI-driven use-cases. He described the Fraud Pro platform, which de-duplicates SIM registrations by matching facial images, names, father’s name and other demographics, having disconnected roughly 70 lakh illegal numbers from the network [65-71][101-103]. Complementary tools such as the crowdsourced Chakshu service and the widely downloaded Sanchar Saathi app use fuzzy-logic AI to let users discover and block unauthorised numbers [86-92][96-100]. Upadhyay also noted that CDOT now offers a fully AI-based cyber-security solution to defend against AI-powered attacks [215-220]. In disaster management, a unified AI platform ingests alerts from the India Meteorological Department, the Central Water Commission and other agencies, fuses the data and issues geo-targeted cell-broadcast warnings, reducing cyclone-related fatalities in Odisha from thousands to zero and earning ITU recognition as a model for UN-endorsed early-warning solutions [250-276]. He added that AI predicts likely-to-fail routers and nodes in BharatNet 1 & 2, enabling proactive network-failure mitigation [240-245]. Additional AI applications cited included identifying dead bodies in the Balasore train accident and a financial-risk indicator now mandated by the RBI to stop high-risk money transfers [76-84].
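The de-duplication idea behind Fraud Pro can be illustrated with a toy fuzzy matcher: compare registration records pairwise and flag pairs whose demographics look like the same person. This is only a sketch; the real system matches facial images alongside demographics, whereas here a string-similarity score (the standard-library `SequenceMatcher`) stands in for those matchers, and the record keys and 0.85 threshold are assumptions.

```python
from difflib import SequenceMatcher
from itertools import combinations

def similar(a: str, b: str, threshold: float = 0.85) -> bool:
    """Fuzzy string match, standing in for the face/demographic
    matchers the production system would use."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio() >= threshold

def flag_duplicates(registrations: list[dict]) -> list[tuple[str, str]]:
    """Pair up SIM registrations that appear to belong to one person.
    Records use illustrative keys: {"sim", "name", "father"}.
    """
    flagged = []
    for r1, r2 in combinations(registrations, 2):
        if similar(r1["name"], r2["name"]) and similar(r1["father"], r2["father"]):
            flagged.append((r1["sim"], r2["sim"]))
    return flagged
```

Pairwise comparison is quadratic and would not scale to a national subscriber base; a production system would first block candidates (for example by phonetic name keys or face embeddings) before scoring pairs.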


Collectively, the speakers affirmed that AI is a powerful tool for large-scale fraud and scam mitigation, provided responsible oversight and collaborative data sharing are in place [65-71][139-140][298-307].


Mathan Babu Kasilingam described his organisation’s AI adoption framework built on the pillars of reasonability, reliability, trust and privacy[111-119]. The company has been ISO 27701 (PIMS) certified for five years, embodying a privacy-by-design approach [111-119]. Early “quick-win” pilots delivered fraud detection and self-healing network functions but created fragmented data silos [148-166] and drove 80-90 % of AI spend to compute-heavy GPU workloads [225-230]. To overcome these inefficiencies, Kasilingam outlined a move toward a single, secure AI data repository and a central large-language-model (LLM) platform accessible via enterprise APIs, simplifying DPDP (Digital Personal Data Protection) compliance and reducing duplication [162-170][184-188].


Syed Tausif Abbas introduced a voluntary AI-incident reporting schema developed by TEC, defining thirty fields covering incident description, taxonomy, severity, affected subsystem and mitigation steps, enabling operators to log unintended AI outcomes in a standardised database [193-199]. Although adoption is not mandatory, Abbas argued that the common taxonomy will help service providers analyse failures, refine models and give regulators a clearer evidence base for AI policy [201-203][210-219].
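A record conforming to such a schema might be modelled as below. The field names, the severity scale and the `to_record` flattening are illustrative assumptions: the actual TEC standard defines thirty fields, and this sketch shows only the handful named in the session.

```python
from dataclasses import dataclass, asdict
from enum import Enum

class Severity(Enum):
    LOW = "low"
    MEDIUM = "medium"
    HIGH = "high"
    CRITICAL = "critical"

@dataclass
class AIIncidentReport:
    """A subset of the incident-report fields mentioned in the session;
    names and types are illustrative, not the TEC standard itself."""
    incident_id: str
    description: str           # what unintended AI outcome was observed
    taxonomy: str              # incident type, e.g. "false-positive"
    severity: Severity
    affected_subsystem: str    # e.g. "spam-filter", "network-self-healing"
    mitigation_steps: list[str]

    def to_record(self) -> dict:
        """Flatten into a database-ready dict with a plain severity string."""
        rec = asdict(self)
        rec["severity"] = self.severity.value
        return rec
```

The value of a shared taxonomy is that every operator logs the same fields the same way, so incidents become comparable across providers and usable as an evidence base for regulators.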


During the Q&A, Anil Kumar Jha asked the panel to suggest “two steps globally and two steps for India”. Gorman responded that (a) the industry must prove that cross-border data sharing can be done securely, and (b) the global community should act collectively through GSMA’s “United Against Scams” programme [350-357][358-363].


The moderator concluded the session with final remarks about distributing mementos, taking a group photograph and the audience’s applause, signalling a collective commitment to advance AI-driven telecom operations within a secure, transparent and collaborative framework [380-388].


Key take-aways


– Trust-by-design and outcome-focused regulation are essential; ISO 27701 (PIMS) certification, a voluntary incident-reporting schema, and regulatory sandboxes were highlighted as practical pathways [111-119][193-199][298-307].


– AI delivers measurable impact in fraud prevention (≈ 70 lakh illegal connections removed) and disaster response (zero cyclone fatalities in Odisha) [101-103][250-276].


– Collaborative data sharing through open-gateway APIs, cross-sector task forces and proof-of-concept pilots is critical for scaling anti-scam solutions [26-35][36-40][41-44][298-307].


– A unified AI repository and central LLM platform, coupled with DPDP-compliant practices, address data-silo challenges and reduce compute costs [148-166][225-230][162-170][184-188].


Action items


– GSMA to continue proof-of-concept data-sharing pilots and develop privacy-enhanced sandbox frameworks [380-388].


– CDOT to promote Fraud Pro, Chakshu, Sanchar Sati and the disaster-alert platform for international adoption [380-388].


– Service providers to consolidate fragmented data into a single AI repository and deploy a central LLM infrastructure [380-388].


– TEC to publicise the voluntary AI-incident reporting standard and encourage uptake [380-388].


– Regulators to maintain ISO 27701 certification while enabling open-gateway APIs for ecosystem partners [380-388].


Unresolved issues requiring further work include legal mechanisms for secure cross-border data exchange, pathways to make the incident-reporting schema mandatory, funding models for shared AI infrastructure, and the development of quantitative metrics to assess AI’s impact on trust and fraud reduction [291-298][310-319][322-326]. Suggested compromises involve offering the incident-reporting schema on a voluntary basis with regulatory incentives, combining quick-win siloed pilots with a longer-term unified platform strategy, and pairing privacy-by-design certifications with standardised, privacy-enhanced APIs to balance global data sharing with user protection [291-298][310-319][322-326].


Overall, the discussion demonstrated strong consensus on the strategic importance of AI for telecom trust, while highlighting moderate disagreements on data-sharing scope, regulatory approaches and the extent of human oversight, points that will shape coordinated policy development moving forward.


Session transcriptComplete transcript of the session
Moderator

Technology Security and Data Privacy Officer at Vodafone India Limited, with over 20 years of experience in the cyber security domain and governance structures. Rounding off the panel, we welcome Mr. S. T. Abbas, Senior DDG and Head TEC, also holding additional charge as CMD TCIL, with over 35 years of experience in telecom standards, certifications, spectrum management and network regulation. I would request all the panelists to please come forward for a quick photograph. Thank you, sirs. Please take your seats. Let’s engage deeply on how to balance innovation with privacy and trust. I now hand over to Dr. Tangirala Ji to begin the session. Thank you.

Dr. M P Tangirala

Chairman, member, Mr. Mitter, distinguished delegates, my fellow panelists, I welcome everyone to this second session. The clock is already ticking, so I will be brief in my opening remarks, because I come between the audience and the distinguished panelists, which I don’t intend to do. The session title is Building Customer Trust Through AI-Driven Operations. The importance of trust was highlighted, among others, by Mr. Shantigram Jagannath as well, when he was speaking about AI through telecom networks and the at-scale problems that we could try and solve. Now, while customers may not interact with AI models directly, they are affected by the outcomes of the decisions. And therefore, whether it’s outage management, service continuity or grievance handling, while efficiencies may improve, the responsibility for decision integrity ultimately remains with the telecom service providers.

And clear and proactive communication with the customers would become very important. And that is where there are impactful applications of AI in telecoms, in spam and fraud prevention, which a person had mentioned in his opening remarks, about how 2.1 million numbers were disconnected using AI-based tracking. But the challenge is also that we need to reduce this spam while minimizing false positives, avoiding customer inconvenience, and fully respecting privacy and regulatory requirements. So that is always a big concern. Then, of course, there is this whole issue of the human in the loop, or human in the mix. We need this automation to have an element of human control, so that the system does not run away with its own decisions.

So, for all these issues and more, we have eminent speakers here, both from the service providers, from the R&D side, as well as from the standard-setting body of DOT. I will request each of them to give their thoughts, and then maybe a few of you… Both of them have presentations to make. I’ll request them to keep it to about five minutes or so, so that we have time for further discussion. Thank you.

Mr. Julian Gorman

And the reason for it is that in the scam economy, regulation cannot move as fast as scammers. Scammers are not bound by geography. They’re not bound by laws. They’re very technically capable and they’re very well funded. They have all the things that mobile operators would like to have. I think it’s important to understand that we have to focus on stimulating innovation. At GSMA, about 12 months ago, we formed a coalition called the Cross-Sector Anti-Scam Task Force. It involves more than 39 organisations from 17 countries, including the social media platforms, so Meta, Google, TikTok, AWS. And the aim was to identify and prioritise initiatives and activities that we could do as an industry to help combat scam. Now, one of those activities was: let’s gather what the industry is doing.

Now, in just the last couple of months, across Asia Pacific, we’ve gathered case studies of more than 40 instances where operators, without regulation, have developed, implemented, and used successfully some sort of strategy or service to combat scam. And I think that’s an indication, along with GSMA working globally with people like Virginia Tech and with our foundry, with our proof of concept around data sharing, that the industry is focused on this. And the danger, of course, of implementing service-based rules is that they restrict innovation in the future. And so we really need to focus on outcomes when it comes to regulation. And I think we all universally subscribe to the fact that we need to combat scam. We need to work together.

And it’s not just the people in this room. We need to collaborate and work across the ecosystem. to make that possible. I think those principles actually also apply in the broader sort of sense of the term is how do we grow 5G, how do we make 5G meaningful to the whole economy, to all users. It’s about stimulating that ecosystem and making sure that they are using 5G and 4G and mobile broadband into meaningful solutions for the population. And the important thing also for India is India is rising not just economically but also in its position in the telecom world and the GSMA sort of global ecosystem is India is a real telecom superpower and it’s on the rise.

And that means actually it cannot just be worried about its domestic situation. actually it has to embrace that statesman role to be a global leader. And so actually considering cross -border, how does India play its role in a global ecosystem are critical to actually the sustainability and growth of the global ecosystem of which India’s vision is dependent on. It cannot exist alone. And I think it’s important that when we focus on innovation and solving things like scam, it is as part of a global community. It’s not just a national community. And so the actions we take, the innovations we look to stimulate have to be part of that global solution. Thank you.

Dr. M P Tangirala

That was thought -provoking, some of the things that you said about collaborative innovation or innovation through collaboration. We will come to that in a bit when we go for the questions. So, with that now may I request Dr. Rajkumar Upadhyay, CEO of CDOT for his presentation and opening remarks.

Dr. Rajkumar Upadhyay

Respected Chairman, Mr. Lahoti, Mr. Mittal, Mr. Tangirala, fellow panelists, industry leaders, policy makers, experts, ladies and gentlemen, thank you for inviting me here. I think in the previous session there was talk about how you optimize your network, how you self-heal your network, how you make corrections in the network, so I’m not going to talk about that. Even though we also, as India, have developed our own 4G and 5G, and because we were the latecomers we used quite a bit of AI in terms of predicting faults, since a lot of logs are generated by various systems, I’m not going to talk about that either. Where is the PPT?

Where is the PPT? So I’m going to talk about some use cases which we have developed during the last few years. We are CDOT. We were established in 1984, and we had the legacy of developing rural telecommunication. We work primarily in three to four areas: mobile wireless; cyber security and information security, which is done through quantum (quantum and AI are horizontal areas); and advanced telecom applications. These are our product lines, and all of these products actually use AI, because AI is so pervasive; without AI, you cannot function. So all these product lines, whether it is mobile, cyber security, information security or disaster management applications, are using AI in a big way.

So one of the key products we have developed is Fraud Pro. What it does is detect the fraudulent connections in the system. I think you may be aware of the cases of Jamtara, Mewat and all these SIM factories running. These SIM factories were destroyed by this particular software. What it does is group all the images of the same person, because people would go and buy 500 SIMs using the same Aadhaar card or the same driving license; this is what was happening. So it detects that, and it not only matches the images, it also matches the demographics, name, father’s name, and checks whether the photos are the same but the names are different. I will come to the number.
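The grouping-and-mismatch check just described can be sketched in miniature. The real Fraud Pro system also matches facial images, which is omitted here; the per-ID SIM limit, record format and function names below are illustrative assumptions, not CDOT's actual parameters.

```python
from collections import defaultdict

# Hypothetical record format: (subscriber_name, id_document_number) per SIM
# registration. The real system matches facial images too; this sketch only
# illustrates the demographic-grouping step.

SIM_LIMIT = 9  # illustrative per-ID limit, not an official figure

def flag_suspect_ids(registrations, limit=SIM_LIMIT):
    """Group registrations by ID document and flag groups that exceed
    the limit or contain conflicting subscriber names (same ID/photo,
    different names)."""
    groups = defaultdict(list)
    for name, id_number in registrations:
        groups[id_number].append(name.strip().lower())
    suspects = {}
    for id_number, names in groups.items():
        too_many = len(names) > limit
        name_conflict = len(set(names)) > 1  # same ID, different names
        if too_many or name_conflict:
            suspects[id_number] = {"count": len(names),
                                   "names": sorted(set(names)),
                                   "too_many": too_many,
                                   "name_conflict": name_conflict}
    return suspects
```

A bulk registration of 500 SIMs against one Aadhaar number would land in a single oversized group and be flagged by the `too_many` rule; a name that differs across registrations with the same ID triggers the `name_conflict` rule.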

I think some number was described in the beginning of how many connections were disconnected using this software. So this is deduplication; in fact, we developed it for telecom, and it is now going to be used for driving licenses, passports, income tax and MGNREGA deduplication. I think this slide mentions the AI analysis: 86%, 7 crore mobile numbers. It was very well used even to, you know, identify dead bodies in the Balasore train accident; the first use case of this particular platform was to identify the dead bodies. The second one is the financial risk indicator. I think you would have seen in newspapers that RBI has mandated the banks to use the financial risk indicator.

What it does: if A is transferring money to B, then the credentials of B are checked against the platform we have developed, which we call the digital intelligence platform. The platform returns a rating of whether this is a risky number: medium risk or low risk. If it is a high-risk number, the bank will not let that transaction happen. And it has stopped a lot of fraud cases. All the banks are actually using this FRI, which is able to tell that the B number where the money is going is a dangerous or well-identified fraudulent number, and the money is stopped. The next is Chakshu. Chakshu is again a crowdsourcing platform
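The payee check just described might look roughly like the following sketch. The real Digital Intelligence Platform API is not public, so the function names, thresholds and report counts here are purely illustrative assumptions.

```python
# Hypothetical store: payee mobile number -> number of fraud reports against it.
RISK_DB = {
    "9800000001": 12,
    "9800000002": 1,
}

def risk_of(number, db=RISK_DB):
    """Return 'high', 'medium' or 'low' risk for a payee number
    (thresholds are illustrative, not the platform's actual rules)."""
    reports = db.get(number, 0)
    if reports >= 10:
        return "high"
    if reports >= 3:
        return "medium"
    return "low"

def allow_transfer(payee_number):
    """Bank-side rule: block the transaction when the payee is high risk."""
    return risk_of(payee_number) != "high"
```

The bank queries the indicator before committing the transfer; only the "high" verdict hard-blocks the payment, which mirrors the talk's description of stopping money to well-identified fraudulent numbers.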

wherein if you get a fraudulent call, a promotional call, any kind of fake KYC call, or someone posing as the police, you can report it, and using crowdsourcing we are able to disconnect the number and take action. And this is again using our Sanchar Sati app. Just to bring to the notice of the audience, rarely does a government app have 18 million-plus downloads; this one has 18 million-plus downloads, and the hits on the Sanchar Sati website are 25 crore. Very rarely do you see that. So this shows the popularity of how customers are protected using the AI-based platform. This is again an AI-based platform. The TAFCOP and CEIR services here, I don’t know how many of you have used.

I would request those who have not used it, please use it. In Sanchar Sati, you go, you give your mobile number, and it will tell you all the connections under your name, using fuzzy-logic AI. It doesn’t ask you for any other detail. That number we ask only because we want to verify that it is you, via an OTP; otherwise, no other details are asked. Just from that detail, we are able to find out how many numbers are there. And just to bring to your notice, 70 lakh connections have been disconnected using this. People have themselves disconnected their numbers, because it also lets you say: this is not my number, disconnect it. This was a big problem for us.

When we blocked the SIMs in the country, these guys went outside, and they started pumping calls into India using Indian numbers as spoofed calls. This technology is available; I can get a call from my own number using it. We were getting 15 million such calls per day. And this was a very complex system, because when the call hits the gateway, the system has to decide within milliseconds whether to let the call go through or block it. The decision has to be made in milliseconds, and it has to be zero error, because no genuine call should be blocked. Because of the rigorous testing that happened with all the operators, today we have totally neutralized this. Of course, they have found another way: they have taken SIMs in places like Cambodia, Indonesia and Myanmar and are calling from there. So again, the AI-based system is alerting us that these are the numbers of that country, and we are alerting the governments of those countries.

Then there is the AI-based security solution, because cyber is another major area for all of us. Somebody was mentioning that AI will carry out cyber attacks, and it’s true: we see in our systems AI attacking the systems. Earlier it was humans, and now it is fully AI, so you have to use AI to counter it. So the cyber security solution we provide today is fully AI-based, so that it can coordinate between the various individual solutions.

In disaster management we have used AI. You may be aware that India has deployed an ITU CAP-based disaster management system as well as a 3GPP cell-broadcast-based disaster management system, which is implemented across India. We use AI here because, for example, IMD is giving me a warning on rain, CWC is giving me a warning on flood, a weather report is coming in, so we federate all these inputs using AI; you have less than 2 minutes.

On the NMS side, of course, we use AI to see when the network is likely to go down. This is actually implemented in BharatNet 1 and 2; it tells you that this is likely to fail, this router is misbehaving, or this node is misbehaving. That was my last slide. So in a nutshell, what I am saying is that a lot of AI applications are needed on the customer side to protect the customers, and India has made good progress in terms of reducing the frauds and the fraudulent connections, therefore safeguarding the customers. And we will be very happy to take these technologies to any part of the world, given that they are implemented at India scale. Thank you so much.
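The millisecond allow/block decision at the international gateway, as described in the talk, can be sketched as a simple rule. The roaming check below is an assumption about how genuine calls from Indian subscribers abroad avoid being blocked; it is not a statement of the actual deployed logic.

```python
def should_block_at_gateway(cli, arrived_on_international_trunk, is_roaming_abroad):
    """Sketch of the gateway decision: a call arriving from abroad but
    presenting an Indian CLI (+91...) is treated as spoofed, unless that
    subscriber is genuinely roaming outside India (an assumed safeguard
    against false positives)."""
    is_indian_cli = cli.startswith("+91")
    if arrived_on_international_trunk and is_indian_cli and not is_roaming_abroad:
        return True   # spoofed caller ID: block the call
    return False      # let the call through
```

The zero-error requirement in the talk is why a safeguard like the roaming lookup matters: a purely CLI-based rule would block every Indian subscriber calling home from abroad.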

Dr. M P Tangirala

Thank you, Dr. Upadhyay. That was very interesting: the flavor of the kind of R&D that has been done and the apps that have been developed. We now move to someone on the panel, Mr. Mathan Babu Kasilingam, who is representing the service providers. You carry a lot of the burden of customer expectations on your shoulders, so do tell us about your thoughts on the topic of today’s session. Thank you.

Mathan Babu Kasilingam

So, a few things that we have done as a service provider; the majority of the topics, fraud and cyber security, have been touched upon already. I am trying to speak to the role of AI in establishing trust. Our entire ecosystem, the telecom ecosystem, relies primarily on customer trust. So, to ensure that we give a trusted journey to our customers, in the course of adopting AI there are various core secure pillars that we have followed. Any AI adoption should have reasonability, reliability, trust and privacy built in to deliver that. So as a TSP, when we embarked on the journey of AI, that was the first and foremost core element that we took into consideration.

We are one of the TSPs in the country who have been on the privacy journey for the past five-plus years now. We are completely certified on PIMS, ISO 27701. We are the only TSP in the country who have governed privacy by design and have certified ourselves against that as well. That is to ensure that trust is given back to the customer. Now I will come back to the journey of AI adoption. The first thing that happened is the consumerization of AI. AI has been part and parcel of our lives since all of us learned about Siri and Alexa; day to day, at home, we have been living with AI for many, many years now. So the consumerization of AI happened many, many years back.

What happened in enterprises is that there came the pressure of adopting AI in the enterprise. The first and foremost thing that we did is we took what was applied in consumer AI and tried to adopt it in enterprises as well. Obviously, it has its own benefit, the benefit being that it gives a quick win, right? You get to see an early win by deploying AI in your setup. So how enterprises embarked on that journey is: you pick and choose one department, one function, one key problem that you are faced with, deploy AI in it, and you see results. So we saw all of these examples. Fraud: it’s a serious problem for the entire country as a whole.

What can we do? Can we leverage AI? AI is capable of giving me a million eyes and a million hands in place of a single human operating that, right? So the power of AI came to our aid; today we are able to identify fraud. Sir also briefly touched upon cyber security. As national critical infrastructure, which is what we TSPs are, today we are pressed with a serious volume of attacks. India in the past one year has hosted many mega significant events, whether it is the G20 or the Mahakumbh; then there were the geopolitical tensions that we went through, and now we are hosting the AI summit. So national critical infrastructures like TSPs are also faced with an increased volume of cyber attacks; if I were to quantify the increase, it would not be 10 times, it would be many multiples of that. That is the quantum of increase in cyber attacks that we are seeing now. In the cyber field we are also limited in the number of professionals we have.

So the power of AI is not just for the attackers; as defenders as well, we have started leveraging it. How can we leverage the power of AI to combat them? So those are quick wins, right? Network operations: with the advent of 5G we wanted self-operating, self-healing networks. So in various smaller areas where AI could be embedded to realize a very quick business value, enterprises started adopting it. That’s the first part: we wanted the quick win, and we saw the quick win. The challenge that came with that is that we started seeing things in a piecemeal approach. The data that we were working upon was almost the same: you gather this intelligence information from the same network elements and nodes.

But we started to look at it through different lenses. All that I need to do is to look through a different lens, but instead I started creating individual siloed repositories of data. So if you look at corporates today that have embarked on AI, you will see many isolated silos of data created for them, because each team wants its own lens, and instead of just viewing the data through that lens, they created a totally isolated copy of the data. The second thing that happened is the mammoth amount of infrastructure: anybody who touches AI today talks about GPUs, and the humongous power that is required to run them, etc. So again at the enterprise level, it is siloed data and siloed infrastructure that has been built up.

So the journey that we are today in is we had the quick wins, we have taken the first few steps, but we are re -looking at from a different standpoint as we see currently. So we have stepped back. Is there data deduplication that can be done today? In lieu of 20, 30 silos that I have created, do I want to create one single repository of this data? Thereby the secure element also becomes easier. If in silo I have to secure everywhere, bring them in one area, I have the ability to secure them well. Can I leverage a common platform infrastructure, which is the AI infrastructure that is required to put the data and then do these work?

We are doing that. So while you can still leverage comprehensive LLMs, individual businesses in a variety of functions have taken their own purpose-built LLMs, right? Because you will have an HR function; the provider for HR, say SAP, would be primarily driven for HR, and surrounding systems which talk AI would be built on top of it. There will be a self-healing network; the network provider builds an AI-driven system. So there we are now stepping back to see: can we build a comprehensive central LLM which will still deliver the purposes each is looking for? So at Vi, the premise is: core infrastructure, put data in one comprehensive repository, and expose it through an interconnected enterprise API architecture, so that businesses and users do not have to talk to the data directly.

They talk through the enterprise model, touch the AI infrastructure, and reach back to the data for various reasons. It could be to service my service provider; it could be to service my customers; it could be my customer support, bridging them. That’s the platform journey that we are on. With this consolidation, like I told you, privacy is by design; we are able to make DPDP compliance inclusive, which means minimizing the data. With the data in one area, we are able to minimize it as appropriate. That’s what I wanted to share. Thank you.
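The enterprise-API pattern described here, where business functions never query the central repository directly and receive only minimised data, might be sketched as follows. The field names, function names and entitlement table are illustrative assumptions, not Vodafone Idea's actual architecture.

```python
# A single central repository, fronted by an API layer that returns only
# the minimal fields each business function is entitled to see
# (a DPDP-style data-minimisation sketch; all names are hypothetical).

CENTRAL_REPO = {
    "cust-42": {"name": "A. Subscriber", "msisdn": "98XXXXXX01",
                "plan": "5G-Unlimited", "aadhaar": "XXXX-1234"},
}

ALLOWED_FIELDS = {           # per-function entitlements
    "customer_support": {"name", "msisdn", "plan"},
    "network_ops": {"msisdn", "plan"},
}

def enterprise_api(function, customer_id):
    """Return only the fields the calling function is entitled to;
    unknown callers get nothing."""
    record = CENTRAL_REPO[customer_id]
    allowed = ALLOWED_FIELDS.get(function, set())
    return {k: v for k, v in record.items() if k in allowed}
```

Because all access flows through one chokepoint, securing and minimising the data happens in one place instead of across twenty or thirty silos, which is the consolidation argument made in the talk.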

Dr. M P Tangirala

Thank you. Fascinating. Now we come to Mr. Abbas, who is the senior DDG from TEC standard setting. He’s promised that he will make a different presentation. So over to you, Mr. Abbas.

Syed Tausif Abbas

The name of the application, what technologies are used, what the purpose is, and so on; then the impact or harm information, what the incident involved, whether physical harm, environmental, property or psychological. So these things also form part of the 30 key fields in which input is to be given for the schema. Then there is some information which is to be masked later on: the name of the submitter, email and other submitter information, which will be redacted later on. Similarly the taxonomy, as I told earlier, will classify the incident into different categories depending upon the incident type, with subcategories such as network disruption, service-quality outage, a security breach or AI mismanagement. Then the affected system: whether the core is affected, whether the radio access network is affected, whether the edge is affected, or IoT components, or physical infrastructure, so which part of the network is affected, or whether any user-facing application is affected. And then the incident severity, whether it is critical, high, moderate or low; that also will be recorded. And the cause of failure, if it is known to the user; otherwise the deployer or the service provider has to enter what the cause of the failure was. So basically this database will give input to the service provider as well, so that they themselves can examine it, analyze it, and then realign their AI-related applications so that these incidents don’t recur in future. So it is a gradual self-improvement of their own AI system, which will then be error-free and give the best output. The standard has been made only for this; it is not going to give any mitigation mechanism or anything like that.

This is to be decided by the deployer who has deployed those AI applications, and it is not mandatory. Just as a beginning: when computer systems were new, initially there was not much in place, but when incidents started, the computer emergency response team (CERT) was proposed, and it started working on collecting data related to computer incidents. So similarly, since AI has already begun, we should have this mechanism in place so that we have the AI incident reporting database available as well. Thank you so much.
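A minimal sketch of an incident record along the lines described above, with a redaction step for submitter details. The field names are assumptions and cover only a handful of the roughly 30 fields in the actual TEC schema.

```python
from dataclasses import dataclass, asdict

# Illustrative subset of the schema's fields; names and allowed values
# here are assumptions, not the official TEC standard.

@dataclass
class AIIncident:
    description: str
    taxonomy: str            # e.g. "service_quality_outage", "security_breach"
    severity: str            # "critical" | "high" | "moderate" | "low"
    affected_subsystem: str  # "core" | "RAN" | "edge" | "IoT" | "application"
    cause: str
    submitter_email: str     # to be redacted before wider sharing

REDACTED_FIELDS = {"submitter_email"}

def public_view(incident):
    """Mask submitter details, as the schema requires, before the record
    is shared beyond the reporting organisation."""
    record = asdict(incident)
    for field in REDACTED_FIELDS:
        record[field] = "[REDACTED]"
    return record
```

Because every operator fills the same structured fields, records from different providers can be pooled and analysed together, which is the benefit to regulators that Abbas describes.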

Dr. M P Tangirala

Thank you so much, Mr. Abbas. Congratulations on presenting what is arguably the world’s first such standard. So, since we are fresh off your presentation, I will start with a question for you about what you have just presented. You said it is not mandatory, it is voluntary; of course, we will see where that journey goes, as you said about CERT coming in after the computers. But can you tell us a little bit more about what value it offers to the telecom service providers if they voluntarily adopt this standard?

Syed Tausif Abbas

Telecom service providers have already started using AI applications in their networks: network optimization, services to the users, orchestration of resources, so many things. So if any incident which gives an unintended outcome is recorded and reported, then it is in the best interest of the service provider that those incidents are analyzed and rectified, so that they do not occur in the future. In this way it can be best utilized by the service provider. And since the structure of the schema and the taxonomy are both given, it will be the same structured compilation of data that every service provider is doing, and that will give benefit to the regulator and policy makers in how to go about AI policy, because of the inputs we get from those incidents.

Dr. M P Tangirala

So therefore, Mr. Kashi Lingam, would you think it offers a voluntary adoption of this standard offers any benefit to you from the side of a service provider?

Mathan Babu Kasilingam

I think, like sir rightly mentioned, incident recording has not been a new phenomenon, at least for people who have been in the IT industry. Recording cyber-specific incidents additionally has also started happening; however, we have tied it back to the same ITIL framework that has historically been followed. So AI is yet another tool which ends up creating an outcome. The outcome could be erroneous; it could be an event or an incident, and bias could be one of the situations arising. So as TSPs, while we have individually started doing this internally as we have adopted the journey of AI, these are recorded events. But one manner in which a framework such as the one TEC has put across helps is that, yes, it can be streamlined so that the rest of the populace, if they have to refer to it, can also refer to it.

Because today there are no standalone companies, right? Every company is in the area of digital and IT; they are only doing work in their own function. If you ask a bank, the bank will tell you it is an IT company in the service of doing banking. That is how it has changed. So IT plays a crucial role, and AI will be a supporting arm in that. So this record keeping will give us the ability to scale our AI and models appropriately. And with India’s ambitions, and the three homegrown, home-developed LLMs already announced, a platform like this will possibly help us manage and then refine our models well.

Dr. M P Tangirala

So you mentioned how enterprises are becoming digital first, and you also spoke in your initial remarks about AI for enterprises. So how do you look at controlling costs? You did deal with the infra part, but how about the overall costs of AI for enterprises? Any thoughts on that?

Mathan Babu Kasilingam

Currently, it is still a significant amount that is being incurred. A larger chunk of the cost optimization comes from the infrastructure as a whole: about 80-90% of the cost of AI goes primarily to the infra itself, both in storage and in compute. The rest obviously goes into skills. So today, while we definitely showcase to the world that we have humongous talent being built in the AI area, for an enterprise to have these skilled engineers to build upon AI is still a work-in-progress area. So I think in this journey, we are now looking at AI to come to the aid of AI.

So we were in conversation with one of the AI-driven companies yesterday, and the way he highlighted it back to us was that earlier the total employee base was 10,000. Now there is a refinement and optimization by incorporating AI, and thereby there is a reduction in the employee base. But if we look at the people who are operating the AI, the number which was 30 has now gone to 3,000. So you cut down here and increase over there. So we were trying to tell them that the true power of AI is actually in making sure that AI itself does not need a human touch. So reducing the human effort by upskilling appropriately is an important element for us to do. Thank you.

Dr. M P Tangirala

I’ll come to you, Dr. Upadhyay. You did, you know, I know I cut you off or sort of gave you a time pause there. Could you tell us a little bit more about what you were doing, what you’re doing with respect to disaster management, the application that you spoke about?

Dr. Rajkumar Upadhyay

Disaster, yeah. So disaster management: as you know, how did it use to happen earlier? Suppose there is a cyclone in Odisha. A mail would go from IMD to the chief secretary. The chief secretary would write to the district collector. The district collector would, in his best way, try to send word of where the cyclone was expected to come. And we used to have thousands of lives lost, and property lost. Today, using AI and the sensors, the system we have built is one unified platform where all the alert-generating agencies, IMD, CWC, FRI, DGSE, so all alert-generating agencies, are connected through APIs, automatically. All the telecom operators are connected. All the alert dissemination agencies, like the SDMAs in the states, are connected.

So it is all one powerful system. Now an alarm comes, a sensor alarm, that a cyclone or rain is likely. This is automatically read by the system. It prepares the message and finds out what the geo-targeted area is. Because earlier the problem was that these kinds of alerts would go out but nothing would happen, so next time people would take them very casually. But today it is a geo-targeted system. It will alert only the people who are in that belt. Suppose there is a cyclone hitting Gopalpur in Odisha: it will alert only the people who are likely to be affected, well in advance. And it will also tell you whether you need to evacuate.

If you need to evacuate, it tells you what arrangements the government has made, or whether you need to stay indoors. All of that happens, and it was actually presented in Parliament. Taking the case of Odisha, where thousands of people died in 1999, the death toll is now zero. What happened after that is this: India is a large country, and in some cases a very large population has to be alerted. In such cases SMS gets delayed, because SMS is a sequential process, sent out by SMSCs. There is a technology called cell broadcast where you don't send individual messages through SMS; you simply broadcast. So we developed a Cell Broadcast system, and it was recently used in Cyclone Montha. And how do we use AI?

Because now I am getting inputs from various agencies, my system federates all this information using AI, builds one particular message, finds out the right area where it is likely to hit, and sends it only to those people. And the beauty of this system is that earlier there was a system of group SMS, which would find the people who were registered as staying there. Now, even if you are a foreigner who happens to be at that particular place at that time, it will pick your number and give you the message. A tsunami is coming, and we don't know who may be here and there at the beach. So this has worked very well, and in fact we have published a paper with the ITU, which has taken it up as a report. Going forward, we feel this system will meet the UN's requirement of early warning for all by 2027. We are already talking to many countries, and soon this solution will be deployed in a few of them. Thank you.

Thank you.

Dr. M P Tangirala

In fact, in your presentation you also spoke about Fraud Pro and so on. But in the interest of time, I'll move to Mr. Gorman on fraud and scams. In your opening remarks you talked about the importance of collaboration across sectors, and also the opportunities for engendering innovation through collaboration to control or combat scams. Could you elaborate a bit on that?

Mr. Julian Gorman

Sure. Thanks for your question. I think this builds on the last couple of comments. What we are talking about here is sharing data between multiple parties through standardized interfaces, and then using AI or similar tools to produce a good outcome. All of these things are innovations; they are on the leading edge of something. If I start with the first part, data sharing through standardized interfaces: GSMA has the Open Gateway APIs programme, and that is contributing data points which can be used in assessing risk for transactions. There is other data that could be shared to help address scams earlier in the cycle. For example, there are lots of other data points, and that is the proof of concept GSMA is working on in Southeast Asia: sharing data.

The challenge with doing that is that you are at the borders of regulatory compliance. You are talking about private or personal information, or maybe not; there is sometimes debate. But to be effective, you are talking about being able to measure the risk of a particular individual user by sharing information across multiple parties. That requires some regulatory support, sandboxes or other activities, to develop the innovation that finds solutions to combat scams. One of the things we need to focus on in industry is how to create a nurturing environment that permits exploration of data sharing in a privacy-enhanced way. There are lots of promising new technologies that deliver the impact while complying with the regulations and maintaining the privacy we want.

But ultimately, from a mobile operator point of view, I would say there are four pillars in combating scams. The first is the network: making sure the network cannot be manipulated in favor of the scammers, through CLI spoofing and all that sort of stuff; let's cut that out. If you introduce AI, there are other things you can do on top. The second is what mobile operators can expose to the ecosystem so that the ecosystem can measure and respond to risk: Open Gateway APIs is one thing, the POC I talked about before is another, and there may be others. The third is what mobile operators can provide as services to their customers. In the same way that in the physical environment you can provide hard hats, there are things you can offer customers, which they can choose to acquire and use of their own accord to protect themselves online. And the fourth is digital skills. Historically we have considered digital skills a destination; in actual fact we now know we are never going to hit that final point, because the required skills will continue to adapt. It is critical that we focus on all four pillars, and that from a regulatory and ecosystem point of view we collaborate so that data can flow, so that we can try and test things, and so that we overcome the prejudice that may be stopping innovation because of an expectation that you cannot do these things. It requires policy makers and regulators to sponsor and nurture this. I can guarantee, working with 90% of mobile operators in Asia Pacific, that if I start a sentence with "I want to suggest we use consumer data for...", I will not get past halfway through before they say, "Nope, you can't do that." But in actual fact, if we want to be successful, no single entity, and especially no single mobile operator, has all the information.

I mean, if a mobile operator arbitrarily starts turning off SIM cards because they think some traffic looks a bit dubious, you've only got to look at the Optus outage in Australia, where three or four people died because they couldn't call emergency services. You don't want to be taking that action. It requires collaboration, regulatory support and policy support.

Dr. M P Tangirala

Yeah, thank you. Thank you. Network, ecosystem, hard hats and upskilling. I think that’s a good way to end the discussion here on the panel. But we have time for one question. Yes, Mr. Jha. We have less than two minutes.

Anil Kumar Jha

Thank you. Very quick, very brief. A question for Mr. Julian Gorman. As we have said, we are under attack, and we may be attacked at any time. We have also said that we should align with global trends in order to combat these frauds and related threats. You have heard our panelists, who are icons in their fields of manufacturing, standardization and PSPs. Could you suggest two steps that global leaders should take to align the world with themselves, and two steps that India should take to align with the world? Thank you.

Mr. Julian Gorman

Two steps globally. The proof of concept we are trying to do in Southeast Asia is actually to prove that data can be shared, domestically but also across borders, in a safe and secure way, and that it has an impact on controlling scams. One thing we need to remember with scams is that all we are doing by taking action against them is increasing the cost of the scammers' business case; if we increase the cost here, then another area becomes more favorable, whether that is different types of scams or different locations. And that leads to what we need to do globally: we need to act across borders, as a collective global community.

GSMA has a program called United Against Scams; there will be a lot about that in Barcelona. India is obviously taking great action, or at least significant steps, domestically. Sharing that knowledge across borders, and being able to share that data across borders, is important. So I would leave it at those two points.

Dr. M P Tangirala

Thank you. It also gives us pause for thought: maybe as regulators we also need to look at collaborative efforts across regulators, because there are, again, sectoral issues that we need to address. With that, we are now at the end of the session. I would request the audience to give a big round of applause to my panelists, who have given us very good insight into the topic at hand. Thank you so much.

Moderator

Thank you, moderator sir, and all our distinguished panelists for such a vibrant discussion on the usage of responsible AI, the standards, the repository, and the various government apps for enhancing consumer experience. Your insights will greatly benefit the overall digital ecosystem. Now I would request Dr. M.P. Tangirala to present mementos to our distinguished speakers as a token of appreciation. First to Mr. Julian Gorman. To Dr. Rajkumar Upadhyay. To Mr. Mathan Babu. To Mr. S.T. Abbas. Now I invite Sri A.K. Jha, Principal Advisor, TRAI, to present a memento to the moderator of this session, Dr. M.P. Tangirala, as a token of appreciation for moderating such a productive session. Thank you so much, sir. Now I take this opportunity to invite all the speakers for a group photograph. I once again request Chairman sir, M.P. Tangirala, Secretary sir, and all the Principal Advisors to please join the session speakers of this panel for a group photograph. Please give a huge round of applause to all the panelists for joining us. Thank you.

Related Resources: Knowledge base sources related to the discussion topics (9)
Factual Notes: Claims verified against the Diplo knowledge base (6)
Confirmed (high)

“The panel included the Technology Security and Data Privacy Officer of Vodafone India and Senior DDG and Head of TEC, Mr S T Abbas.”

The knowledge base lists both the Technology Security and Data Privacy Officer at Vodafone India and S.T. Abbas as Senior DDG and Head TEC, confirming their presence on the panel [S2] and [S47].

Confirmed (medium)

“Tangirala said customers rarely interact with AI models directly but experience outcomes in outage management, service continuity and grievance handling.”

S1 explicitly mentions that AI automation impacts outage management and service continuity, supporting Tangirala’s framing of customer experience [S1].

Confirmed (medium)

“Tangirala called for a human‑in‑the‑loop to ensure decision integrity and prevent unchecked autonomous decisions.”

S1 quotes the need for human control in automated systems, and S19 discusses the challenges and importance of maintaining a human-in-the-loop role, confirming the emphasis on human oversight [S1] and [S19].

Additional Context (medium)

“GSMA proposes regulatory sandboxes that enable privacy‑enhanced data sharing.”

S54 describes sandboxes for data governance and notes META’s support for a harmonised, privacy-enhanced approach, providing additional context to GSMA’s sandbox proposal [S54].

Confirmed (medium)

“The anti‑scam approach is built on four pillars: securing the network, exposing risk data via open APIs, providing hard‑hat services to customers, and continuously upskilling digital skills.”

S79 enumerates the four pillars as network, ecosystem (risk data exposure), hard-hat services, and upskilling, aligning with the report’s description [S79].

Confirmed (high)

“Dr Rajkumar Upadhyay is the CEO of the Centre for Development of Telematics (CDOT) and an expert in telecommunications, quantum communication and cybersecurity.”

S4 identifies Dr Rajkumar Upadhyay as CEO of CDOT and highlights his expertise in telecom, quantum communication, and cybersecurity, confirming the report’s biographical claim [S4].

External Sources (80)
S1
AI Automation in Telecom_ Ensuring Accountability and Public Trust India AI Impact Summit 2026 — Mathan Babu Kasilingam, Syed Tausif Abbas
S2
AI Automation in Telecom_ Ensuring Accountability and Public Trust India AI Impact Summit 2026 — Speakers:Mathan Babu Kasilingam, Syed Tausif Abbas Speakers:Syed Tausif Abbas Speakers:Syed Tausif Abbas, Mathan Babu …
S3
WSIS Prizes 2025 Winner’s Ceremony — – **Rajkumar Upadhyay** – Dr., Representative from Centre for Development of Telematics, India India’s AI and Facial Re…
S4
IndoGerman AI Collaboration Driving Economic Development and Soc — -Dr. Rajkumar Upadhyay- CEO of Center for Development of Telematics (CDOT), expert in telecommunications, quantum commun…
S5
Fireside Chat The Future of AI & STEM Education in India — Welcome to the panel, sir. Let me now invite Dr. Raj Kumar, Founding Vice -Chancellor at O .P. Jindal University. Dr. Ra…
S6
AI Automation in Telecom_ Ensuring Accountability and Public Trust India AI Impact Summit 2026 — -Mr. Julian Gorman: Representative from GSMA, expert in telecom industry collaboration and anti-scam initiatives across …
S7
Building Indias Digital and Industrial Future with AI — -Julian Gorman- Head of APAC GSMA
S8
AI Automation in Telecom_ Ensuring Accountability and Public Trust India AI Impact Summit 2026 — -Anil Kumar Jha: Principal Advisor, TRAI (Telecom Regulatory Authority of India)
S9
Open Forum #36 Challenges & Opportunities for a Multilingual Internet — – Anil Kumar-Jain: Chair of USG in ICANN Audience: Anil Kumar-Jain, for the record. I am chair of USG in ICANN. I’m …
S10
Keynote-Olivier Blum — -Moderator: Role/Title: Conference Moderator; Area of Expertise: Not mentioned -Mr. Schneider: Role/Title: Not mentione…
S11
Day 0 Event #250 Building Trust and Combatting Fraud in the Internet Ecosystem — – **Frode Sørensen** – Role/Title: Online moderator, colleague of Johannes Vallesverd, Area of Expertise: Online session…
S12
Conversation: 02 — -Moderator: Role/Title: Event moderator; Area of expertise: Not specified
S13
AI Automation in Telecom_ Ensuring Accountability and Public Trust India AI Impact Summit 2026 — – Dr. M P Tangirala- Mathan Babu Kasilingam – Mathan Babu Kasilingam- Dr. M P Tangirala – Mr. Julian Gorman- Dr. Rajku…
S14
AI Automation in Telecom_ Ensuring Accountability and Public Trust India AI Impact Summit 2026 — Speakers:Dr. M P Tangirala, Mathan Babu Kasilingam Speakers:Dr. Rajkumar Upadhyay, Mathan Babu Kasilingam Speakers:Mat…
S15
AI Automation in Telecom_ Ensuring Accountability and Public Trust India AI Impact Summit 2026 — – Dr. M P Tangirala- Mathan Babu Kasilingam – Mathan Babu Kasilingam- Dr. M P Tangirala – Mr. Julian Gorman- Dr. Rajku…
S16
AI Automation in Telecom_ Ensuring Accountability and Public Trust India AI Impact Summit 2026 — Speakers:Dr. M P Tangirala, Mathan Babu Kasilingam Speakers:Mathan Babu Kasilingam, Dr. M P Tangirala Speakers:Mr. Jul…
S17
Leaders TalkX: Towards a safer connected world: collaborative strategies to strengthen digital trust and cyber resilience — These key comments fundamentally shaped the discussion by elevating it from a technical cybersecurity conversation to a …
S18
Leaders TalkX: Accelerating global access to information and knowledge in the digital era — These key comments fundamentally shaped the discussion by establishing a human rights framework, introducing innovative …
S19
The fading of human agency in automated systems — Crucially, a human presence does not guarantee agency if the system is designed around compliance rather than contestati…
S20
Science AI & Innovation_ India–Japan Collaboration Showcase — yeah i think uh two perspectives uh One is in our solutioning, when we, and I’m going to take a live example, when we ac…
S21
The Agent Universe From Automation to Autonomy — Thank you Prashant. So like I was mentioning in my earlier part of the response best of the best have to be coupled toge…
S22
https://dig.watch/event/india-ai-impact-summit-2026/building-the-future-stpi-global-partnerships-startup-felicitation-2026 — In areas like textiles, pharmaceuticals, etc. The question now is, how do we reliably move from ideas to impact and be m…
S23
WS #255 AI and disinformation: Safeguarding Elections — Babu Ram Aryal: Not really. It’s a similar kind of context in Nepal as well. We had election in 2022, just two years a…
S24
Responsible AI in India Leadership Ethics & Global Impact part1_2 — Vishal Anand Kanvaty from the National Payments Corporation of India (NPCI) offered insights into implementing AI in hig…
S25
The Foundation of AI Democratizing Compute Data Infrastructure — Wonderful question. Thank you. So there’s going to be another AI revolution, right? We’ve seen in recent years the deep …
S26
WS #208 Democratising Access to AI with Open Source LLMs — Melissa Muñoz Suro: So basically, building on what I was mentioning earlier about our national AI strategy back in the D…
S27
India unveils AI incident reporting guidelines for critical infrastructure — India is developing AI incident reporting guidelines for companies, developers, and public institutions to report AI-relat…
S28
US warns of rising senior health fraud as AI lifts scam sophistication — AI-driven fraud schemes are on the rise across the US health system, exposing older adults to increasing financial and per…
S29
Spot the red flags of AI-enabled scams, says California DFPI — The California Department of Financial Protection & Innovation (DFPI) has warned that criminals are weaponising AI to sca…
S30
How Trust and Safety Drive Innovation and Sustainable Growth — No, very much so. I mean, the data protection laws apply across the board wherever technology touches personal data. So …
S31
Foreword — – i. To achieve digital transformation, policy and regulation should be more holistic. Cross-sectoral collaboration alon…
S32
Trusted Connections_ Ethical AI in Telecom & 6G Networks — Shri Anil Kumar Lahoti (TRAI Chairman) This comment shifted the discussion from purely technical considerations to ethi…
S33
Trusted Connections_ Ethical AI in Telecom & 6G Networks — Evidence:In July 2023, TRI issued recommendations on leveraging artificial intelligence and big data in the telecommunic…
S34
High-level AI Standards panel — This comment elevated the discussion from general calls for collaboration to specific requirements for how that collabor…
S35
HETEROGENEOUS COMPUTE FOR DEMOCRATIZING ACCESS TO AI — Minister Babu acknowledges the infrastructure challenges raised by the technical experts and commits to providing the ne…
S36
AI Without the Cost Rethinking Intelligence for a Constrained World — This comment reframes the entire AI infrastructure discussion by suggesting the industry has abandoned fundamental engin…
S37
National Disaster Management Authority — Pankaj Shukla from Google Cloud articulated a comprehensive five-layer AI architecture spanning infrastructure, operatin…
S38
Diplomatic policy analysis — Overreliance on technology:While machine learning and analytics are powerful tools, they are not infallible. Overdepende…
S39
AI Meets Agriculture Building Food Security and Climate Resilien — Disagreement level:Low to moderate disagreement level with significant implications for AI governance in agriculture. Th…
S40
AI Meets Agriculture Building Food Security and Climate Resilien — Low to moderate disagreement level with significant implications for AI governance in agriculture. The differences in ap…
S41
The Future of Public Safety AI-Powered Citizen-Centric Policing in India — Kumar argues that effective AI implementation must find the right balance between automation and human oversight. Comple…
S42
From India to the Global South_ Advancing Social Impact with AI — Low level of disagreement with high convergence on AI’s transformative potential. Differences are primarily tactical rat…
S43
From India to the Global South_ Advancing Social Impact with AI — Disagreement level:Low level of disagreement with high convergence on AI’s transformative potential. Differences are pri…
S44
Partnering on American AI Exports Powering the Future India AI Impact Summit 2026 — Very high level of consensus with no significant disagreements identified. This strong alignment suggests effective coor…
S45
Partnering on American AI Exports Powering the Future India AI Impact Summit 2026 — Consensus level:Very high level of consensus with no significant disagreements identified. This strong alignment suggest…
S46
AI Automation in Telecom_ Ensuring Accountability and Public Trust India AI Impact Summit 2026 — Discussion point:Data sharing for scam prevention vs privacy protection
S47
https://dig.watch/event/india-ai-impact-summit-2026/ai-automation-in-telecom_-ensuring-accountability-and-public-trust-india-ai-impact-summit-2026 — Sure. Thanks. Thanks for your question. I think this builds on actually the last couple of comments. I mean, what we’re …
S48
AI Automation in Telecom_ Ensuring Accountability and Public Trust India AI Impact Summit 2026 — This comment draws a historical parallel between the evolution of computer security and the current state of AI governan…
S49
Evolving AI, evolving governance: from principles to action | IGF 2023 WS #196 — An interesting observation is the need for a balance between voluntary standards and legal frameworks. Complementarity i…
S50
Artificial Intelligence & Emerging Tech — In conclusion, the meeting underscored the importance of AI in societal development and how it can address various chall…
S51
Nri Collaborative Session Data Governance for the Public Good Through Local Solutions to Global Challenges — Ahmed emphasizes that effective data governance requires collaboration across all stakeholder groups. He highlights the …
S52
Transforming Agriculture_ AI for Resilient and Inclusive Food Systems — International cooperation and knowledge sharing are essential, requiring interoperable governance frameworks and multi-s…
S53
Open Forum #33 Building an International AI Cooperation Ecosystem — Participant: ≫ Distinguished guests, dear friends, it is a great honor to speak to you today on a topic that is reshapin…
S54
Sandboxes for Data Governance: Global Responsible Innovation | IGF 2023 WS #279 — By fostering interaction, investigation, and the exchange of ideas, sandboxes serve as a stepping stone towards implemen…
S55
Agenda item 5: discussions on substantive issues contained in paragraph 1 of General Assembly resolution 75/240 (continued)/1/OEWG 2025 — Kazakhstan: Thank you, Chair, for giving the floor. Kazakhstan reaffirms its commitment to strengthening cyber norm im…
S56
How can sandboxes spur responsible data-sharing across borders? (Datasphere Initiative) — Building trust with stakeholders is crucial for effective data governance. Regulatory sandboxes, in and of themselves, f…
S57
Data free flow with trust: a collaborative path to progress (ICC) — Certain nations, like China, exhibit a strong inclination towards limiting data sharing to domestic companies. This pers…
S58
UNITED NATIONS CONFERENCE ON TRADE AND DEVELOPMENT — Against this background, major dilemmas emerge between different policy objectives at the national level, and…
S59
The Challenges of Data Governance in a Multilateral World — An advocate in the discussion strongly supports data governance models that prioritize cooperation, privacy, and the com…
S60
Connecting open code with policymakers to development | IGF 2023 WS #500 — Efficient policy measures and rules are necessary to govern data usage while preserving privacy. GDPR mandates user cons…
S61
AI as critical infrastructure for continuity in public services — first definitely not technology because I think we’ve seen technology is always almost ahead very true over the last cou…
S62
MahaAI Building Safe Secure & Smart Governance — This involves creating unified citizen databases and breaking down departmental silos to improve service delivery
S63
Collaborative AI Network – Strengthening Skills Research and Innovation — Centralized procurement and shared services can overcome implementation barriers and prevent fragmented solutions Data …
S64
Trusted Connections_ Ethical AI in Telecom & 6G Networks — Shri Anil Kumar Lahoti (TRAI Chairman) This comment shifted the discussion from purely technical considerations to ethi…
S65
AI Automation in Telecom_ Ensuring Accountability and Public Trust India AI Impact Summit 2026 — “So one of the key application, key product what we have developed is Fraud Pro”[41]. “We are able to today identify fra…
S66
AI Automation in Telecom_ Ensuring Accountability and Public Trust India AI Impact Summit 2026 — The platform uses fuzzy logic and AI to help users identify all mobile connections registered under their name by provid…
S67
WS #280 the DNS Trust Horizon Safeguarding Digital Identity — Lucien Taylor: Thank you very much, Keith. I just wanted to say, I think we’re gonna do our little speeches and then the…
S68
Setting the Rules_ Global AI Standards for Growth and Governance — Evidence:Specific examples of testing methodologies, transparency/disclosure standards, and incident reporting/monitorin…
S69
AI Meets Cybersecurity Trust Governance & Global Security — Building trust through transparency, incident reporting and standards
S70
Building Population-Scale Digital Public Infrastructure for AI — Summary:The discussion reveals subtle but important disagreements about implementation approaches rather than fundamenta…
S71
AI as critical infrastructure for continuity in public services — Data silos emerged as a primary barrier, with organizations struggling to integrate data across different systems and de…
S72
https://dig.watch/event/india-ai-impact-summit-2026/ai-automation-in-telecom_-ensuring-accountability-and-public-trust-india-ai-impact-summit-2026 — Disaster, yeah. Yeah. So disaster management, as you know, earlier, how did you? It used to happen. Suppose there is a c…
S73
Internet standards and human rights | IGF 2023 WS #460 — Challenges faced at standard forums were discussed, and there was an emphasis on finding ways to overcome these challeng…
S74
https://dig.watch/event/india-ai-impact-summit-2026/designing-indias-digital-future-ai-at-the-core-6g-at-the-edge — Mr. Ashok. Thank you, Mr. Ashok. Thank you, Mr. Ashok. So it’s Thank you, sir. So now we are moving to our very next se…
S75
Toward Collective Action_ Roundtable on Safe & Trusted AI — Professor Shock warns against the Silicon Valley approach of ‘move fast and break things’ when implementing AI in infras…
S76
AI-Driven Enforcement_ Better Governance through Effective Compliance & Services — This comment is exceptionally thought-provoking because it addresses the critical tension between AI efficiency and publ…
S77
Opportunities of Cross-Border Data Flow-DFFT for Development | IGF 2023 WS #224 — ATSUSHI YAMANAKA:Thank you, Joseph. Thank you, John Philbert. And he actually had a great, actually. insight into the ba…
S78
Open Forum #48 The International Counter Ransomware Initiative — The initiative operates through four main pillars:
S79
https://app.faicon.ai/ai-impact-summit-2026/ai-automation-in-telecom_-ensuring-accountability-and-public-trust-india-ai-impact-summit-2026 — Yeah, thank you. Thank you. Network, ecosystem, hard hats and upskilling. I think that’s a good way to end the discussio…
S80
Responsible AI for Shared Prosperity — “So we specifically focus on four pillars of work, which is around data”[5].
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
M
Moderator
1 argument, 48 words per minute, 290 words, 355 seconds
Argument 1
Trust framing (Moderator)
EXPLANATION
The moderator introduced the session by emphasizing the need to balance innovation with privacy and trust, setting the tone for a discussion on building customer confidence in AI‑driven telecom services.
EVIDENCE
The moderator asked participants to engage deeply on balancing information and explicitly linked innovation with privacy and trust, stating “Let’s engage deeply on how to balance information. Innovation with privacy and trust.” [5-6]
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The moderator’s emphasis on balancing innovation with privacy and trust is documented in the framing notes of the session [S10] and aligns with broader discussions on digital trust and cyber resilience [S17].
MAJOR DISCUSSION POINT
Trust framing
D
Dr. M P Tangirala
1 argument, 115 words per minute, 929 words, 482 seconds
Argument 1
Human‑in‑the‑loop & communication (Dr. M P Tangirala)
EXPLANATION
Dr. Tangirala highlighted that while AI can automate many telecom operations, human oversight remains essential to prevent unintended decisions, and clear proactive communication with customers is crucial for maintaining trust.
EVIDENCE
He noted that customers are affected by AI outcomes and that “clear and proactive communication with the customers would become very important” while also stressing the need for “human in the loop” to ensure the system does not run away with its own decisions. [13-16][19-21]
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The need for human oversight and clear communication echoes concerns about fading human agency in automated systems [S19], recommendations for human-in-the-loop thresholds [S21], and examples of equity algorithms that require human approval [S20].
MAJOR DISCUSSION POINT
Human‑in‑the‑loop and customer communication
AGREED WITH
Mr. Julian Gorman, Syed Tausif Abbas, Mathan Babu Kasilingam
DISAGREED WITH
Mathan Babu Kasilingam
M
Mathan Babu Kasilingam
4 arguments, 159 words per minute, 1696 words, 637 seconds
Argument 1
Privacy‑by‑design certification & trust pillars (Mathan Babu Kasilingam)
EXPLANATION
Kasilingam explained that their AI adoption is grounded in privacy‑by‑design principles, backed by certifications such as ISO 27701, to ensure reliability, responsibility and trust for customers.
EVIDENCE
He listed the trust pillars (reasonability, reliability, trust, privacy) and cited that the company is “completely certified on PIMS ISO 27701” and “the only TSP in the country who have governed privacy by design and have certified ourselves against that.” [114-119]
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The organization’s privacy-by-design approach and ISO 27701 certification are described in the AI automation telecom summit report [S2].
MAJOR DISCUSSION POINT
Privacy‑by‑design and trust pillars
AGREED WITH
Syed Tausif Abbas, Mr. Julian Gorman
DISAGREED WITH
Julian Gorman
Argument 2
Quick‑win AI fraud detection & need for unified data platform (Mathan Babu Kasilingam)
EXPLANATION
He described how early AI projects delivered quick wins in fraud detection, but warned that fragmented data silos and duplicated infrastructure limit scalability, prompting a shift toward a single, secure data repository and shared AI platform.
EVIDENCE
Kasilingam mentioned “we are able to today identify fraud” as a quick win, then detailed the problem of “individual siloed repository of data” and the plan to create “one single repository of this data” and a common AI infrastructure. [139-146][148-166]
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The quick-win fraud detection results and the push to replace fragmented data silos with a single secure repository are detailed in the telecom AI summit findings [S2].
MAJOR DISCUSSION POINT
Quick wins and unified data platform
AGREED WITH
Dr. Rajkumar Upadhyay, Mr. Julian Gorman
Argument 3
Data silos, massive infrastructure expense, and central LLM strategy (Mathan Babu Kasilingam)
EXPLANATION
He expanded on the challenges of AI adoption, noting that massive GPU‑based infrastructure and isolated data silos drive high costs, and advocated for a central large‑language‑model (LLM) platform to serve multiple business functions efficiently.
EVIDENCE
He highlighted “mammoth amount of infrastructure” and “siloed data” and then described a strategy to “build a comprehensive central LLM” that can be accessed via enterprise APIs, reducing duplication and easing security. [148-166][170-178]
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The high GPU-based infrastructure costs and siloed data challenges are quantified in the summit report [S2] and further discussed in the analysis of AI compute expenses [S25]; the recommendation for a central LLM platform aligns with open-source LLM and data-sovereignty strategies [S26].
MAJOR DISCUSSION POINT
Infrastructure cost and central LLM strategy
Argument 4
Adoption of incident‑reporting standard to scale and refine AI models (Mathan Babu Kasilingam)
EXPLANATION
Kasilingam argued that recording AI incidents using a standardized schema helps telecom operators refine and scale their AI models, especially as India rolls out home‑grown large language models.
EVIDENCE
He stated that “this record keeping will make the ability for us to scale our AI and models as appropriately” and linked it to India’s upcoming LLMs, indicating the standard will support model refinement. [210-219]
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
India’s development of AI incident-reporting guidelines for critical infrastructure directly supports the proposed standard [S27].
MAJOR DISCUSSION POINT
Standard adoption for model scaling
AGREED WITH
Mr. Julian Gorman, Syed Tausif Abbas, Dr. M P Tangirala
Dr. Rajkumar Upadhyay
3 arguments, 171 words per minute, 1925 words, 671 seconds
Argument 1
AI‑based fraud & protection tools for customers (Dr. Rajkumar Upadhyay)
EXPLANATION
Dr. Upadhyay presented several AI‑driven solutions, including Fraud Pro, Chakshu, and the Sanchar Saathi app, that detect fraudulent SIM connections, enable crowdsourced reporting, and empower users to disconnect unauthorized numbers, thereby protecting customers.
EVIDENCE
He described Fraud Pro’s ability to “detect the fraudulent connections” by matching images and demographics, Chakshu’s crowdsourced reporting, and the Sanchar Saathi app’s “18 million plus downloads” and “70 lakh connections have been disconnected using this.” [65-71][86-92][101-102]
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The Fraud Pro system, Chakshu crowdsourcing, and the Sanchar Saathi app are described in the AI automation telecom summit report [S1] and highlighted in the WSIS prize citation [S3]; broader fraud-detection principles are discussed in the responsible AI leadership paper [S24].
MAJOR DISCUSSION POINT
AI‑driven fraud protection tools
AGREED WITH
Mathan Babu Kasilingam, Mr. Julian Gorman
Argument 2
AI‑driven Fraud Pro, Chakshu and mass disconnection of fraudulent SIMs (Dr. Rajkumar Upadhyay)
EXPLANATION
He gave concrete figures showing how AI tools have been used to dismantle SIM factories and disconnect millions of fraudulent connections, illustrating the scale and impact of AI in fraud mitigation.
EVIDENCE
He referenced the Jamtara and Mewat cases, noted that “2.1 million numbers were disconnected using AI-based tracking” earlier, and later cited “70 lakh connections have been disconnected using this” platform. [65-71][88-92][101-102]
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Evidence of large-scale SIM disconnections using AI-driven tools is provided in the summit report [S1] and the WSIS prize documentation [S3]; the underlying fraud-detection methodology is further elaborated in [S24].
MAJOR DISCUSSION POINT
Mass disconnection of fraudulent SIMs
Argument 3
Unified AI platform for geo‑targeted early warnings and cell‑broadcast alerts (Dr. Rajkumar Upadhyay)
EXPLANATION
He explained a unified AI system that aggregates data from multiple agencies (IMD, CWC, etc.) to generate geo‑targeted early warnings via cell‑broadcast, dramatically reducing casualties in disasters such as cyclones.
EVIDENCE
He described how the platform “connects all the alert generating agencies… All the telecom operators are connected… The system prepares the message, finds the geo-targeted area and uses cell broadcast” and cited the zero-death outcome after implementing it for Odisha cyclones. [250-274][265-266]
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The integrated AI platform that aggregates data from multiple agencies to issue geo-targeted cell-broadcast alerts and its impact on cyclone response are described in the telecom AI summit report [S2].
MAJOR DISCUSSION POINT
AI‑enabled disaster early warning
AGREED WITH
Mathan Babu Kasilingam, Mr. Julian Gorman
Mr. Julian Gorman
4 arguments, 158 words per minute, 1349 words, 510 seconds
Argument 1
Outcome‑focused regulation to preserve trust (Mr. Julian Gorman)
EXPLANATION
Gorman argued that regulation should target outcomes rather than prescribe specific service‑based rules, ensuring that anti‑scam measures do not stifle future innovation while maintaining customer trust.
EVIDENCE
He warned that “the danger… of implementing service-based rules is they restrict innovation” and advocated for “focus on outcomes when it comes to regulation.” [38-40]
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The argument for outcome-based regulation mirrors the perspective that regulation should support trust while fostering innovation, as discussed in the trust and safety analysis [S30] and the holistic policy foreword [S31].
MAJOR DISCUSSION POINT
Outcome‑focused regulation
Argument 2
Cross‑sector task force & data‑sharing for scam mitigation (Mr. Julian Gorman)
EXPLANATION
He described the GSMA‑led Cross‑Sector Any Scam Task Force, which brings together over 39 organisations across 17 countries to share data and develop joint initiatives against scams.
EVIDENCE
He noted the formation of the task force, its membership (Meta, Google, TikTok, AWS), and that “we’ve gathered case studies of more than 40 instances where operators… developed… strategies to combat scam.” [31-36]
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The GSMA cross-sector Any Scam Task Force aligns with recommendations for cross-sectoral collaboration in digital governance [S31] and broader collaborative strategies for digital trust [S17].
MAJOR DISCUSSION POINT
Cross‑sector collaboration
AGREED WITH
Mathan Babu Kasilingam, Dr. Rajkumar Upadhyay
Argument 3
Need for regulatory sandboxes and privacy‑enhanced data sharing (Mr. Julian Gorman)
EXPLANATION
Gorman highlighted that effective data sharing for scam prevention must operate within regulatory sandboxes that protect privacy, enabling innovation while complying with data protection rules.
EVIDENCE
He mentioned the challenge of “borders of regulatory compliance” and called for “regulatory support, sandboxes or other activities” to enable privacy-enhanced data sharing. [291-298]
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The call for regulatory sandboxes to enable privacy-enhanced data sharing is discussed in the trust and safety regulation paper [S30] and reinforced by the collaborative regulation framework in the policy foreword [S31].
MAJOR DISCUSSION POINT
Regulatory sandboxes for data sharing
AGREED WITH
Mathan Babu Kasilingam, Syed Tausif Abbas
Argument 4
Global cross‑border coordination and “United Against Scams” initiative (Mr. Julian Gorman)
EXPLANATION
He emphasized that combating scams requires coordinated global action, referencing India’s emerging leadership role and the GSMA’s “United Against Scams” program as mechanisms for cross‑border cooperation.
EVIDENCE
He stated that “India is a real telecom superpower… it cannot just be worried about its domestic situation” and later said “we need to act across borders… GSMA has a program called United Against Scams.” [46-53][324-326]
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The emphasis on worldwide coordination reflects the global digital trust and cross-border cooperation themes presented in the leaders talk on global access and governance [S17] and the holistic policy recommendations for international collaboration [S31].
MAJOR DISCUSSION POINT
Global coordination against scams
Syed Tausif Abbas
2 arguments, 148 words per minute, 552 words, 223 seconds
Argument 1
AI incident‑reporting schema to capture fraud incidents (Syed Tausif Abbas)
EXPLANATION
Abbas outlined a proposed AI incident‑reporting database with a detailed taxonomy (including incident type, severity, affected system, cause) to enable systematic recording and analysis of AI‑related failures.
EVIDENCE
He listed key fields from the 30‑field schema, such as “name of application, technology used, impact, incident severity, cause of failure,” and explained how the schema would let providers analyze and rectify AI incidents. [193-199]
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The detailed taxonomy for AI incident reporting corresponds to India’s AI incident-reporting guidelines for critical infrastructure [S27].
MAJOR DISCUSSION POINT
Incident‑reporting schema
AGREED WITH
Mr. Julian Gorman, Dr. M P Tangirala, Mathan Babu Kasilingam
DISAGREED WITH
Julian Gorman
Argument 2
Voluntary AI incident‑reporting database with taxonomy and schema (Syed Tausif Abbas)
EXPLANATION
He clarified that the reporting framework is voluntary, intended to help service providers improve AI reliability and give regulators data for policy making, similar to early computer incident response mechanisms.
EVIDENCE
He noted that “it is not mandatory… it is voluntary” and compared it to the early computer emergency response team, arguing that voluntary reporting will “help the regulator and policy makers to go about the AI policy.” [193-199][194-196]
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The voluntary nature of the reporting framework aligns with the approach outlined in the AI incident-reporting guidelines for critical infrastructure [S27].
MAJOR DISCUSSION POINT
Voluntary reporting framework
AGREED WITH
Mathan Babu Kasilingam, Mr. Julian Gorman
Anil Kumar Jha
1 argument, 180 words per minute, 94 words, 31 seconds
Argument 1
Call for concrete steps by global leaders and India to align with worldwide efforts (Anil Kumar Jha)
EXPLANATION
Jha asked the panel to propose two actionable steps for global leaders and two for India to harmonise anti‑scam and AI governance efforts internationally.
EVIDENCE
He phrased the request: “Could you suggest two steps that global leaders should take to align the world with themselves and two steps that India should take to align with the world.” [315-320]
MAJOR DISCUSSION POINT
Request for concrete alignment steps
Agreements
Agreement Points
AI is a powerful tool for fraud detection and scam mitigation, delivering large‑scale customer protection while requiring responsible oversight.
Speakers: Dr. Rajkumar Upadhyay, Mathan Babu Kasilingam, Mr. Julian Gorman
AI‑based fraud & protection tools for customers (Dr. Rajkumar Upadhyay)
Quick‑win AI fraud detection & need for unified data platform (Mathan Babu Kasilingam)
Cross‑sector task force & data‑sharing for scam mitigation (Mr. Julian Gorman)
All three speakers highlighted AI-driven solutions that identify and block fraudulent connections or scam activities at massive scale – Upadhyay described Fraud Pro, Chakshu and the Sanchar Sati app disconnecting millions of numbers [65-71][86-92][101-102]; Kasilingam pointed to early “quick-win” fraud-detection projects and the need to scale them responsibly [139-140]; Gorman stressed the industry-wide effort to gather case studies and develop strategies to combat scams, noting that regulation must not stifle such innovation [26-31][38-40].
POLICY CONTEXT (KNOWLEDGE BASE)
The tension between leveraging AI for scam prevention and safeguarding privacy has been highlighted in discussions on data sharing for fraud mitigation versus privacy protection at the India AI Impact Summit 2026 [S46].
Collaboration and data sharing across the ecosystem are essential for effective AI governance and incident management.
Speakers: Mr. Julian Gorman, Syed Tausif Abbas, Dr. M P Tangirala, Mathan Babu Kasilingam
Cross‑sector task force & data‑sharing for scam mitigation (Mr. Julian Gorman)
AI incident‑reporting schema to capture fraud incidents (Syed Tausif Abbas)
Human‑in‑the‑loop & communication (Dr. M P Tangirala)
Adoption of incident‑reporting standard to scale and refine AI models (Mathan Babu Kasilingam)
Gorman described a cross-sector task force that pools data from operators and platforms to fight scams [31-36]; Abbas proposed a voluntary AI-incident reporting database with a detailed taxonomy to enable systematic analysis [193-199]; Tangirala praised collaborative innovation and noted the need for clear communication with customers [54-56]; Kasilingam added that standardized incident recording will help scale AI models and support regulators [210-219].
POLICY CONTEXT (KNOWLEDGE BASE)
Multi-stakeholder collaboration is repeatedly emphasized as a cornerstone of AI governance, from the NRI data-governance session stressing cross-sector cooperation [S51] to broader calls for interoperable frameworks in agriculture AI initiatives [S52].
Privacy‑by‑design and voluntary, standards‑based approaches are preferred over prescriptive regulation for AI deployment.
Speakers: Mathan Babu Kasilingam, Syed Tausif Abbas, Mr. Julian Gorman
Privacy‑by‑design certification & trust pillars (Mathan Babu Kasilingam)
Voluntary AI incident‑reporting database with taxonomy and schema (Syed Tausif Abbas)
Need for regulatory sandboxes and privacy‑enhanced data sharing (Mr. Julian Gorman)
Kasilingam highlighted ISO 27701 certification and a privacy-by-design stance to build trust [114-119]; Abbas emphasized that the AI incident-reporting schema is voluntary and aims to aid regulators without imposing mandates [193-199][194-196]; Gorman called for privacy-enhanced data sharing within regulatory sandboxes rather than rigid rules [291-298].
POLICY CONTEXT (KNOWLEDGE BASE)
Policy analyses advocate a balance between voluntary standards and legal frameworks, noting that privacy-by-design can complement, rather than replace, regulation (IGF 2023) [S49], while GDPR exemplifies mandatory consent requirements for personal data [S60].
A unified, centrally managed AI data platform reduces silos, cuts costs and improves service delivery across functions.
Speakers: Mathan Babu Kasilingam, Dr. Rajkumar Upadhyay, Mr. Julian Gorman
Quick‑win AI fraud detection & need for unified data platform (Mathan Babu Kasilingam)
Unified AI platform for geo‑targeted early warnings and cell‑broadcast alerts (Dr. Rajkumar Upadhyay)
Cross‑sector task force & data‑sharing for scam mitigation (Mr. Julian Gorman)
Kasilingam warned against fragmented data silos and advocated a single repository with a central LLM to serve multiple business lines [148-166]; Upadhyay described a unified disaster-alert platform that aggregates inputs from IMD, CWC and others to issue geo-targeted broadcasts, eliminating fragmented processes [250-256][260-264]; Gorman reinforced the need for standardized APIs to share data across parties efficiently [285-291].
POLICY CONTEXT (KNOWLEDGE BASE)
Initiatives such as MahaAI’s unified citizen database aim to break departmental silos and enhance service delivery, reflecting a trend toward centralized data platforms in public governance [S62]; similar arguments are made for centralized procurement to avoid fragmented solutions [S63].
Similar Viewpoints
Both argue that a voluntary, standardized AI‑incident reporting schema will help operators refine models and give regulators actionable data, reducing future failures [210-219][193-199][194-196].
Speakers: Mathan Babu Kasilingam, Syed Tausif Abbas
Adoption of incident‑reporting standard to scale and refine AI models (Mathan Babu Kasilingam)
AI incident‑reporting schema to capture fraud incidents (Syed Tausif Abbas)
Both favour flexible, privacy‑preserving frameworks (sandboxes or voluntary reporting) over mandatory rules to enable innovation while protecting users [291-298][194-196].
Speakers: Mr. Julian Gorman, Syed Tausif Abbas
Need for regulatory sandboxes and privacy‑enhanced data sharing (Mr. Julian Gorman)
Voluntary AI incident‑reporting database with taxonomy and schema (Syed Tausif Abbas)
Both stress that consolidating data and AI services into a single, secure platform yields faster, more reliable outcomes—whether for disaster alerts or fraud detection—by avoiding siloed architectures [250-256][260-264][148-166].
Speakers: Dr. Rajkumar Upadhyay, Mathan Babu Kasilingam
Unified AI platform for geo‑targeted early warnings and cell‑broadcast alerts (Dr. Rajkumar Upadhyay)
Quick‑win AI fraud detection & need for unified data platform (Mathan Babu Kasilingam)
Unexpected Consensus
Exporting Indian AI solutions globally through coordinated effort
Speakers: Mr. Julian Gorman, Dr. Rajkumar Upadhyay
Cross‑sector task force & data‑sharing for scam mitigation (Mr. Julian Gorman)
AI‑based fraud & protection tools for customers (Dr. Rajkumar Upadhyay)
Gorman called for worldwide cooperation and highlighted India’s emerging leadership in global anti-scam initiatives [46-53][324-326]; Upadhyay later stated that India is ready to share its AI-driven fraud-prevention and disaster-management technologies with other countries [135]. This alignment on exporting Indian AI capabilities beyond domestic use was not explicitly anticipated earlier in the discussion.
POLICY CONTEXT (KNOWLEDGE BASE)
High-level consensus on coordinated AI exports was recorded at the India AI Impact Summit 2026, where U.S.-India partnerships on AI commercialization were described as having “very high” alignment [S44]; broader regional collaboration is also highlighted as essential for AI’s societal impact [S50].
Overall Assessment

The panel showed strong convergence on four pillars: (1) AI’s central role in large‑scale fraud and scam mitigation; (2) the necessity of ecosystem‑wide collaboration and data sharing; (3) a preference for privacy‑by‑design, voluntary standards and regulatory sandboxes over rigid mandates; and (4) the strategic move toward unified, centrally managed AI data platforms to cut costs and improve service delivery.

High consensus – most speakers reiterated compatible positions, indicating a shared vision for responsible, collaborative, and privacy‑respecting AI deployment in telecoms, which bodes well for coordinated policy action and industry adoption.

Differences
Different Viewpoints
Extent of data sharing for scam mitigation versus privacy‑by‑design internal data consolidation
Speakers: Julian Gorman, Mathan Babu Kasilingam
Cross‑sector task force & data‑sharing for scam mitigation (Julian Gorman)
Privacy‑by‑design certification & trust pillars (Mathan Babu Kasilingam)
Gorman argues that combating scams requires extensive cross-sector and cross-border data sharing, citing the GSMA Cross-Sector Any Scam Task Force and the need to act globally [31-36][46-53][324-326]. Kasilingam stresses a privacy-by-design approach, highlighting ISO 27701 certification and the plan to consolidate data into a single, secure repository to simplify security and compliance [114-119][164-166]. The two positions clash on whether data should be openly shared beyond national boundaries or kept within a tightly controlled, privacy-centric environment.
POLICY CONTEXT (KNOWLEDGE BASE)
The India AI Impact Summit 2026 explicitly debated the trade-off between data sharing for scam prevention and privacy-by-design safeguards, underscoring the policy dilemma [S46]; similar concerns appear in UNCTAD analyses of cross-border data flow versus national security priorities [S58].
Whether AI incident‑reporting should be voluntary or supported by regulatory mechanisms
Speakers: Syed Tausif Abbas, Julian Gorman
AI incident‑reporting schema to capture fraud incidents (Syed Tausif Abbas)
Regulatory sandboxes for data sharing (Julian Gorman)
Abbas proposes a voluntary AI incident-reporting database with a detailed taxonomy, emphasizing that participation is not mandatory and is intended to help providers improve AI reliability [193-196]. Gorman, while discussing data sharing for scam mitigation, calls for regulatory sandboxes and outcome-focused regulation to enable privacy-enhanced data exchange, implying a more formal, possibly mandatory, regulatory framework [291-298][38-40]. This creates a disagreement on the role of regulation versus voluntary industry self-reporting.
POLICY CONTEXT (KNOWLEDGE BASE)
Historical parallels drawn at the Telecom AI session suggest incident-reporting mechanisms evolve from voluntary to mandatory as risks mature [S48], and IGF discussions stress the need for complementary voluntary standards and legal frameworks [S49].
Role of human oversight versus fully automated AI systems
Speakers: Dr. M P Tangirala, Mathan Babu Kasilingam
Human‑in‑the‑loop & communication (Dr. M P Tangirala)
Infrastructure cost and central LLM strategy (Mathan Babu Kasilingam)
Tangirala stresses that AI deployments must retain a human-in-the-loop to prevent unintended decisions and to maintain clear communication with customers [19-21]. Kasilingam, describing the move toward a comprehensive central LLM and noting that “the true power of AI is actually in making sure that AI is not touched with people, human,” advocates reducing human involvement to achieve efficiency and cost savings [235-236][170-178]. The two speakers diverge on the appropriate balance between human control and automation.
POLICY CONTEXT (KNOWLEDGE BASE)
Scholarly debate highlights the necessity of human oversight to avoid over-reliance on algorithms, especially in nuanced contexts such as policing and agriculture AI deployments [S41][S39].
Unexpected Differences
Global data‑sharing push versus a strong national privacy‑by‑design stance
Speakers: Julian Gorman, Mathan Babu Kasilingam
Cross‑sector task force & data‑sharing for scam mitigation (Julian Gorman)
Privacy‑by‑design certification & trust pillars (Mathan Babu Kasilingam)
It is surprising that a senior GSMA representative (Gorman) advocates extensive cross‑border data exchange to combat scams, while a leading Indian telecom service provider (Kasilingam) highlights a national privacy‑by‑design approach backed by ISO 27701 certification and a strategy to keep data within a single, secure repository. The tension between global collaboration and stringent national privacy safeguards was not anticipated given their shared industry background.
POLICY CONTEXT (KNOWLEDGE BASE)
Policy tensions between cross-border data flows and domestic privacy protections are documented in the ICC’s “data free flow with trust” report and UNCTAD’s analysis of national security versus data-sharing objectives [S57][S58].
Voluntary incident‑reporting schema versus calls for regulatory sandboxes
Speakers: Syed Tausif Abbas, Julian Gorman
AI incident‑reporting schema to capture fraud incidents (Syed Tausif Abbas)
Regulatory sandboxes for data sharing (Julian Gorman)
Abbas’s proposal that the AI incident‑reporting database be entirely voluntary contrasts with Gorman’s emphasis on regulatory sandboxes and outcome‑focused regulation to enable data sharing. The expectation that industry would self‑regulate voluntarily while regulators simultaneously push for structured, sandbox‑enabled mechanisms was not foreseen.
POLICY CONTEXT (KNOWLEDGE BASE)
Regulatory sandboxes are promoted as a bridge between voluntary reporting and formal regulation, fostering dialogue and trust among stakeholders (IGF 2023) [S54][S56], while summit commentary notes the natural progression from voluntary to mandatory reporting [S48].
Overall Assessment

The panel shows broad consensus on the importance of AI for improving telecom services, especially fraud mitigation and disaster response. However, substantive disagreements emerge around data sharing (global vs privacy‑centric), the role of regulation (voluntary reporting vs mandated sandboxes/outcome‑focused rules), and the balance between human oversight and full automation. These divergences reflect differing priorities—innovation speed, privacy protection, and regulatory certainty—among international bodies, national operators, and standard‑setting entities.

Moderate. While participants align on overarching goals (trust, fraud reduction, AI adoption), they diverge on implementation pathways, which could lead to fragmented policies unless a coordinated framework reconciles cross‑border data exchange with privacy‑by‑design principles and clarifies the regulatory stance on incident reporting.

Partial Agreements
All three speakers agree that reducing fraud and scams is a critical goal for telecom operators. Gorman emphasizes cross‑sector collaboration and data sharing to achieve this [31-36][46-53]. Kasilingam points to internal AI‑driven fraud detection tools and the need for a unified data platform to scale those tools [139-146][148-166]. Upadhyay showcases concrete AI applications (Fraud Pro, Chakshu, Sanchar Saathi) that have already disconnected millions of fraudulent connections [65-71][86-92][101-102]. While the end‑goal aligns, the pathways (global data sharing, internal platform consolidation, and customer‑facing apps) differ.
Speakers: Julian Gorman, Mathan Babu Kasilingam, Dr. Rajkumar Upadhyay
Cross‑sector task force & data‑sharing for scam mitigation (Julian Gorman)
Quick‑win AI fraud detection & need for unified data platform (Mathan Babu Kasilingam)
AI‑based fraud & protection tools for customers (Dr. Rajkumar Upadhyay)
These speakers share the objective of building and maintaining customer trust in AI‑driven telecom services. Tangirala calls for human oversight and proactive communication [13-16][19-21]. Kasilingam underlines privacy‑by‑design and certification to assure trust [114-119]. Abbas proposes a structured incident‑reporting schema to systematically learn from AI failures and improve reliability [193-199]. Each proposes a different mechanism—human oversight, privacy frameworks, or systematic reporting—to achieve the same trust‑building goal.
Speakers: Dr. M P Tangirala, Mathan Babu Kasilingam, Syed Tausif Abbas
Human‑in‑the‑loop & communication (Dr. M P Tangirala)
Privacy‑by‑design certification & trust pillars (Mathan Babu Kasilingam)
AI incident‑reporting schema to capture fraud incidents (Syed Tausif Abbas)
Takeaways
Key takeaways
Customer trust is central to AI‑driven telecom operations; human‑in‑the‑loop oversight and proactive communication are required.
Privacy‑by‑design (e.g., ISO 27701 certification) and clear trust pillars are being adopted by service providers.
AI‑based fraud and spam prevention tools (Fraud Pro, Chakshu, Sanchar Saathi) have disconnected millions of fraudulent SIMs and reduced customer inconvenience.
Cross‑sector collaboration and data sharing (GSMA Cross‑Sector Scam Task Force, Open Gateway APIs) are essential to combat sophisticated scams.
A unified, non‑siloed AI data platform and central LLM infrastructure are needed to reduce duplication, lower costs, and improve security.
A voluntary AI incident‑reporting schema with a defined taxonomy (proposed by TEC) can help operators learn from failures and aid regulators in policy making.
AI‑enabled disaster management (geo‑targeted early warnings, cell‑broadcast alerts) demonstrates the public‑safety benefits of AI at scale.
Infrastructure costs dominate AI expenditures (80‑90%); upskilling and using AI to automate AI development are seen as ways to control expenses.
Global coordination (e.g., the GSMA “United Against Scams” initiative) and alignment of Indian regulatory actions with international standards are critical for sustainable fraud mitigation.
Resolutions and action items
GSMA to continue proof‑of‑concept data‑sharing pilots in Southeast Asia and develop regulatory sandboxes for privacy‑enhanced cross‑border data exchange.
CDOT (Dr. Upadhyay) to promote its AI‑based fraud‑prevention and disaster‑management solutions for adoption by other countries and operators.
Service provider (Mathan Babu Kasilingam) to consolidate fragmented data silos into a single AI data repository and build a central LLM platform for enterprise‑wide use.
TEC (Syed Tausif Abbas) to publicise the voluntary AI incident‑reporting standard and encourage telecom operators to adopt it for internal learning and regulator reporting.
Operators to maintain privacy‑by‑design certifications (ISO 27701) while exposing standardized APIs for ecosystem partners.
Stakeholders to explore cost‑optimisation measures for AI infrastructure, including shared compute resources and AI‑assisted skill development.
Unresolved issues
How to achieve widespread adoption (or eventual mandatory status) of the voluntary AI incident‑reporting schema.
Specific legal and technical frameworks needed for secure cross‑border data sharing without breaching privacy regulations.
Funding models and governance structures for a unified AI infrastructure shared across multiple telecom operators.
Detailed strategies for reducing the high capital cost of AI compute (GPU, storage) beyond the suggested centralisation.
Quantitative metrics and benchmarks to assess the impact of AI deployments on customer trust and fraud reduction.
Suggested compromises
Adopt the AI incident‑reporting schema on a voluntary basis, offering regulatory insights and model‑refinement benefits to operators while avoiding mandatory imposition.
Balance quick‑win, siloed AI pilots with a longer‑term plan to merge data into a single platform, allowing immediate benefits without delaying integration.
Implement privacy‑by‑design certifications alongside standardized, privacy‑enhanced APIs, enabling data sharing for scam detection while preserving user privacy.
Thought Provoking Comments
Scammers are not bound by geography or law; regulation cannot move as fast as they do. To combat this we need a cross‑sector coalition and collaborative innovation, with India playing a ‘statesman’ role in the global telecom ecosystem.
Highlights the asymmetry between fast‑moving cyber‑crime and slower regulatory processes, and frames the solution as global, collaborative effort rather than isolated national action.
Shifted the discussion from isolated provider‑level fraud mitigation to a broader, international cooperation agenda. Prompted the moderator to note the collaborative theme and set up later questions about global versus Indian steps.
Speaker: Julian Gorman
AI‑driven fraud detection (Fraud Pro) can deduplicate SIM registrations by matching images, demographics and other identifiers, leading to the disconnection of 70 lakh fraudulent connections, and has even aided in disaster scenarios such as locating dead bodies.
Provides concrete, large‑scale impact figures and demonstrates AI’s multifaceted utility beyond telecom—spanning security, public safety, and disaster response.
Introduced a tangible success story that grounded the abstract trust discussion, leading other panelists (e.g., Mathan Babu) to reference these outcomes when talking about AI’s value proposition and scalability.
Speaker: Dr. Rajkumar Upadhyay
Our AI journey suffered from siloed data and infrastructure; we are now consolidating into a single, privacy‑by‑design repository and building purpose‑built large language models to serve multiple functions across the enterprise.
Diagnoses a common enterprise AI pitfall (data silos) and proposes a strategic shift toward unified platforms and LLMs, linking technical architecture with trust and privacy concerns.
Redirected the conversation toward operational challenges of AI deployment, prompting follow‑up questions about cost, infrastructure, and the relevance of the new AI incident‑reporting standard.
Speaker: Mathan Babu Kasilingam
We have drafted a voluntary AI incident‑reporting schema with 30 fields, taxonomy, and severity levels, enabling service providers to log and analyse AI‑related failures, much like the early computer emergency response teams.
Introduces a novel governance tool that could standardise how AI mishaps are recorded and mitigated, filling a gap in current telecom regulation.
Sparked a dialogue on the value of voluntary standards, leading Mathan Babu to acknowledge its role in scaling AI models and the moderator to probe its benefits for providers and regulators.
Speaker: Syed Tausif Abbas
Combating scams requires four pillars: securing the network, exposing risk data via open APIs, offering protective services to customers (like ‘hard hats’), and continuously upskilling digital skills.
Synthesises the multifaceted approach needed, linking technical, ecosystem, consumer‑facing, and human‑capacity dimensions, and warns against over‑blocking (citing the Optus outage).
Provided a clear framework that guided the final part of the discussion, influencing the moderator’s summarisation and the audience’s understanding of actionable steps.
Speaker: Julian Gorman
Human‑in‑the‑loop is essential; AI must not run away with decisions, and responsibility for decision integrity stays with the telecom provider, requiring proactive communication with customers.
Sets the ethical baseline for the entire panel, emphasizing accountability and transparency as prerequisites for trust.
Framed all subsequent contributions around the need for oversight and communication, influencing how panelists presented their AI use‑cases (e.g., emphasizing privacy‑by‑design, incident reporting).
Speaker: Dr. M P Tangirala (opening remarks)
Overall Assessment

The discussion was steered by a series of pivotal insights that moved it from abstract concerns about AI trust to concrete, actionable strategies. Julian Gorman’s global‑collaboration framing and the four‑pillar model broadened the scope beyond national borders, while Dr. Upadhyay’s real‑world AI successes grounded the conversation in measurable impact. Mathan Babu’s exposition of data‑silo challenges and the shift toward unified LLM platforms highlighted operational hurdles, prompting deeper analysis of cost and infrastructure. Abbas’s proposal of a voluntary incident‑reporting schema introduced a governance mechanism that linked technical practice with regulatory oversight. Together, these comments created a narrative arc: establishing ethical foundations, showcasing tangible benefits, diagnosing implementation bottlenecks, and proposing systemic solutions. This progression shaped a nuanced, forward‑looking dialogue on building customer trust through responsible AI in telecom.

Follow-up Questions
What are the concrete benefits and incentives for telecom service providers to voluntarily adopt the AI incident reporting standard?
Understanding the value proposition will encourage uptake and help regulators shape supportive policies.
Speaker: Dr. M P Tangirala
How can telecom enterprises reduce the high infrastructure costs (e.g., GPUs, power) associated with AI deployments while maintaining performance?
Infrastructure accounts for 80‑90% of AI spend; cost‑optimization research is needed to make AI scalable for operators.
Speaker: Mathan Babu Kasilingam
What methods can be used to deduplicate and consolidate siloed AI data repositories across different functional domains within a telecom operator?
Multiple data silos hinder security, governance, and efficiency; a unified data platform could improve AI outcomes.
Speaker: Mathan Babu Kasilingam
What privacy‑enhanced data‑sharing frameworks and regulatory sandboxes are needed to enable cross‑industry and cross‑border scam detection without breaching personal data laws?
Effective scam mitigation requires sharing risk‑related data; research is needed on technical and legal mechanisms that protect privacy.
Speaker: Julian Gorman
How effective and scalable are AI‑driven disaster‑management and cell‑broadcast systems when deployed in other countries, and what adaptations are required?
The Indian solution has shown success; studying its transferability will inform global early‑warning initiatives.
Speaker: Dr. Rajkumar Upadhyay
What is the impact of AI‑based fraud‑prevention tools (e.g., Fraud Pro, Sanchar Sati) on false‑positive rates and customer inconvenience, and how can these be minimized?
Balancing fraud reduction with user experience is critical; empirical evaluation is needed.
Speaker: Dr. M P Tangirala (implied)
What governance models and human‑in‑the‑loop mechanisms are required to ensure AI decisions in telecom operations remain trustworthy and error‑free?
Human oversight is essential to prevent autonomous AI errors; standards and processes need development.
Speaker: Dr. M P Tangirala (implied)
What concrete cross‑border collaboration frameworks should global leaders and India adopt to align anti‑scam efforts worldwide?
Coordinated international action can raise the cost of scams globally; policy and operational steps must be defined.
Speaker: Anil Kumar Jha
How should home‑grown large language models (LLMs) be integrated into telecom services, and what governance, bias mitigation, and performance monitoring practices are required?
Three Indian LLMs are forthcoming; research is needed on safe deployment within telecom ecosystems.
Speaker: Mathan Babu Kasilingam
How can AI be leveraged to defend against AI‑driven cyber‑attacks, and what defensive AI architectures are most effective for telecom critical infrastructure?
As attackers adopt AI, defenders must also use AI; systematic study of defensive AI techniques is essential.
Speaker: Mathan Babu Kasilingam

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

AI for Agriculture: Scaling Intelligence for Food and Climate Resilience

Session at a glance: Summary, keypoints, and speakers overview

Summary

The session titled “Using AI for Food and Climate Resilience” brought together senior officials from the Maharashtra government, the Ministry of Agriculture, the World Bank and research foundations to discuss how artificial intelligence can strengthen India’s agricultural system ([1-8]). Speakers highlighted that climate change is making farming riskier while digital tools and AI are advancing rapidly, offering an opportunity to secure food, nutrition and farmer incomes ([9-12]).


Under Chief Minister Devendra Fadnavis, Maharashtra has launched the Maha Agri AI Policy 2025-2029, which embeds AI in farm advisory, market information, data exchange and traceability, moving from pilots to full-scale projects ([20-27]). The state’s AI-powered platform Mahavistar now serves more than 2.5 million farmers in Marathi and the tribal language Bili, delivering personalized advisories, pest alerts and access to government schemes ([23-25]). A federated, consent-driven architecture called Maha AgEx is being built to bring diverse datasets together, creating a “big picture” for predictive governance such as early-warning alerts for cotton growers ([26-27][65-66]).


Chief Minister Fadnavis stressed that AI must rest on trusted data, ethical governance and public accountability, and that Maharashtra’s policy aims to scale AI beyond demonstrations to reach millions ([53-58][59-60]). Dr Devesh Chaturvedi outlined the national digital public infrastructure, noting that close to 9 crore farmer IDs have been created and that the new Bharatvistar app integrates weather, crop, market and scheme information on a single platform, soon to be available in all regional languages ([138-149]). He also described successful monsoon-prediction models built on a century of IMD data that helped thousands of farmers adjust sowing and irrigation decisions ([155-157]).


Dr Soumya Swaminathan warned that women farmers often lack land titles and digital footprints, so AI systems must deliberately incorporate women’s data and reduce drudgery, with evaluation mechanisms to avoid bias ([227-236][240-247]). Johannes Zutt of the World Bank emphasized the need for government-led AI governance, affordable connectivity, and private-sector creativity, and offered the Bank’s financing and truth-testing capacity to support responsible AI roll-outs ([184-190][204-206]). Shankar Maruwada reinforced that open, interoperable standards, similar to those used in India’s Digital Public Infrastructure and railway networks, are essential for scaling AI solutions and enabling “shared rails” for rapid diffusion across states and countries ([305-311]).


The panel concluded that India is positioned to lead South-South AI knowledge exchange, with upcoming AI for Agri 2026 conferences in Mumbai serving as platforms for deeper collaboration and investment ([78-86][211-218][321-326]). Overall, participants agreed that moving from pilots to interoperable, trustworthy AI platforms, while ensuring gender inclusion and robust governance, is critical for achieving climate-resilient, food-secure agriculture at population scale ([41-48][68-71][224-236]).


Keypoints


Major discussion points


Scaling AI from pilots to statewide platforms – The speakers highlighted that India is at a “turning point” for agriculture and that digital tools and AI are advancing rapidly, prompting the launch of the Maha Agri AI Policy 2025-2029 and the AI-powered Mahavistar service, now used by over 2.5 million farmers in Marathi and tribal languages[9-13][20-27]. The Chief Minister’s address reinforced the shift from pilots to full-scale projects, citing Mahavistar’s multilingual advisories, market intelligence and early-warning pest surveillance as examples of “predictive governance”[61-66]. The Ministry’s later rollout of Bharatvistar, a national AI-based platform that consolidates weather, crop, pest and market data, further illustrates the move toward population-scale AI deployment[123-130][155-158].


Responsible, transparent and trustworthy AI governance – Both the state and central leaders stressed that AI must be built on “trusted data, ethical governance, and public accountability” and that without trust “scale will not happen”[53-56]. A statewide interoperable agriculture data exchange based on open standards and strong data-governance was presented as a cornerstone for traceability and transparency[68-71]. The four pillars for AI for Agri 2026 (transparency, open infrastructure, innovation beyond silos, and inclusive investment) were laid out to ensure responsible deployment[79-84]. The World Bank representative added that governments must guarantee AI’s credibility, accessibility and affordability, especially for low-literacy farmers, while the private sector should be encouraged to innovate within a regulated, trustworthy framework[184-190][191-196].


Ensuring gender equity and inclusion of smallholders – A recurring theme was that women farmers must be placed at the centre of AI design. The Chief Minister’s agenda highlighted “inclusion and gender equity” as a mantra[83-86], and Dr Soumya Swaminathan explained that women often lack land titles and therefore risk being excluded from data-driven services; she called for early incorporation of women’s data, reduction of drudgery, bias testing, and continuous feedback loops that involve women directly in advisory committees[224-236][240-247]. The panel also noted that Mahavistar’s feedback mechanisms and collaborative work with the Swaminathan Research Foundation aim to embed women’s rights and nutritional security into AI solutions[257-262].


Multi-level and cross-border collaboration – The panel’s purpose was to move “from vision to implementation” by institutionalising AI across central-state interfaces, global institutions, industry and academia[112-119]. Maharashtra’s “global call for AI use cases” and the resulting international compendium of successful deployments illustrate a proactive knowledge-sharing strategy[74-78]. The World Bank described its role in financing, “truth-testing” AI applications and fostering South-South exchange[211-218]. Shankar Maruwada emphasized open-source, interoperable standards (e.g., Beckn, Sunbird) and the creation of “diffusion pathways” that allow any state or private player to plug into a shared AI rail network, envisioning 100 such pathways worldwide by 2030[304-311][315-317].


Linking AI to climate resilience, food and nutritional security – The opening remarks framed climate volatility, falling water tables and deteriorating soil health as threats to food systems, positioning AI as a key lever for “food and nutrition security, higher farmer incomes and a stable economy”[9-12][42-47]. The historical analogy to the Haber-Bosch breakthrough and the “pulling intelligence from the earth” narrative underscored AI’s potential to create a new agricultural miracle that mitigates climate risks and sustains livelihoods[272-279][311-314].


Overall purpose / goal of the discussion


The session was convened to translate high-level policy commitments into concrete, scalable AI solutions for Indian agriculture, while simultaneously establishing a governance framework that guarantees trust, inclusivity (especially for women and smallholders), and interoperable public infrastructure. It sought to galvanise collaboration among central and state governments, international development partners, the private sector, research institutions and farmer organisations, and to set the agenda for upcoming events such as the AI for Agri 2026 Global Conference.


Overall tone and its evolution


– The conversation began with a formal, optimistic tone, celebrating policy milestones and the launch of Mahavistar[9-13][20-27].


– It then shifted to a technical and cautionary tone, emphasizing the need for trustworthy data, ethical governance, and the challenges of scaling AI responsibly[53-56][68-71].


– Mid-discussion the tone became inclusive and advocacy-driven, focusing on gender equity, farmer participation and the importance of human-in-the-loop checks[83-86][224-236].


– Towards the end, the tone turned collaborative and visionary, highlighting global partnerships, open-source standards, and an aspirational “100 diffusion pathways” future[74-78][304-311].


– The closing remarks were inspirational and hopeful, likening AI’s potential to historic agricultural breakthroughs and urging collective action[311-314][315-317].


Overall, the dialogue moved from celebration of achievements, through sober reflection on risks and responsibilities, to a forward-looking, partnership-focused call to action.


Speakers

Dr. Devesh Chaturvedi


Area of Expertise: Agricultural policy, digital agriculture, AI integration in farming


Role / Title: Secretary, Ministry of Agriculture and Farmers Welfare; leads national effort in agriculture and farmer welfare [S1]


Vikas Chandra Rastogi


Area of Expertise: Agricultural administration, AI policy, public sector leadership


Role / Title: Secretary, Ministry of Agriculture and Farmers Welfare, Government of Maharashtra; moderator/host of the session [S4][S5]


Dr. Soumya Swaminathan


Area of Expertise: Agricultural research, science-based policy, women’s empowerment in agriculture


Role / Title: Chairperson, Dr. M.S. Swaminathan Research Foundation [S6][S7]


Shankar Maruwada


Area of Expertise: Digital public infrastructure, open-source platforms, AI ecosystems for agriculture


Role / Title: Co-Founder and CEO, EkStep Foundation; involved with DPI initiatives [S8][S9][S10]


Johannes Zutt


Area of Expertise: International development, financing AI solutions for agriculture


Role / Title: Regional Vice President, World Bank [S11][S12][S13]


Devendra Fadnavis


Area of Expertise: State-level governance, agricultural innovation, AI policy implementation


Role / Title: Honorable Chief Minister of Maharashtra [S14][S15][S16]


Additional speakers:


Ramesh Chaturvedi – Introduced as Secretary, Ministry of Agriculture and Farmers Welfare in the opening remarks (likely a mis-transcription of Devesh Chaturvedi).


David Rupadnavi – Referred to as the Honourable Chief Minister of Maharashtra in the introductory segment (likely a mis-transcription of Devendra Fadnavis).


Jonas Jett – Mentioned as Regional Vice President, World Bank (same role as Johannes Zutt).


Full session report: Comprehensive analysis and detailed insights

The session “Using AI for Food and Climate Resilience” opened with senior officials from the Maharashtra government, the Ministry of Agriculture and Farmers’ Welfare, the World Bank and research foundations welcoming the audience and framing agriculture as a sector at a critical turning point. Climate change is already making farming riskier – water tables are falling, soil health is deteriorating, supply chains are fragile and markets volatile – yet rapid advances in digital tools and artificial intelligence (AI) present a unique opportunity to secure food and nutrition, raise farmer incomes and stabilise the economy[9-12][41-48]. The gathering was convened to move “from vision to implementation” by institutionalising AI within India’s agricultural systems at scale[112-119].


Maharashtra’s response is embodied in the Maha Agri AI Policy 2025-2029, which places AI at the core of farm advisory services, market information, data exchange, product traceability, research and capacity-building[20-27]. The state’s flagship AI-powered platform Mahavistar now reaches more than 2.5 million farmers, delivering personalised advisories in Marathi and the tribal language Bili, pest alerts and seamless access to government schemes[23-25][61-66]. A parallel national platform, Bharatvistar, integrates weather, crop, pest, market and scheme information on a single app (currently English and Hindi, with all regional languages to be added via the Bhashini language-technology initiative within three to six months)[138-149]. The Chief Minister also highlighted AgriStack as a platform that operationalises AI beyond advisory services, giving farmers seamless access to various schemes and services[53-60], and announced a publicly available blueprint for a traceability DPI[53-60].


Chief Minister Devendra Fadnavis stressed that AI must rest on trusted data, ethical governance, transparency, auditability and public accountability, and announced the rollout of the open, consent-driven Maha AgEx data-exchange to enable traceability and end-to-end visibility across value chains[53-58][59-60][68-71]. He called for rapid scaling from pilots to full-scale projects that serve millions of farmers, invited venture-capital funds, impact investors, multilateral development banks and philanthropic foundations to partner in the state’s agri-tech ecosystem[87-89], and highlighted a global AI-use-case compendium released on 17 February 2026[74-78].


In the panel, Dr Devesh Chaturvedi outlined the national digital public infrastructure: close to nine crore (≈90 million) unique farmer IDs have been created, each linking land, crop, soil-health and scheme eligibility data, thereby eliminating fragmented applications[140-146]. He described how these IDs are being integrated with Mahavistar and Bharatvistar to deliver consent-based, hyper-local advisories, and noted early-stage predictive models built on a century of India Meteorological Department data that have already improved monsoon-related sowing decisions[155-157].


Johannes Zutt (World Bank) stressed the need for government-led AI governance, affordable connectivity and digital literacy, while highlighting the private sector’s capacity to create niche applications and the Bank’s role in financing and “truth-testing” AI outputs[184-190][194-200].


Dr Soumya Swaminathan warned that most women farmers lack land titles, risking exclusion from data-driven services, and called for AI solutions that reduce drudgery, embed women’s feedback and retain a “human-in-the-loop.” She cited the Swaminathan Research Foundation’s Women Connect app for fisher-women as an example of gender-focused digital tools[241-245] and emphasized the need to incorporate women’s land-ownership information early in AI datasets[227-230]. Mahavistar’s voice-based, multilingual interface – built to work on feature phones for illiterate users – directly addresses these barriers[304-311].


Shankar Maruwada linked the historic Haber-Bosch breakthrough and a railway “digital rails” analogy to today’s open-protocol (e.g., Beckn) DPI, arguing for interoperable, modular AI ecosystems that can be replicated across states and countries. He announced a goal of establishing 100 diffusion pathways worldwide by 2030, each delivering safe, population-scale impact[304-311][315-317].


The panel concluded with an invitation to the AI for Agri 2026 conference in Mumbai (22-23 February) and a reminder that the next session would begin shortly. Participants called on central and state governments, international agencies, investors, academia and farmer organisations to collaborate in turning AI from a promising technology into a trusted, inclusive public good that can deliver climate-resilient, food-secure agriculture at population scale[101-103][321-326].


Session transcript: Complete transcript of the session
Vikas Chandra Rastogi

Mr. Ramesh Chaturvedi, Secretary of Ministry of Agriculture and Farmers Welfare. Sir, please come onto the stage. Our Honourable Chief Minister, Mr. David Rupadnavi is here. Good morning, sir, and welcome. May I also invite Mr. Johannes Jutt, Regional Vice President, World Bank, onto the stage, please. Honourable Chief Minister of Maharashtra, Mr. Devendra Fadnavis, Honourable Minister. Shri Ashish Elarji, Shri Nitesh Raneji, our distinguished guests from India and around the world. Very good morning. On behalf of the government of Maharashtra, I welcome you to the session on Using AI for Food and Climate Resilience. Agriculture is at a turning point. Climate change is making farming riskier, resources are limited and markets are changing quickly. However, there is an opportunity.

Digital tools and AI are advancing fast. Our goal is not just to use AI tools. We must build intelligence into our public systems to help everyone. For India, the change is essential. It is the key to food and nutrition security, higher farmer incomes and a stable economy. India is a country with a strong economy. India has shown that digital systems work when they are open and well-governed. Our next step is to bring AI into this framework in a responsible way. Under the leadership of the Honorable Chief Minister of Maharashtra, the state has launched the Maha Agri AI Policy 2025-2029. This policy uses AI for farm advisory services, market information, data exchange, product traceability, innovation and research, and creating capacities of stakeholders.

We are moving beyond pilots to projects at full scale. Mahavistar is the country’s first AI-powered information and advisory service. Today, Mahavistar is being used by more than 2.5 million farmers to get advisories in the Marathi language, and recently the first tribal language in the country, Bili, has also been integrated into Mahavistar. AgriStack is helping to bring AI into the market. It is helping farmers to get seamless access to various schemes and services. The Maha AgEx, which is an open, federated and consent-driven architecture for data exchange, is helping us to bring diverse datasets together to get us a big picture. Agriculture is now a key part of the India AI Mission. We are proud to work with the Government of India to lead this change.

I want to thank the Ministry of Electronics and Information Technology, the Ministry of Agriculture, the EkStep Foundation, the World Bank, the MS Swaminathan Research Foundation, the Gates Foundation and all our partners for their support. It is now my duty to invite our Honourable Chief Minister to the stage. He will share his vision for using AI to strengthen our food systems and protect our climate. After the address of the Honourable Chief Minister, we will have a panel discussion with our distinguished panelists. Welcome.

Devendra Fadnavis

A very good morning to all of you. Shri Devesh Chaturvedi ji, Rajesh Agarwal ji, Vikas Rastogi ji. Mr. Jonas Jett, Srimati Swaminathan, Shushankar Maruwada, my colleagues, Shriashi Shailar ji, Nitesh Rane ji. All the dignitaries present here, namaskar and good morning to everyone. It is my privilege to address this distinguished gathering at the India AI Impact Summit. And this important session. On AI in agriculture. We meet at a very defining moment across the world. Food systems are under strain. Climate volatility is intensifying. Water tables are falling. Soil health is deteriorating. Supply chains are fragile. And global markets are unpredictable. For countries from the global south, agriculture is not merely an economic sector. It is livelihood, social stability and national security.

India understands this very deeply. And under the visionary leadership of our Honorable Prime Minister Narendra Modi, India has placed digital public infrastructure and responsible AI at the center stage of national development. The India AI Mission is about using technology to deliver inclusion, transparency and scale. Today, agriculture must sit at the heart of this mission. Over half a billion Indians depend directly or indirectly on agriculture. Yet smallholders face fragmented information, rising input costs, climate uncertainty and limited access to credit and markets. Traditional extension systems, however committed, cannot match the scale and the speed required. Artificial intelligence changes this equation. AI can provide hyper-local advisories, predictions about the future of agriculture, credit scoring based on crop intelligence, transparent traceable supply chains and real-time market advisories.

But let me emphasize, AI is not magic. As the Honorable PM said in his inaugural session, AI must be built on trusted data, ethical governance, and public accountability. Without trust, scale will not happen. Last year, Maharashtra made a very clear and decisive strategic decision: AI in agriculture must not remain confined to demonstrations or pilots. It must reach millions. Under our Maha Agri AI Policy 2025-2029, we adopted a policy-led, ecosystem-driven model built on openness and interoperability. Allow me to share what this has meant in practice. As rightly told by our Secretary, Mahavistar, our AI-powered mobile platform, delivers multilingual personalized advisories, market intelligence, pest alerts, and access to government services, with more than 2.5 million downloads, acting as a digital friend to all these farmers.

This demonstrates one thing very clearly. Farmers are ready for AI when AI is designed for them. AI-based pest surveillance with CROPSAP integration is our mantra. By integrating geospatial analytics with pest surveillance, we have delivered early warnings to cotton-growing farmers, reducing crop vulnerability and finance risk. This is predictive governance in action. The agriculture data exchange is also one thing which is defining this step. We are building a statewide interoperable agriculture data exchange based on open standards and strong data governance. Data must empower farmers, not exploit them. Traceability digital public infrastructure: in today’s global markets, transparency is a mantra. We are unveiling a blueprint for a traceability DPI that will ensure end-to-end visibility across value chains, enhancing food safety, export competitiveness and consumer trust. And this is not proprietary.

It is being designed as a replicable public infrastructure model for India and the entire global south. In partnership with the India AI Mission, the Government of Maharashtra, the World Bank and Wadhwani AI, we launched a global call for AI use cases in agriculture. The resulting compendium of real-world AI applications in agriculture was released in Delhi on 17th February 2026. This compendium documents successful AI deployments from Africa, Asia, Latin America and beyond. India is convening global knowledge for the benefit of the global south. As we move towards AI for Agri 2026 in Mumbai, our vision rests on four pillars. AI must be transparent, auditable and explainable. Open and interoperable digital infrastructure. Innovation cannot scale in silos.

Investment and scaling. Technology without capital remains just a theory. And inclusion and gender equity is also a mantra. 2026 is the international year of women in agriculture. AI solutions must be designed with women farmers, not merely for them. Maharashtra today presents one of the most compelling agri-innovation ecosystems globally: 150 lakh hectares of cultivated land, diverse agro-climatic conditions, leading agriculture universities and AI research centres, a vibrant start-up ecosystem, a clear regulatory framework, and single-window facilitation for investors. We invite venture capital funds, impact investors, multilateral development banks, corporate innovation arms, and philanthropic foundations to partner with us. And in this partnership, we initiate a global partnership between Maharashtra and the United States to develop and leverage the technology to create a future for all.

Maharashtra is a partner of the International Development Fund: co-developing traceability DPI modules, investing in agri-tech startups, supporting digital literacy, especially among women farmers, and building capacity in rural AI ecosystems. When you invest in Maharashtra, you invest in scalable solutions for emerging economies worldwide. Food security, climate resilience and AI governance are deeply connected. Countries that master AI-enabled agriculture will secure farmer incomes and strategic stability.

India has the scale, DPI and democratic governance model to demonstrate how AI can be deployed responsibly at population scale. Maharashtra is proud to be the laboratory of that ambition. Friends, this satellite session is a declaration. We will move from pilots to platforms, from fragmented data to interoperable systems, from experimentation to execution, from intention to investment. The government of Maharashtra stands ready to collaborate with the government of India, with states, with global institutions, investors, researchers and farmer organizations. Let us ensure that AI becomes a force for

Vikas Chandra Rastogi

Thank you, sir, for your visionary address. You always continue to inspire us to aim higher and achieve better. And under your leadership, I can assure you the Agriculture Department will rise to the challenge and serve the aspirations of more than 15 million farmers of the state of Maharashtra. Thank you so much, sir. We will now start the panel discussion in a few moments. Thank you once again. Dr. Devesh Chaturvedi, he is the Secretary, Ministry of Agriculture and Farmer Welfare. Dr. Chaturvedi leads our national effort in agriculture and farmers welfare. Mr. Johannes Jett, he is the Regional Vice President, World Bank. Mr. Jett brings a vital global perspective on development and finance from the World Bank. Ms.

Soumya Swaminathan, she is the Chairperson of the Dr. M. S. Swaminathan Research Foundation. Dr. Swaminathan is a global leader in science, a champion for sustainable research and a strong advocate for mainstreaming women farmers’ role in agriculture. Mr. Shankar Maruwada, he is the Co-Founder and CEO of the EkStep Foundation. He is a pioneer in building digital public infrastructure that empowers people at scale, and I am very proud to say that the Government of Maharashtra and the EkStep Foundation together have brought out Mahavistar, which more than 2.5 million farmers are using today to get the advisories and information that they need on a daily basis. The objective of this panel discussion is to move from vision to implementation.

Specifically, we will deliberate on how to institutionalize AI within agriculture systems at scale, how to ensure inclusion, especially of women farmers and smallholders, how to build interoperable, trustworthy and sustainable AI governance ecosystems, and how to strengthen collaboration between the centre, states, global institutions, industry and academia. The session is also an important precursor to the AI for Agri 2026 Global Conference, where we will continue these deliberations in greater operational depth with governments, investors, innovators and development partners. The AI for Agri conference is being held in Mumbai on the 22nd and 23rd of February at the Jio World Convention Centre. With this context, let’s begin our discussion. My first question is to Dr. Devesh Chaturvedi. Sir, under your leadership, the ministry has taken significant steps in advancing the Digital Agriculture Mission and operationalizing the AgriStack framework.

You are laying a strong digital foundation for the sector. As we now look at integrating AI more systematically into agriculture, how do you envision the centre-state collaboration framework, specifically to ensure that AI deployments are aligned with the national architecture while allowing states the flexibility to innovate based on local agro-climatic and socio-economic contexts? And finally, how can we institutionalize this collaboration to achieve population-scale impact while maintaining interoperability and data trust? Thank you.

Dr. Devesh Chaturvedi

A lot of questions in the same question. So what I’ll do is I’ll first take you through the initiatives. First of all, we deeply appreciate the leadership taken by Maharashtra, under the leadership of the Honourable Chief Minister and the agriculture department. They have done exceptional work in the Digital Agriculture Mission by developing farmer IDs and the digital crop survey, and they also launched Mahavistar as a precursor to Bharat Vistar. And recently, on the 17th, the Government of India also launched one of the first integrated AI-based systems for farmers, Bharat Vistar, which presently provides services through an Android-based app as well as through mobile telephony: weather advisories, ICAR-based crop advisories, pest advisories, market information on the various agricultural produce traded in the mandis, and lastly the Government of India schemes.

Now, why is AI important in agriculture? We started with the digitalization of services. We had DBT, we had online systems where a common person could apply through the common service centres or through mobile apps. But what we felt was that while we had initiated this process to remove bureaucratic red tapism, we were moving towards a sort of digital red tapism, because within our ministry different schemes had different apps and different ways of selection, and within the state, too, horticulture had one database of farmers, agriculture had another, animal husbandry had another, crop insurance had another.

So basically, a farmer who has to avail so many services was, we felt, getting lost in which app to use. And sometimes it becomes more difficult to avail services or get advisories through online systems than to go to a person and say, okay, tell me how to do it. So the whole idea was that once we have this AI-based system, we have the same platform for different applications and different advisories, at the click of a button or maybe just by voice. So what we have initially, in the first phase of the artificial intelligence system, Bharat Vistar, or Mahavistar in Maharashtra, are the crop advisories, the weather advisories, schemes information, how to apply and what the status of an application is, and also the mandi rates.

All these have been put on one platform. Presently it is working in English and Hindi, but in the next three to six months we will extend it to all the Bhashini-supported languages. And the next step is, as we mentioned, that the states are working together with us on the digital public infrastructure. Close to 9 crore farmer IDs have been developed. So what is a farmer ID? You must have read the statement of the Honourable Finance Minister that DPI is the new UPI. AgriStack, which is the part of DPI for agriculture, gives each farmer a unique farmer ID with, at the back end, all the crops the person has sown, the land available to that person, the share of the land, and the soil health card details if a soil health card has been issued.

With these basic details available on the system, the farmer is empowered through that ID to avail services, because the details are already approved by the relevant authorities in the government. So the authorities giving the services are not required to cross-verify the credentials of the farmer against the record of rights, or the Girdawari, or whatever it is called in the different states. Every state is working with us, and Maharashtra is one of the leading states here, towards saturation of farmer IDs and the crop survey. And once this is there, this AI will further transform into very tailored advisories. A person calls and gives the farmer ID or Aadhaar.

And at the back end, based on consent, we will access the details of where the farmer is from, what crop is being grown, and what the soil health conditions are. And very targeted advisories will be given, which will be made operational in the next three to six months. So instead of pushing data which may not be of interest to the farmer, very specific, tailored data for that farmer will be available, based on the integration of the digital public infrastructure with Bharat Vistar. And the third aspect will come when we do the predictive models. We tried that, and you may remember that in the inaugural session the Google CEO mentioned the predictive model which we ran with about 3.8 crore farmers.
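[Editor’s note] The consent-gated, farmer-ID-keyed lookup described above can be sketched in a few lines. This is a hypothetical illustration only: the registry, the advisory table, `tailored_advisory` and all the sample data are invented for clarity and are not the real AgriStack or Bharat Vistar API.

```python
# Hypothetical sketch: a farmer ID resolves, with consent, to land/crop/soil
# records, which then select a tailored advisory. All names and data are
# illustrative, not the actual AgriStack design.

FARMER_REGISTRY = {
    "MH-0001": {
        "consent": True,
        "district": "Nashik",
        "crop": "onion",
        "soil_health": {"pH": 7.8, "nitrogen": "low"},
    },
}

ADVISORIES = {
    ("onion", "low"): "Apply a basal dose of nitrogen before the next irrigation.",
}

def tailored_advisory(farmer_id: str) -> str:
    record = FARMER_REGISTRY.get(farmer_id)
    if record is None:
        return "No farmer record found; please register for a farmer ID."
    if not record["consent"]:
        # Without consent, only generic (non-personalised) advice is served.
        return "Generic advisory: consult your local extension office."
    key = (record["crop"], record["soil_health"]["nitrogen"])
    return ADVISORIES.get(key, "No tailored advisory available for this crop yet.")

print(tailored_advisory("MH-0001"))
```

The point of the sketch is the consent check sitting between the ID and the personal data: the same ID, without consent, yields only a generic answer.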

We used 100 years of IMD data and a model to predict the monsoon for the next month and the next week. That prediction was fairly accurate, and we got feedback that farmers did take decisions to sow and to irrigate based on the predictions which were sent. Now we will expand the predictive models to more advisories on the market situation and the weather situation, which will help improve farmers’ decision-making so that they can increase their productivity and reduce costs. That is the whole idea of AI in agriculture. We hope that more and more farmers will adopt it. It will not be exactly a replacement but rather an addition to the human extension services, which, we find, are not able to reach all farmers because of the resource constraints of each state.

The extension machinery, the KVKs, all our state extension machineries find it very difficult to reach each and every farmer, because we cannot have a person sitting in each village reaching out to each farmer. But AI, along with the digital public infrastructure and the mobile and internet penetration in rural areas, will ensure that that gap is closed and farmers get more and more access to services and advisories. That is the whole idea of centre and state interoperability. I hope I have answered most of the questions.

Vikas Chandra Rastogi

As you rightly mentioned, AI systems are acting like a digital friend of the farmer: they are available at any point of time, through multiple channels, and in a language farmers understand. In FEDSAR, with the ministry’s assistance, we were able to get access to multiple images of pests and diseases, and with IIT Bombay we have been able to develop models where farmers can take a picture, find out what pest or disease it is, and then what is to be done, based on the knowledge created by agriculture universities and ICAR institutions. So I think there is a great opportunity for us: the national government has the scale, and the states have their own specific skill sets and knowledge. Together, if they combine, I think we can reach out to everybody in the farming sector.

Thank you, sir. I will now move on to Mr. Johannes Zutt, the Regional Vice President of the World Bank. The World Bank has been a long-standing partner to both the Government of India and the Government of Maharashtra; we have multiple projects going on concurrently, as we have had in the past as well. These projects have been aimed at strengthening agriculture systems, climate resilience and institutional capacity. As we move into an era where AI technologies are evolving at unprecedented speed, how can development partnerships adapt to remain agile and responsive? In particular, how can we structure programs and technical assistance models that provide just-in-time support to central and state governments, enabling them to experiment, iterate and scale AI solutions responsibly?

Johannes Zutt

It is a pleasure to be here today. We are on the cusp of a major revolution in how support to farmers and agriculture happens. I actually grew up on a farm; I worked on a farm from the ages of 10 to 21. I think every hour I wasn’t in school, I was working on the farm. In some ways it feels paleolithic, because we didn’t have computers, we had telephones that were connected to wires, and our ability to get information about what was happening around us was extremely limited. We spent a lot of time trying to find out the things that today you can find out very, very quickly using AI for agriculture. And that is truly revolutionary and empowering for farmers.

But to make that work for farmers, a lot of things need to go right, and I think it’s worth reflecting a little on the different roles that different actors in the ecosystem have, starting obviously with government. My colleague mentioned a number of these things earlier. The government’s responsibility is principally the foundations: things like the governance of AI, interoperability, accessibility, and obviously ensuring that educational programs include appropriate skilling in the use of digital services. This is a big challenge in countries like India, where frankly there are still people who don’t have sufficient literacy to read what comes over a basic smartphone. Ensuring that the research and extension provided through these AI platforms is credible, is trustworthy, is backed by science.

I think that’s also extremely important. Of course, farmers will find out if they aren’t, but at high expense, right? So we want to make sure that they’re not being advised to do things that are negative for them. And then also looking at the cost of service, the connectivity: what does the farmer actually need to be able to link into these different types of platforms that give information? Because, of course, we’re often also talking about farmers who have very, very few assets and who may be essentially unable to stay permanently connected, or even easily connected, to the internet. They’re going to have very basic smartphones, et cetera. So the government has a lot of work to do in all of those areas.

Then you can look at what the private sector can do. Now, one thing that the government needs to do is encourage and crowd in private sector capacity and capital. But once we turn to the private sector, what is the private sector’s principal advantage? I think there’s a lot of creativity in the private sector. The actual applications being developed are developed by individuals in the private sector with a passion for specific issues that are constraining farmer success. And that creativity will result in a number of different applications aimed, in most cases, at helping farmers overcome certain hurdles they face. We can let a thousand flowers bloom there and see what actually takes root.

And it’s amazing what you start to see. Just yesterday, I was learning about an application in Morocco, developed by a tomato farmer, which can give advice about how much water tomato plants need simply from a picture of the current tomato plant. Take a picture and it tells you how much water you actually need to give this plant, which, obviously, in a water-stressed environment is vital, vital information. And then there are roles for institutions like my own, the World Bank Group, which can help provide some of the financing that develops these applications, and also the foundational backbone for artificial intelligence. And we can also play a role at the advisory end, helping to truth-test, if you like, the information coming through the different applications emerging from the AI sandbox in different contexts, to make sure it is actually providing information that is useful to the end beneficiary and enhancing productivity at the farm level.

Thanks.

Vikas Chandra Rastogi

I think you have rightly pointed out the role of innovation and research. What we see is that we require high-quality, robust data to build upon, and as the Honourable Chief Minister mentioned, MahaEGX is one step in that direction, wherein we bring diverse data sets and make them accessible to researchers, academic institutions, departments and also start-ups. Many of these start-ups will be showcasing their innovations at the AI for Agri conference in Mumbai, so we request all of you to come and see for yourselves the kind of excitement they have and the kind of solutions that are envisaged. I have one supplementary question for you: how do you see platforms such as the AI Impact Summit and the AI for Agri global conference contributing to deeper global collaboration and South-South knowledge exchange in this domain?

Johannes Zutt

Thank you for that additional question. Obviously, India is in a great position to lead the development of AI, particularly for developing countries, where there are still significant challenges in helping poor people escape poverty permanently. India has demonstrated digital innovation for a long time already. It has an enormous population with huge variety. The challenges of bringing farmer-appropriate data to the farmer’s fingertips in India are enormous; I was going to say India is a microcosm of the rest of the world, but it’s hardly a microcosm, it’s so huge. Because you have so many languages, so many different regions, so many different types of crops, and the starting conditions at the farm level are so incredibly varied, figuring out how to make AI work at the farm level in India will automatically yield a large number of spillover learnings for other countries around the world.

And because India, after China and the United States, is the country in the world best positioned to push all of this work forward, and because it is itself a developing country, it is very, very clear that it will have a central role to play in South-South learning, for those reasons.

Vikas Chandra Rastogi

Thank you so much. I move on to Dr. Swaminathan. Dr. Swaminathan, your father, Professor M.S. Swaminathan, played a historic role in shaping India’s agricultural transformation during the Green Revolution, ensuring food security at a critical juncture in our history. Today, as we speak of a new phase of transformation driven by AI, we are again at an inflection point. You have consistently championed science-based policy, sustainability and the empowerment of women farmers. With 2026 being recognized internationally as the year of women farmers, how can we ensure that AI-led agricultural transformation strengthens women’s agency, knowledge access and climate resilience? And what institutional safeguards and design principles must be embedded today so that this new technological revolution becomes equitable, farmer-centric and grounded in scientific integrity?

Dr. Soumya Swaminathan

Thank you very much for that question, Vikasji. Not only is this year the International Year of the Woman Farmer, but we know that agriculture itself is increasingly being feminized, with many men leaving farming to the women and migrating to the cities for other opportunities. So it is really essential to put women at the centre of all that we are discussing. The Chief Minister today gave us a wonderful vision of what the future can be, provided, of course, as you said, that the guardrails, the institutions, the safeguards and the design principles are thought about from the very beginning. My father, Professor M.S. Swaminathan, used to say that the Green Revolution was not only about the seeds. Of course the seeds played a very big role, the high-yielding varieties, but it was about the entire ecosystem and the institutions developed at that time. That included the outreach (later on, the Krishi Vigyan Kendras were developed), but also access to credit, water, fertilizers, education and empowerment, and it ultimately became a success because farmers realized its potential and took it on.

So what he used to say is that no technology is pro-poor or pro-rich, pro-women or against women; it is how we use that technology. So, as you said, the inflection point today is how we use this very powerful technology that has come to us. I think there are a few points here to make sure, particularly, that women farmers are not left behind. The first important fact is that only a minority of women in India have their name on the land document; mostly it is in the man’s name. Deveshji was telling me today that this is improving, and that the latest census shows that perhaps at least a quarter of the properties are also in the name of women, either jointly or solely. But that still means that three-fourths of them are not.

And a system that operates basically on publicly available data will then leave out those whose data are not available. So I think it is really important, at the early stages themselves, to think about how women’s data can be incorporated, because the algorithms are fed by the data we have. All of these advisories may be very suitable for a man operating a tractor on a farm, but not at all relevant for a woman who is still working with outdated implements, trying to till her land. This is particularly true in more remote and tribal areas, where women do a lot of the agriculture, such as millets. Mostly it is women who grow millets.

And there is still a lot of mechanization which is absent completely; it is all still done using traditional methods and tools, and it involves a lot of drudgery. So one of the benchmarks I would look at is: is it reducing the drudgery and the workload on women farmers? Is AI helping to do that? I think we also need to look at certain indicators of success. And you mentioned science. I am a medical researcher, and the way we evaluate products is by doing clinical trials, by examining the data and the evidence, and then recommending a product for wider use. So again, a note of caution: as we roll it out, we certainly need innovation, but we also need to do the evaluation, looking at inherent biases, looking at who is being excluded, looking at whether there are unanticipated risks or side effects that we did not know about. But most of all it is this inclusion; we do not want those who are already left behind to be further left out. So ongoing research, data collection and feedback loops matter, and most importantly the voices of those for whom we are developing all this. In this room, I don’t think we have any farmers or women farmers, so we are all discussing from what we know. But if you are the farmer working there, you know the constraints under which you are working. So women farmers, and farmers in general, must have a role; they must be part of the committees that evaluate, make recommendations or suggest improvements. It has to be an iterative process. Any technology is only as good as the application for which it is developed. I’ll give you one example of an app that the M.S. Swaminathan Research Foundation developed for fisher women.

We had a very successful app for fishermen called the Fisher Friend Mobile App, which won the UN Tech for Nature award last year. But fisher women were, as usual, left out, and so the Women Connect app gives them, on a tablet, the information they need to sell, because once the fishermen have come back from sea, it is the women who have to do all of the post-harvest work; the same is true for crops, fruits and vegetables as well. So that connection to the market, of course information about pests and pathogens, when to buy what and what inputs to use, but also being able to organize themselves. There are many FPOs now, and FPCs and SHGs made up of women farmers; empowering them means giving them the knowledge and tools.

And the last thing I would say is that we still need humans in the loop. I do not think completely running everything by machines is going to solve our problems; I think that is risky. And in a country like India, we also need employment. I don’t know how many of you have seen the film called Humans in the Loop; it is about a tribal woman from Jharkhand who raises questions about the algorithm, a very thought-provoking film. So humans in the loop are going to be important. We have our Krishi Sakhis and so on; we need to empower them with these tools. So AI and all these digital tools, if they are used in addition to the traditional knowledge and wisdom that people have, augmenting it and giving people, at the right time and the right place, the knowledge they need, can take us a very long way.

Thank you.

Vikas Chandra Rastogi

Thank you, madam. You have rightly pointed out the need to be more sensitive while developing systems, to ensure inclusivity, and to ensure that those for whom the systems are being developed are in the loop and are being consulted. In fact, the feedback mechanism we have built into Mahavistar takes care of those requirements. I am also very happy to share that the Government of Maharashtra and the M.S. Swaminathan Research Foundation are working together on some of these issues: how to bring women’s rights in farming to centre stage, how to create bio-happiness using our universities and educational systems, and what kind of nutritional security we must look for. Because we have food security, but it is nutritional security that we must aspire to.

So we are happy to have support and assistance from MSSRF in that direction. My final question is to Mr. Shankar Maruwada. Mr. Shankar, EkStep has played a foundational role in shaping India’s DPI landscape through open-source platforms such as Sunbird, which has powered large-scale systems like Diksha and Mahavistar, and through open network initiatives built on the Beckn protocol. These efforts have demonstrated how open standards and interoperable architecture can enable the population-scale transformation we are already seeing today. As we now enter the era of AI-driven public systems, how should we think about standardizing AI-based ecosystems in a similar spirit? How can we bring DPI into AI? And what architecture and governance principles are required to ensure interoperability, trust and sustainability in AI deployments across sectors such as agriculture?

Shankar Maruwada

Again, a whole lot of questions, but let me make my best attempt to answer them. More than 100 years ago, the world faced what was known as the Malthusian crisis: Malthus, the economist, predicted that if we continued to grow in the same way, we would run out of land, run out of soil. We were a billion and a half people then; we are 8 billion now, and most of us may not even have heard of the Malthusian crisis. What happened? Someone called Haber and someone called Bosch created a miracle. Haber synthesized ammonia using high pressure and temperature, and Bosch put it into an industrial process. That phenomenon is now historically known as pulling bread out of air. It took a lot of effort and, as Soumya said, the creation of a massive ecosystem.

Germany, which pioneered this, lost that race to the US, because the US did a better job of diffusing the technology safely to farmers. They created the discipline of agricultural engineering. They created institutions like the Fertilizer Development Center. They held technology demonstrations for farmers to show them how synthetic ammonia could be used. By the way, 50% of the nitrogen in our bodies comes from synthetic ammonia; that’s a fact. So we owe a lot to Haber and Bosch. China then took it on in the 80s, buying 10 big plants from Kellogg, training 300 million farmers, showing them how to use synthetic fertilizers, and went on to be a global leader in agriculture. India is at a point where, if we learn the lessons from such past experiences, our Green Revolution, our DPI experience, we are at a pivotal point where the equivalent of pulling bread out of thin air is pulling intelligence from the earth and providing it to the farmer.

This is again not science fiction. Mahavistar, the pioneer, along with Bharat Vistar, has taken the first steps towards this. Mahavistar was designed, to build off what Soumya has said, with inclusion in mind; inclusion and diversity were not an afterthought, because to solve not just Maharashtra’s problems but for India’s scale and diversity, we need to think of the last person, the most discriminated against, in the remotest part of India, and design systems that work for them. We call that DPI. Now let me give you a specific example of this in Bharat Vistar. Right from the beginning, the design spec was that an illiterate farmer, to build off John’s point about digital literacy, with a feature phone, not a smartphone, should be able to talk in his or her native language and native dialect.

Marathi itself has many dialects, right? Talk on the phone the way she is comfortable talking to another person: ask a question, have a conversation, get answers. That process took us the better part of nine months. Why? Because it’s not just AI. It’s data. It’s processes. It’s training the farm extension workers. It is having trust: will this work? What about the costing? Will I blow up my entire state budget on a model? Do I have autonomy? Can I switch models in and out? These are very, very difficult questions. It took a partnership with a whole lot of people. The Government of Maharashtra led the effort, but the IndiaAI Mission, Bhashini, IIT Madras, IIIT Hyderabad, the World Bank, Google and many other providers all chipped in their little part of the solution. Now here is the best part: because we all collaboratively invested in figuring out a solution there, that solution could be deployed in Bharat Vistar with more confidence, easily. Again, the same challenges that Secretary Chaturvedi talked about: do we have the data? He used a very nice phrase, digital red tapism; our data is in different formats. What matters is the intent of the Government of India, which triggered the process and allowed Bharat Vistar to be launched the day before. It’s a start. Data will get better, the systems will get better, usage will improve, that will generate more data, and over the years the ecosystem will be built. This we know from our experience. What makes this happen? What is the secret sauce? The design principles. They are the same as DPI’s; what worked for DPI, we are taking those same principles. One: open, interoperable systems. Think networks, not just portals and platforms and siloed, fragmented systems. What’s the best example of this? The railways in India. We have such a vast landscape, but the rails are common; every state can decide what it wants to move: private, public, defence, farming. The Indian Railways just provides a backbone that allows everyone to do this. There was a time when we had different rail gauges. Right now that sounds so silly, but there was a time like that. But India is showing that we don’t have to repeat those early mistakes in digital also.

By creating interoperable networks based on open protocols like Beckn, and by collaborating with each other (one of us bringing data, somebody bringing technology, somebody bringing policy, somebody bringing research), these collaborative open networks, with the launch of Bharat Vistar, put India in a very unique and responsible position. Unique, because we have these open rails and the experience of DPI. Responsible, because it is a start. Unlike the technologies of the past, where you perfect the technology and then deploy it, with AI you deploy something minimal to start, and then it evolves: models get better, data gets better, usage gets better, and it gets better and better over time. That is the unique juncture we are at in India. What will that mean?

When ICAR plugs into this network with its weather and pricing data, the network makes it available to any state that wishes to turn on the supply from ICAR. When the private sector comes out with a very innovative app, say the tomato example that John talked about, any state can say: I like that, I will have that made available to my farmers. For the farmers, who anyway trust the state, it appears in the same app they already use. If the tomato app’s maker wanted, they could go directly to each farmer, but that would be very, very expensive. So shared rails allow us to spread innovation, to diffuse it very quickly through society, keeping in mind both inclusion and innovation, because innovation has to be rewarded.
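[Editor’s note] The shared-rails idea described above (providers register once on a common network; each state merely switches a provider on for its farmers rather than integrating separately) can be sketched minimally. The class, method names and provider names here are invented for illustration and are not the actual Beckn or Bharat Vistar design.

```python
# Illustrative sketch of "shared rails": a common registry of providers,
# with per-state switches instead of per-state integrations.

class OpenNetwork:
    def __init__(self):
        self.providers = {}   # provider name -> service description
        self.enabled = {}     # state -> set of enabled provider names

    def register_provider(self, name: str, service: str) -> None:
        # A provider joins the network once, for everyone.
        self.providers[name] = service

    def enable_for_state(self, state: str, name: str) -> None:
        # A state flips a switch; no bilateral integration needed.
        if name not in self.providers:
            raise ValueError(f"{name} is not registered on the network")
        self.enabled.setdefault(state, set()).add(name)

    def services_for(self, state: str) -> list[str]:
        # What a farmer in this state sees in the single shared app.
        return sorted(self.providers[n] for n in self.enabled.get(state, set()))

net = OpenNetwork()
net.register_provider("ICAR-weather", "weather and pricing data")
net.register_provider("tomato-app", "image-based irrigation advice")
net.enable_for_state("Maharashtra", "ICAR-weather")
net.enable_for_state("Maharashtra", "tomato-app")
print(net.services_for("Maharashtra"))
```

The design point is that the cost of reaching a new state is one registry entry, not a new point-to-point integration, which is what lets innovation diffuse quickly.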

And I want to end with a very simple analogy. When Edmund Hillary climbed Mount Everest, he made a lot of people believe it was possible. When Mahavistar was launched, it made the country believe that it is possible to make AI serve the farmer. And to that extent, the responsibility that Mahavistar, the Maharashtra government and the Government of India have is to create these pathways for the rest of the country, for the other states. At EkStep Foundation, with Nandan Nilekani, we made a declaration two days ago: we would like to see a world by 2030 where there are 100 such diffusion pathways, each created by a different set of people, in different sectors, in different countries and continents, but each inspiring

different AI pathways to safe impact at scale. It is a very exciting vision, a very collaborative vision. If we all get together, we can also create miracles in our own lifetime. Thank you.

Vikas Chandra Rastogi

With that profound thought, we will conclude today’s panel discussion. I thank all the panelists; they have really opened a new vision in front of all of us. And we invite all of you to the AI for Agri conference in Mumbai on the 22nd. Thank you so much. We don’t actually have time for questions; the next session is about to start.

Related Resources: Knowledge base sources related to the discussion topics (36)
Factual Notes: Claims verified against the Diplo knowledge base (4)
Confirmedhigh

“The session “Using AI for Food and Climate Resilience” opened with senior officials from the Maharashtra government, the Ministry of Agriculture and Farmers’ Welfare, and the World Bank.”

The knowledge base lists the Chief Minister of Maharashtra, the Secretary of the Ministry of Agriculture and Farmers’ Welfare (Dr. Devesh Chaturvedi), and a World Bank representative among the speakers, confirming their presence at the discussion.

Confirmed (high)

“Chief Minister Devendra Fadnavis participated in the opening of the session.”

Speaker lists in the transcript show “Honourable Chief Minister of Maharashtra, Shri Devendra Farnavis Ji” taking the stage, confirming his participation.

Confirmed (high)

“Dr Devesh Chaturvedi, Secretary, Ministry of Agriculture and Farmers’ Welfare, was a speaker in the panel.”

The transcript explicitly introduces Dr. Devesh Chaturvedi as Secretary, Ministry of Agriculture and Farmers’ Welfare, confirming his role in the discussion.

Additional Context (medium)

“The discussion was part of the India AI Impact Summit.”

The knowledge base describes the session as being part of the India AI Impact Summit, providing additional context about the broader event in which the panel took place.

External Sources (87)
S1
AI Meets Agriculture Building Food Security and Climate Resilien — May I invite Dr. Devish Chaturvedi, Secretary, Ministry of Agriculture and Farmers’ Welfare. Sir, please come onto the s…
S2
AI for agriculture Scaling Intelegence for food and climate resiliance — Mr. Ramesh Chaturvedi, Secretary of Ministry of Agriculture and Farmers Welfare. Sir, please come onto the stage. Our Ho…
S3
https://dig.watch/event/india-ai-impact-summit-2026/ai-meets-agriculture-building-food-security-and-climate-resilien — May I invite Dr. Devish Chaturvedi, Secretary, Ministry of Agriculture and Farmers’ Welfare. Sir, please come onto the s…
S4
AI for agriculture Scaling Intelegence for food and climate resiliance — -Vikas Chandra Rastogi: Secretary of Ministry of Agriculture and Farmers Welfare, Government of Maharashtra – leads the …
S5
AI Meets Agriculture Building Food Security and Climate Resilien — -Vikas Chandra Rastogi- Secretary, Ministry of Agriculture and Farmers’ Welfare, Government of Maharashtra (moderator/ho…
S6
AI Meets Agriculture Building Food Security and Climate Resilien — Dr. Chaturvedi leads our national effort in agriculture and farmer’s welfare. Mr. Johannes Jett, he is the Regional Vice…
S7
AI Meets Agriculture Building Food Security and Climate Resilien — Thank you, sir, for your visionary address. You always continue to inspire us to aim higher and achieve better. And unde…
S8
https://dig.watch/event/india-ai-impact-summit-2026/ai-meets-agriculture-building-food-security-and-climate-resilien — Dr. Chaturvedi leads our national effort in agriculture and farmer’s welfare. Mr. Johannes Jett, he is the Regional Vice…
S9
https://app.faicon.ai/ai-impact-summit-2026/ai-for-agriculture-scaling-intelegence-for-food-and-climate-resiliance — So we are happy to have support and assistance from MSSRF in that direction. My final question is to Mr. Shankar Maruwad…
S10
AI for agriculture Scaling Intelegence for food and climate resiliance — Thank you, madam. You have rightly pointed out the need to be more sensitive and while developing systems for inclusivit…
S11
AI Meets Agriculture Building Food Security and Climate Resilien — -Johannes Zutt- Regional Vice President, World Bank
S12
AI Meets Agriculture Building Food Security and Climate Resilien — This discussion focused on using artificial intelligence to enhance food security and climate resilience in agriculture,…
S13
How AI Drives Innovation and Economic Growth — -Johannes Zutt: World Bank representative (referred to as “John” in the discussion)
S14
AI Meets Agriculture Building Food Security and Climate Resilien — -Devendra Fadnavis- Honorable Chief Minister of Maharashtra
S15
AI for agriculture Scaling Intelegence for food and climate resiliance — Mr. Ramesh Chaturvedi, Secretary of Ministry of Agriculture and Farmers Welfare. Sir, please come onto the stage. Our Ho…
S16
AI for agriculture Scaling Intelegence for food and climate resiliance — – Devendra Fadnavis- Dr. Soumya Swaminathan
S17
Shaping the Future AI Strategies for Jobs and Economic Development — I start, we were talking about global north, global south. What struck me in this panel is the women are at the peripher…
S18
Driving Indias AI Future Growth Innovation and Impact — Trust infrastructure is as critical as technical infrastructure, requiring institutional safeguards, transparency, and e…
S19
Leveraging AI to Support Gender Inclusivity | IGF 2023 WS #235 — By engaging users and technical communities, policymakers can gain valuable insights and perspectives, ultimately leadin…
S20
https://dig.watch/event/india-ai-impact-summit-2026/ai-for-agriculture-scaling-intelegence-for-food-and-climate-resiliance — A very good morning to all of you. Shri Devesh Chaturvedi ji, Rajesh Agarwal ji, Vikas Rastogi ji. Mr. Jonas Jett, Srima…
S21
AI for Social Good Using Technology to Create Real-World Impact — “I mean, the World Bank recently launched AgriConnect initiative.”[46]. “But from the one concrete example, like in Indi…
S22
Building Population-Scale Digital Public Infrastructure for AI — Well, I do look a little bit with envy at Open AgriNet. Having looked across the work that the foundation does in agricu…
S23
India’s AI Future Sovereign Infrastructure and Innovation at Scale — Ganesh describes successful collaboration through a consortium of 9 academic institutions working via a Section 8 not-fo…
S24
National Disaster Management Authority — Shukla’s vision involved creating “living intelligence” through multi-modal AI models capable of processing structured a…
S25
Indias AI Leap Policy to Practice with AIP2 — As Dr. said, this is really the earthquake of AI and we are at the epicenter. And as you can see, after five days, we ar…
S26
Why science metters in global AI governance — Different contexts require different approaches, and governance must ensure all voices are heard, particularly from unde…
S27
Artificial Intelligence &amp; Emerging Tech — Jörn Erbguth:Thank you very much. So I’m EuroDIG subject matter expert for human rights and privacy and also affiliated …
S28
How the EU’s GPAI Code Shapes Safe and Trustworthy AI Governance India AI Impact Summit 2026 — The discussion aimed to explore how governments and industry can collaborate effectively on AI governance to reduce regu…
S29
Transforming Agriculture_ AI for Resilient and Inclusive Food Systems — The tone was consistently optimistic yet pragmatic throughout the conversation. Speakers maintained an encouraging outlo…
S30
Transforming Agriculture_ AI for Resilient and Inclusive Food Systems — Ambassador Verweij argues that AI and digitalization offer enormous opportunities to enhance agricultural productivity w…
S31
WS #294 AI Sandboxes Responsible Innovation in Developing Countries — Natalie Cohen, Head of Regulatory Policy for Global Challenges at the OECD, positioned sandboxes within broader regulato…
S32
How can sandboxes spur responsible data-sharing across borders? (Datasphere Initiative) — Endorsing a multi-stakeholder approach at regional and continental levels is strongly recommended to maximize the value …
S33
WS #35 Unlocking sandboxes for people and the planet — The level of disagreement among speakers was moderate. While there were clear differences in approaches and perspectives…
S34
AI for agriculture Scaling Intelegence for food and climate resiliance — Maharashtra’s strategic approach represents a shift from pilot projects to population-scale implementation. The state’s …
S35
AI Meets Agriculture Building Food Security and Climate Resilien — Evidence:Because we all collaboratively invested in figuring out a solution there, that solution could be deployed in Bh…
S36
AI Meets Agriculture Building Food Security and Climate Resilien — Chief Minister Devendra Fadnavis presented Maharashtra’s Maha Agri AI Policy 2025-2029, emphasizing the shift from demon…
S37
Building Scalable AI Through Global South Partnerships — Impact:This comment elevated the discussion by providing a philosophical foundation for South-South cooperation based on…
S38
Leveraging AI to Support Gender Inclusivity | IGF 2023 WS #235 — Bobina Zulfa:Sure. Good morning. Can you hear me? Yes, thanks. Perfect. Morning. It’s morning where I am. I understand i…
S39
Digital Policy Perspectives — Their activities are directed towards enhancing gender equality in the digital space, aligning with the aims of Sustaina…
S40
Building a Digital Society, from Vision to Implementation — This discussion demonstrated that small island developing states, particularly Jamaica, possess significant potential to…
S41
Day 0 Event #257 Enhancing Data Governance in the Public Sector — Moderate disagreement level with significant implications – the speakers largely agree on goals (effective data governan…
S42
Data Policy in the Fourth Industrial Revolution: Insights on personal data — As it relates to data policy, one clear reality is that the complexity of the data ecosystem means policy frameworks bui…
S43
How Trust and Safety Drive Innovation and Sustainable Growth — Explanation:Despite representing different perspectives (UK regulator, Singapore regulator, and industry), there was une…
S44
WS #288 An AI Policy Research Roadmap for Evidence-Based AI Policy — Multi-stakeholder partnerships between policy researchers and private sector are essential for surfacing potential harms…
S45
Why science metters in global AI governance — Evidence-based policy making is essential – policies cannot be built on hype or guesswork but must be grounded in facts …
S46
Policymaker’s Guide to International AI Safety Coordination — Teo emphasizes that policymakers must understand what actually works in practice, not just what appears effective on pap…
S47
AI Safety at the Global Level Insights from Digital Ministers Of — Professor Alondra Nelson emphasised the critical importance of establishing scientific ground truth about AI risks amid …
S48
WS #214 AI Readiness in Africa in a Shifting Geopolitical Landscape — Fundamental infrastructure challenges—including limited computing power, inadequate connectivity, and capacity gaps—requ…
S49
Adoption of agentic AI slowed by data readiness and governance gaps — Agentic AI is emerging as a new stage ofenterprise automation, enabling systems to reason, plan, and act across workflow…
S50
Digital Democracy Leveraging the Bhashini Stack in the Parliamen — This discussion centered on the launch of a policy report and developer toolkit for building open and responsible voice …
S51
Digital Democracy Leveraging the Bhashini Stack in the Parliamen — Harleen Kaur outlined the policy framework built around four pillars: treating foundational datasets as public goods, in…
S52
Indias AI Leap Policy to Practice with AIP2 — As Dr. said, this is really the earthquake of AI and we are at the epicenter. And as you can see, after five days, we ar…
S53
Indias AI Leap Policy to Practice with AIP2 — Doreen Bogdan-Martin: …as to how AI can actually benefit people in their lives, their homes, their communities, and t…
S54
AI for agriculture Scaling Intelegence for food and climate resiliance — Maharashtra’s strategic approach represents a shift from pilot projects to population-scale implementation. The state’s …
S55
AI for agriculture Scaling Intelegence for food and climate resiliance — Social and economic development | Information and communication technologies for development
S56
Need and Impact of Full Stack Sovereign AI by CoRover BharatGPT — Evidence:He cites specific government support: free GPUs provided to citizens, funding for model development, and openin…
S57
AI Meets Agriculture Building Food Security and Climate Resilien — Evidence:As Honorable PM said in his inaugural session, AI must be built on trusted data, ethical governance and public …
S58
Responsible AI in India Leadership Ethics & Global Impact part1_2 — Evidence:India stands at this defining moment in its digital journey as AI becomes a powerful engine to innovation and p…
S59
AI Meets Agriculture Building Food Security and Climate Resilien — “AI must be transparent, auditable, and explainable”[96]. “Without trust, scale will not happen”[99]. “based on open sta…
S60
Leveraging AI to Support Gender Inclusivity | IGF 2023 WS #235 — The Organisation for Economic Cooperation and Development (OECD) has developed a set of principles aimed at guiding resp…
S62
DC-SIG Involving Schools of Internet Governance in achieving SDGs | IGF 2023 — Women’s inclusion contributes to gender balance
S63
Artificial Intelligence &amp; Emerging Tech — Jörn Erbguth:Thank you very much. So I’m EuroDIG subject matter expert for human rights and privacy and also affiliated …
S64
How the EU’s GPAI Code Shapes Safe and Trustworthy AI Governance India AI Impact Summit 2026 — The discussion aimed to explore how governments and industry can collaborate effectively on AI governance to reduce regu…
S65
AI Collaboration Across Borders_ India–Israel Innovation Roundtable — So we are currently on a mission to form a global ecosystem across investors, entrepreneurs, executives, researchers, fo…
S66
Opening — The overall tone was formal yet optimistic. Speakers acknowledged the serious challenges posed by rapid technological ch…
S67
Partner2Connect High-Level Dialogue — The tone was consistently optimistic and collaborative throughout the discussion. It began with celebratory announcement…
S68
Summit Opening Session — The tone throughout is consistently formal, diplomatic, and collaborative. Speakers maintain an optimistic and forward-l…
S69
Opening Ceremony — The tone is consistently formal, diplomatic, and optimistic yet cautionary. Speakers maintain a celebratory atmosphere a…
S70
World Economic Forum Town Hall on AI Ethics and Trust — The discussion maintained a serious, critical tone throughout, with panelists expressing genuine concern and urgency abo…
S71
Delegated decisions, amplified risks: Charting a secure future for agentic AI — The tone was consistently critical and cautionary throughout, with Whittaker maintaining a technically informed but acce…
S72
AI and Digital Developments Forecast for 2026 — The tone begins as analytical and educational but becomes increasingly cautionary and urgent throughout the conversation…
S73
Strengthening Corporate Accountability on Inclusive, Trustworthy, and Rights-based Approach to Ethical Digital Transformation — The discussion maintained a professional, collaborative tone throughout, with speakers demonstrating expertise while ack…
S74
Taking Stock — The discussion maintained a constructive and appreciative tone throughout, with participants consistently thanking the N…
S75
New Technologies and the Impact on Human Rights — The discussion maintained a collaborative and constructive tone throughout, despite addressing complex and sometimes con…
S76
Law, Tech, Humanity, and Trust — The discussion maintained a consistently professional, collaborative, and optimistic tone throughout. The speakers demon…
S77
Towards Parity in Power / DAVOS 2025 — The tone was primarily serious and analytical, with panelists providing data, personal experiences, and policy recommend…
S78
Welfare for All Ensuring Equitable AI in the Worlds Democracies — The conversation maintained an optimistic and collaborative tone throughout, with participants sharing practical solutio…
S79
WS #302 Upgrading Digital Governance at the Local Level — The discussion maintained a consistently professional and collaborative tone throughout. It began with formal introducti…
S80
Open Forum #42 Global Digital Cooperation: Ambition to Country-Level Action — The tone of the discussion was generally constructive and collaborative, with panelists highlighting ongoing efforts and…
S81
Open Forum #7 Deepen Cooperation on Governance, Bridge the Digital Divide — The overall tone was collaborative, optimistic and forward-looking. Speakers shared positive examples and experiences fr…
S82
Building the AI-Ready Future From Infrastructure to Skills — The tone was consistently optimistic and collaborative throughout, with speakers expressing excitement about AI’s potent…
S83
Keynote-HE Emmanuel Macron — Overall Tone:The tone is consistently optimistic, collaborative, and aspirational throughout. Macron maintains an enthus…
S84
AI Innovation in India — The tone was consistently celebratory, inspirational, and optimistic throughout the discussion. Speakers expressed pride…
S85
AI for Good – food and agriculture — The tone was consistently optimistic and forward-looking throughout, with speakers expressing enthusiasm about AI’s pote…
S86
Closing remarks — This comment is powerful because it creates a generational identity and responsibility. The repetition emphasizes urgenc…
S87
Lightning Talk #173 Artificial Intelligence in Agrotech and Foodtech — The tone is optimistic and educational throughout, with the speaker maintaining an encouraging perspective on technology…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Devendra Fadnavis
6 arguments · 106 words per minute · 1101 words · 619 seconds
Argument 1
Positioning AI as central to food security, farmer incomes and climate resilience
EXPLANATION
The Chief Minister argues that agriculture faces mounting pressures from climate change, resource scarcity and volatile markets, and that AI can be the key technology to secure food supplies, raise farmer earnings and enhance climate adaptability. He frames AI as essential for national stability and inclusive growth.
EVIDENCE
He highlighted the multiple stresses on food systems, noting climate volatility, falling water tables, deteriorating soil health, fragile supply chains and unpredictable global markets, which together threaten agriculture (lines 41-48). He then linked India’s AI mission to delivering inclusion, transparency and scale, emphasizing that agriculture must sit at its heart to support half a billion people dependent on the sector (lines 50-52).
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The scaling intelligence report describes AI as the key to food and nutrition security, higher farmer incomes and a stable economy, aligning with the Chief Minister’s positioning [S4].
MAJOR DISCUSSION POINT
AI as a cornerstone for food security and climate resilience
Argument 2
Mahavistar/Bharatvistar delivering multilingual, personalized advisories to over 2.5 million farmers, demonstrating readiness for AI at scale
EXPLANATION
Fadnavis describes the AI‑powered Mahavistar platform as a nationwide mobile service that provides farmers with localized advice, market data, pest alerts and access to government schemes in multiple languages. The high adoption rate shows that farmers are prepared to use AI when it is tailored to their needs.
EVIDENCE
He cited the Mahavistar mobile platform’s multilingual, personalized advisories, market intelligence and pest alerts, noting more than 2.5 million downloads and its role as a “digital friend” to farmers (lines 61-64). Earlier he also referenced Mahavistar’s use by over 2.5 million farmers for Marathi and tribal‑language advisories (lines 23-24).
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The same report confirms that the Mahavistar platform has reached more than 2.5 million farmers with multilingual, personalized advisories, market intelligence and pest alerts [S4].
MAJOR DISCUSSION POINT
Scale of AI‑driven advisory services
AGREED WITH
Dr. Devesh Chaturvedi, Vikas Chandra Rastogi
Argument 3
Emphasis on trusted data, ethical AI governance, transparency, auditability and explainability as prerequisites for scaling
EXPLANATION
He stresses that AI must be built on reliable data and governed by ethical standards, with mechanisms for transparency, auditability and explainability, otherwise large‑scale deployment will fail. Trust is presented as the foundation for scaling AI in agriculture.
EVIDENCE
He quoted the Prime Minister’s statement that AI must rest on trusted data, ethical governance and public accountability, warning that without trust scale will not happen (lines 54-56). Later he listed the four pillars for AI‑2026, including transparency, auditability and explainability (lines 79-80).
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The Prime Minister’s inaugural remarks stress that AI must rest on trusted data, ethical governance and public accountability, and a separate analysis highlights trust infrastructure as critical for AI scaling [S1][S18].
MAJOR DISCUSSION POINT
Governance and trust for AI scaling
AGREED WITH
Shankar Maruwada, Dr. Devesh Chaturvedi
Argument 4
Declaring gender equity as a core mantra; promoting women‑centric AI use cases and supporting women‑led FPOs/SHGs
EXPLANATION
Fadnavis declares that gender equity will be a guiding principle for AI in agriculture, urging the design of solutions with women farmers and encouraging the development of women‑led farmer producer organisations and self‑help groups. He links this to the international year of women in agriculture.
EVIDENCE
He noted that 2026 is the International Year of Women in Agriculture and that AI solutions must be designed with women farmers, not merely for them (lines 84-86).
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The AI meets agriculture brief notes that 2026 is the International Year of Women in Agriculture and calls for AI solutions to be designed with women farmers, not merely for them [S1][S17].
MAJOR DISCUSSION POINT
Gender equity in AI agriculture
AGREED WITH
Dr. Soumya Swaminathan, Shankar Maruwada
Argument 5
Collaboration with World Bank, venture capital, impact investors and philanthropic foundations to fund and scale agri‑tech startups
EXPLANATION
He calls on a broad set of investors—including multilateral development banks, venture capital funds, impact investors and philanthropic foundations—to partner with Maharashtra’s agri‑innovation ecosystem, providing capital and expertise needed to move AI solutions from pilots to market‑ready platforms.
EVIDENCE
He described a global call for AI use cases in agriculture, the release of a compendium of real‑world AI applications, and invited venture capital funds, impact investors, multilateral development banks and philanthropic foundations to collaborate (lines 74-78, 87-89).
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
World Bank’s AgriConnect initiative and its invitation to partners is referenced, and the scaling intelligence report mentions a global call for venture capital, impact investors and philanthropic foundations to engage with Maharashtra’s ecosystem [S21][S4].
MAJOR DISCUSSION POINT
Financing the agri‑tech ecosystem
AGREED WITH
Johannes Zutt
Argument 6
Publication of a global compendium of AI agriculture applications and invitation to international investors to engage with Maharashtra’s ecosystem
EXPLANATION
Fadnavis mentions that a comprehensive compendium documenting successful AI deployments worldwide has been released, showcasing Maharashtra’s leadership and inviting global investors to participate in its agri‑AI initiatives.
EVIDENCE
He noted that the compendium of real‑world AI applications in agriculture was released in Delhi on 17 February 2026, covering successes from Africa, Asia and Latin America (lines 75-78). He also reiterated the invitation to venture capital, impact investors and development partners to engage with Maharashtra (lines 87-89).
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The scaling intelligence report states that a compendium of real-world AI applications in agriculture was released in Delhi on 17 February 2026, showcasing successes worldwide [S4].
MAJOR DISCUSSION POINT
Showcasing global AI use cases and attracting investment
AGREED WITH
Johannes Zutt, Vikas Chandra Rastogi
Vikas Chandra Rastogi
2 arguments · 110 words per minute · 1813 words · 985 seconds
Argument 1
Launch of Maha Agri AI Policy 2025‑2029 to embed AI across advisory, market, traceability and research services
EXPLANATION
Rastogi announces Maharashtra’s five‑year AI policy, which integrates artificial intelligence into a wide range of agricultural services, from advisory and market information to product traceability and research capacity building. The policy moves AI initiatives from pilots to full‑scale projects.
EVIDENCE
He stated that under the Chief Minister’s leadership, the state launched the Maha Agri AI Policy 2025‑2029, which uses AI for farm advisory services, market information, data exchange, product traceability, innovation and research, and capacity building (lines 20-22). He further described the policy’s scope covering advisory, market, traceability and research services (lines 21-23).
MAJOR DISCUSSION POINT
Policy framework for AI in agriculture
AGREED WITH
Devendra Fadnavis, Dr. Devesh Chaturvedi
Argument 2
EkStep Foundation’s contribution to building open‑source digital public infrastructure that underpins AI services for farmers
EXPLANATION
Rastogi highlights the role of the EkStep Foundation in creating open‑source platforms such as Sunbird, which power large‑scale systems like Mahavistar and support the broader digital public infrastructure needed for AI‑driven agriculture.
EVIDENCE
He noted that EkStep has played a foundational role in shaping India’s DPI landscape through open‑source platforms such as Sunbird, which have powered large‑scale systems like Mahavistar (lines 266-267).
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The Open AgriNet discussion highlights the foundation’s role in creating open-source platforms such as Sunbird that power large-scale systems like Mahavistar, underscoring its contribution to digital public infrastructure [S22].
MAJOR DISCUSSION POINT
Open‑source infrastructure for AI agriculture
Dr. Devesh Chaturvedi
3 arguments · 163 words per minute · 1127 words · 414 seconds
Argument 1
Proposal for a central‑state collaboration model that aligns AI deployments with national architecture while allowing state‑level innovation
EXPLANATION
Chaturvedi outlines a framework where the central government provides a common AI architecture and data standards, while states retain flexibility to tailor solutions to local agro‑climatic and socio‑economic conditions. This model seeks to ensure coherence across India while fostering state‑specific innovation.
EVIDENCE
He described the collaborative work on farmer IDs and crop surveys, emphasizing that once these foundational data elements are in place, AI can deliver highly tailored advisories based on consented farmer information, enabling state‑specific customization within a national architecture (lines 140-152).
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The AI meets agriculture brief outlines a central-state collaboration framework that leverages the 9 crore farmer IDs while permitting states to innovate locally [S5].
MAJOR DISCUSSION POINT
Central‑state AI collaboration
AGREED WITH
Devendra Fadnavis, Vikas Chandra Rastogi
Argument 2
Creation of unique farmer IDs and a consent‑driven agri‑data exchange that enable hyper‑local, tailored AI recommendations
EXPLANATION
He explains that nearly nine crore farmer IDs have been generated, each linked to land, crop, and soil data, forming the backbone of a consent‑based data exchange. This infrastructure allows AI systems to provide hyper‑local, personalized recommendations to farmers.
EVIDENCE
He detailed that close to 9 crore farmer IDs have been developed, each containing crop, land and soil health information, and that this data, accessed with farmer consent, enables targeted AI advisories (lines 140-149). He also referenced the Maha AgEx, an open, federated, consent‑driven architecture for data exchange that brings diverse datasets together (lines 68-70).
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The same brief details the generation of close to 9 crore farmer IDs linked to land, crop and soil data, forming the backbone of a consent-driven data exchange for hyper-local AI advisories [S5].
MAJOR DISCUSSION POINT
Farmer IDs and data exchange for AI
Argument 3
Addressing “digital red‑tapism” through unified platforms, strong data governance and consent‑based data sharing
EXPLANATION
Chaturvedi points out that fragmented digital services have created a new form of red‑tapism, where farmers must navigate multiple apps and databases. He advocates for a single, unified AI platform with robust data governance and consent mechanisms to simplify access to services.
EVIDENCE
He described how multiple schemes and departments each maintain separate apps and databases, leading to confusion for farmers and creating “digital red‑tapism” (lines 131-135). He also highlighted the need for strong data governance and consent‑driven sharing to avoid exploitation (lines 68-70).
MAJOR DISCUSSION POINT
Simplifying digital services for farmers
AGREED WITH
Shankar Maruwada, Devendra Fadnavis
Dr. Soumya Swaminathan
1 argument · 173 words per minute · 1125 words · 388 seconds
Argument 1
Ensuring AI solutions are co‑designed with women farmers, incorporate women’s land‑ownership data, and reduce drudgery through targeted advisories
EXPLANATION
Swaminathan stresses that AI must be built with women’s participation from the outset, integrating their land‑ownership status and addressing the specific tasks they perform. She proposes measuring success by reductions in women’s workload and ensuring that AI recommendations are relevant to women’s farming contexts.
EVIDENCE
She noted that only a minority of women have land titles, which risks exclusion from data‑driven services, and argued for early incorporation of women’s data to avoid bias (lines 227-230). She also highlighted the need to reduce drudgery for women, especially in tribal and millet‑growing areas, and suggested using indicators such as workload reduction to assess impact (lines 231-236). She gave the example of a Fisher Women app that was created after the original Fisher Men app was found to exclude women (lines 240-244).
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The AI meets agriculture document emphasizes co-design with women farmers, inclusion of women’s land-ownership information, and the need to reduce women’s workload through AI-driven advisories [S1][S17].
MAJOR DISCUSSION POINT
Gender‑responsive AI design
AGREED WITH
Devendra Fadnavis, Shankar Maruwada
Shankar Maruwada
3 arguments · 133 words per minute · 1259 words · 567 seconds
Argument 1
Adoption of open, interoperable protocols (e.g., Beckn) and “shared rails” to allow any state or private app to plug into a common AI ecosystem
EXPLANATION
Maruwada explains that by using open standards such as the Beckn protocol, India can create a shared digital “railway” that lets states and private developers connect their data and applications to a common AI backbone, fostering rapid diffusion and avoiding siloed solutions.
EVIDENCE
He described the creation of interoperable networks based on open protocols like Beckn, enabling collaboration where different actors contribute data, technology or policy, and allowing any state or private app to plug into the network (lines 304-311). He also gave the example of a tomato‑watering app that could be made available across states through these shared rails (lines 310-311).
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The Open AgriNet discussion describes the use of open, interoperable protocols such as Beckn to create shared digital rails that enable diverse actors to connect to a common AI backbone [S22].
MAJOR DISCUSSION POINT
Open standards for AI ecosystem integration
AGREED WITH
Dr. Devesh Chaturvedi, Devendra Fadnavis
Argument 2
Applying DPI principles—open standards, network‑based architecture, and continuous model improvement—to AI systems for reliability and sustainability
EXPLANATION
He argues that the same design principles that made India’s Digital Public Infrastructure successful—open standards, network‑centric architecture, and iterative model refinement—should be applied to AI deployments to ensure they are reliable, scalable and sustainable over time.
EVIDENCE
He linked the success of DPI to AI by emphasizing open, interoperable networks, collaborative development, and a minimum viable product approach that evolves with better data and models (lines 304-311). He also highlighted the analogy with India’s railway system as a backbone that supports diverse applications (lines 306-308).
MAJOR DISCUSSION POINT
DPI‑inspired AI governance
Argument 3
Designing Mahavistar for illiterate and low‑resource users via voice interaction in native dialects, directly addressing barriers faced by many women farmers
EXPLANATION
Maruwada details how Mahavistar was deliberately built to work on basic feature phones, using voice interaction in the farmer’s native language or dialect, so that even illiterate users can obtain personalized advice. This design choice targets the most marginalized, including many women farmers.
EVIDENCE
He explained that the design specifications required an illiterate farmer to be able to interact via a feature phone in their native dialect, using voice‑based conversation to ask questions and receive answers, a process that took nine months to develop (lines 289-293).
MAJOR DISCUSSION POINT
Inclusive design for low‑literacy farmers
AGREED WITH
Devendra Fadnavis, Dr. Soumya Swaminathan
Johannes Zutt
3 arguments · 143 words per minute · 907 words · 377 seconds
Argument 1
Private sector’s creative capacity to develop niche AI applications; need for “crowding‑in” capital and sandbox environments for rapid experimentation
EXPLANATION
Zutt emphasizes that the private sector brings creativity and niche expertise to build AI tools for specific farmer challenges. He calls for mechanisms that attract private capital and provide sandbox environments where innovators can test, iterate and scale solutions responsibly.
EVIDENCE
He noted that the private sector’s creativity results in many applications aimed at farmer hurdles, and that encouraging private investment (“crowding‑in”) is essential (lines 194-200). He illustrated this with a tomato‑watering app from Morocco that uses a picture to estimate water needs (lines 202-204).
MAJOR DISCUSSION POINT
Leveraging private innovation for AI agriculture
AGREED WITH
Devendra Fadnavis
Argument 2
AI Impact Summit and AI for Agri 2026 conference as platforms to share use‑cases, foster partnerships and accelerate South‑South learning
EXPLANATION
Zutt points to the AI Impact Summit and the upcoming AI for Agri 2026 conference as key venues for showcasing AI applications, building partnerships, and facilitating knowledge exchange among developing countries, thereby strengthening South‑South collaboration.
EVIDENCE
He responded to a question about the role of such platforms by stating that India’s position enables it to lead AI development for the global south, and that events like the AI Impact Summit provide opportunities for sharing use‑cases and fostering partnerships (lines 211-214).
MAJOR DISCUSSION POINT
Global forums for AI agriculture collaboration
AGREED WITH
Devendra Fadnavis, Vikas Chandra Rastogi
Argument 3
India’s diverse agro‑ecological landscape as a testbed whose lessons can be exported to other developing nations, positioning the country as a hub for global AI‑agri collaboration
EXPLANATION
He argues that because India encompasses a vast array of climates, languages, crops and farming conditions, successful AI solutions developed here will generate spill‑over learnings applicable worldwide, establishing India as a central hub for South‑South AI‑agri knowledge transfer.
EVIDENCE
He described India as a microcosm of the world, noting its many languages, regions, crops and varied farm‑level conditions, and asserted that solving AI challenges in India will automatically generate lessons for other countries (lines 215-218).
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The AI meets agriculture brief characterises India as a microcosm of the world with varied languages, regions, crops and farm conditions, positioning it as a testbed for AI solutions whose lessons can benefit other countries [S5].
MAJOR DISCUSSION POINT
India as a global AI‑agri testbed
Agreements
Agreement Points
AI is positioned as central to food security, farmer incomes and climate resilience
Speakers: Devendra Fadnavis
Positioning AI as central to food security, farmer incomes and climate resilience
The Chief Minister emphasizes that agriculture faces climate volatility, water scarcity, soil degradation and fragile supply chains, and that AI is essential to secure food, raise farmer earnings and enhance climate adaptability [41-48].
POLICY CONTEXT (KNOWLEDGE BASE)
National AI strategies such as Maharashtra’s Maha Agri AI Policy 2025-2029 frame AI as a driver for food security, farmer income growth and climate resilience, echoing broader evidence-based policy calls for AI in agriculture [S34][S35][S36].
Mahavistar/Bharatvistar delivering multilingual, personalized advisories to over 2.5 million farmers, demonstrating readiness for AI at scale
Speakers: Devendra Fadnavis, Dr. Devesh Chaturvedi, Vikas Chandra Rastogi
Mahavistar/Bharatvistar delivering multilingual, personalized advisories to over 2.5 million farmers, demonstrating readiness for AI at scale
Proposal for a central‑state collaboration model that aligns AI deployments with national architecture while allowing state‑level innovation
Launch of Maha Agri AI Policy 2025‑2029 to embed AI across advisory, market, traceability and research services
All speakers point to the AI-powered Mahavistar (and its national counterpart Bharatvistar) as a platform that already reaches more than 2.5 million farmers with multilingual, hyper-local advisories, market data and scheme access, showing that large-scale AI deployment is feasible [23-24][61-64][128-130].
POLICY CONTEXT (KNOWLEDGE BASE)
The Mahavistar platform’s rollout to more than 2.5 million farmers, with multilingual advisories and market intelligence, has been highlighted as a flagship example of scaling AI in agriculture in recent policy briefs [S34][S35][S36].
Trusted data, ethical governance, transparency, auditability and explainability are prerequisites for scaling AI in agriculture
Speakers: Devendra Fadnavis, Dr. Devesh Chaturvedi, Shankar Maruwada
Emphasis on trusted data, ethical AI governance, transparency, auditability and explainability as prerequisites for scaling
Addressing “digital red‑tapism” through unified platforms, strong data governance and consent‑based data sharing
Adoption of open, interoperable protocols (e.g., Beacon) and “shared rails” to allow any state or private app to plug into a common AI ecosystem
Fadnavis stresses that AI must rest on trusted data and ethical governance; Chaturvedi warns against fragmented digital services and calls for unified, consent-driven platforms; Maruwada proposes open standards and shared rails to ensure interoperability and trust [54-56][79-80][68-70][304-311].
POLICY CONTEXT (KNOWLEDGE BASE)
Data-governance frameworks emphasizing transparency, auditability and explainability are advocated in multistakeholder policy discussions on public-sector data and AI, such as the Day 0 data governance event and OECD sandbox guidance [S41][S42][S44].
Gender equity and inclusion of women farmers are core to AI‑driven agricultural transformation
Speakers: Devendra Fadnavis, Dr. Soumya Swaminathan, Shankar Maruwada
Declaring gender equity as a core mantra; promoting women‑centric AI use cases and supporting women‑led FPOs/SHGs
Ensuring AI solutions are co‑designed with women farmers, incorporate women’s land‑ownership data, and reduce drudgery through targeted advisories
Designing Mahavistar for illiterate and low‑resource users via voice interaction in native dialects, directly addressing barriers faced by many women farmers
All three speakers underline that AI must be designed with women’s participation, incorporate their land-ownership status, reduce drudgery, and be accessible through voice-based, low-literacy interfaces, making gender equity a guiding principle [84-86][227-236][289-293].
POLICY CONTEXT (KNOWLEDGE BASE)
Gender-inclusive AI design is underscored in IGF 2023 discussions and digital policy analyses linking AI deployment to SDG 5 objectives on women’s empowerment [S38][S39].
Collaboration with private sector, investors and development partners is essential to fund and scale agri‑tech AI solutions
Speakers: Devendra Fadnavis, Johannes Zutt
Collaboration with World Bank, venture capital, impact investors and philanthropic foundations to fund and scale agri‑tech startups
Private sector’s creative capacity to develop niche AI applications; need for “crowding‑in” capital and sandbox environments for rapid experimentation
Fadnavis invites venture capital, multilateral banks and foundations to partner with Maharashtra; Zutt highlights the private sector’s creativity and the need for capital and sandbox environments to test and scale niche AI tools [87-89][194-200].
POLICY CONTEXT (KNOWLEDGE BASE)
Public-private partnerships involving World Bank, Google and research institutes have been cited as key enablers for scaling AI-driven agri-services in Maharashtra’s AI agenda [S35][S44].
South‑South knowledge exchange and global collaboration through AI Impact Summit and AI for Agri 2026 conference
Speakers: Devendra Fadnavis, Johannes Zutt, Vikas Chandra Rastogi
Publication of a global compendium of AI agriculture applications and invitation to international investors to engage with Maharashtra’s ecosystem
AI Impact Summit and AI for Agri 2026 conference as platforms to share use‑cases, foster partnerships and accelerate South‑South learning
How do we see a platform such as AI Impact Summit as well as AI for Agri global conference contributing to deeper global collaboration and south‑south knowledge exchange in this domain?
Fadnavis mentions a compendium of worldwide AI-agri use cases and calls for global investors; Zutt stresses that the AI Impact Summit and AI for Agri 2026 will enable South-South learning; Rastogi asks how these platforms can deepen collaboration, indicating shared commitment to global knowledge exchange [74-78][211-218][207-210].
POLICY CONTEXT (KNOWLEDGE BASE)
Calls for South-South cooperation in AI for agriculture were articulated in recent forums highlighting mutual learning and shared historical experience among Global South nations [S37].
Adoption of open, interoperable protocols and DPI‑inspired principles for AI ecosystem sustainability
Speakers: Shankar Maruwada, Dr. Devesh Chaturvedi, Devendra Fadnavis
Adoption of open, interoperable protocols (e.g., Beacon) and “shared rails” to allow any state or private app to plug into a common AI ecosystem
Addressing “digital red‑tapism” through unified platforms, strong data governance and consent‑based data sharing
Emphasis on trusted data, ethical AI governance, transparency, auditability and explainability as prerequisites for scaling
Maruwada proposes open standards (Beacon) and shared rails; Chaturvedi calls for unified, consent-driven platforms to avoid digital red-tapism; Fadnavis stresses openness and interoperability in the policy-led ecosystem, together forming a consensus on open, DPI-inspired AI architecture [304-311][68-70][59-60].
POLICY CONTEXT (KNOWLEDGE BASE)
The Bhashini stack policy framework promotes open, interoperable data and DPI-style principles for sustainable AI ecosystems, illustrating a concrete model for open-source AI infrastructure [S51][S50].
Similar Viewpoints
Both speakers advocate a policy‑driven, openness‑based ecosystem where the centre provides a common architecture and data standards, while states retain flexibility to innovate locally, ensuring trust and scalability of AI solutions [54-56][79-80][59-60][41-48].
Speakers: Devendra Fadnavis, Dr. Devesh Chaturvedi
Emphasis on trusted data, ethical AI governance, transparency, auditability and explainability as prerequisites for scaling
Proposal for a central‑state collaboration model that aligns AI deployments with national architecture while allowing state‑level innovation
Both emphasize inclusive design that directly tackles the constraints faced by women farmers—through co‑design, land‑ownership data integration, workload reduction, and voice‑based interfaces for low‑literacy users [227-236][289-293].
Speakers: Dr. Soumya Swaminathan, Shankar Maruwada
Ensuring AI solutions are co‑designed with women farmers, incorporate women’s land‑ownership data, and reduce drudgery through targeted advisories
Designing Mahavistar for illiterate and low‑resource users via voice interaction in native dialects, directly addressing barriers faced by many women farmers
Unexpected Consensus
Recognition that AI is not a magic solution and must be grounded in credible, trustworthy information
Speakers: Devendra Fadnavis, Johannes Zutt
But let me emphasize, AI is not a magic.
Ensuring that the research and extension that is provided through these small AI platforms, is credible, is trustworthy, is backed by science.
While a government leader and a World Bank representative might be expected to focus on policy and financing respectively, both explicitly caution that AI alone cannot solve problems and must be underpinned by reliable, science-based advice, revealing a shared pragmatic stance [53][186-190].
POLICY CONTEXT (KNOWLEDGE BASE)
Policy briefs stress evidence-based AI governance, warning against hype and emphasizing scientific grounding for AI risk assessment [S45][S47].
Overall Assessment

The participants show strong convergence on five major themes: (1) AI as a strategic lever for food security and climate resilience; (2) proven large‑scale AI advisory platforms (Mahavistar/Bharatvistar); (3) the necessity of open, interoperable, trusted data infrastructures; (4) gender‑inclusive design and empowerment of women farmers; (5) the importance of private‑sector innovation, financing and South‑South collaboration.

High consensus – the speakers from government, academia, and international development align on policy direction, technical architecture, inclusion, and financing, indicating a coordinated roadmap that can accelerate responsible AI deployment in Indian agriculture and serve as a model for other developing nations.

Differences
Different Viewpoints
Pace of AI deployment versus readiness of foundational data infrastructure
Speakers: Devendra Fadnavis, Dr. Devesh Chaturvedi
AI must not remain confined to demonstrations or pilots. It must reach millions. We will move from pilots to platforms, from fragmented data to interoperable systems, from experimentation to execution, from intention to investment. (lines 57-58, 101-102)
We need to first build farmer IDs and a consent‑driven data exchange; only then can AI deliver hyper‑local, tailored advisories (lines 140-152)
Fadnavis pushes for rapid scaling of AI solutions to reach millions of farmers, emphasizing a shift from pilots to full-scale platforms. Devesh argues that scaling should follow the establishment of a robust, consent-based farmer ID system and data exchange, suggesting a more measured rollout after foundational data infrastructure is in place. The two positions differ on whether speed or data readiness should be prioritized. [57-58][101-102][140-152]
POLICY CONTEXT (KNOWLEDGE BASE)
Analyses note a gap between rapid AI rollout and lagging data infrastructure, citing challenges in computing power, connectivity and data readiness that slow adoption [S48][S49][S41].
Role of the private sector and need for sandbox environments versus public‑led governance and trust
Speakers: Johannes Zutt, Devendra Fadnavis
The private sector brings creativity; we need to crowd‑in capital and provide sandbox environments for rapid experimentation and scaling of niche AI applications. (lines 194-200)
AI must be built on trusted data, ethical governance and public accountability; without trust, scale will not happen. (lines 54-56) and the state invites investors within a framework of public oversight. (lines 87-89)
Zutt calls for agile, private-sector-driven innovation with sandbox mechanisms and crowd-in funding. Fadnavis stresses that AI must rest on trusted data and ethical public governance, warning that scaling without trust will fail, and frames private investment within a state-led, accountable structure. This reflects a tension between a market-driven rapid-innovation model and a governance-centric, trust-first approach. [194-200][54-56][87-89]
POLICY CONTEXT (KNOWLEDGE BASE)
OECD and Datasphere Initiative discussions highlight divergent views on sandbox use, private-sector involvement and public trust, recommending flexible, multi-stakeholder regulatory experimentation [S31][S32][S33].
Concrete steps for gender‑inclusive AI design, especially inclusion of women’s land‑ownership data
Speakers: Dr. Soumya Swaminathan, Devendra Fadnavis
Only a minority of women have land titles; AI systems that rely on publicly available data will exclude them. Early incorporation of women’s land‑ownership data is essential to avoid bias. (lines 227-230)
AI solutions must be designed with women farmers, not merely for them, aligning with the International Year of Women in Agriculture. (lines 84-86)
Swaminathan stresses the need to embed women’s land-ownership information into AI datasets to prevent exclusion, while Fadnavis emphasizes designing AI solutions with women farmers but does not specify data-inclusion mechanisms. The disagreement lies in the level of concrete data integration required for gender-responsive AI. [227-230][84-86]
POLICY CONTEXT (KNOWLEDGE BASE)
Gender-focused digital rights research calls for explicit mechanisms to integrate women’s land-ownership data into AI systems, aligning with SDG 5 policy recommendations [S38][S39].
Unexpected Differences
Agility of development partnerships versus structured, consent‑driven data frameworks
Speakers: Johannes Zutt, Dr. Devesh Chaturvedi
Zutt advocates for just‑in‑time, agile support models that enable rapid experimentation and scaling. (lines 168-169)
Devesh stresses a consent‑driven, centrally coordinated data exchange to avoid ‘digital red‑tapism’ and ensure trustworthy AI services. (lines 68-70, 131-135)
While both aim to accelerate AI adoption, Zutt’s call for flexible, fast-moving partnership models contrasts with Devesh’s emphasis on a structured, consent-based data architecture, revealing an unexpected tension between speed and procedural rigor. [168-169][68-70][131-135]
POLICY CONTEXT (KNOWLEDGE BASE)
Debates on data governance stress the tension between fast-moving development partnerships and the need for consent-based, structured data frameworks to ensure accountability [S41][S42][S44].
Overall Assessment

The discussion shows broad consensus on the strategic importance of AI for agriculture, climate resilience, and inclusive growth. However, key disagreements emerge around the optimal pace of rollout versus readiness of data infrastructure, the balance between private‑sector agility and public‑trust governance, and the depth of gender‑responsive data integration. These divergences reflect differing priorities—speed and investment versus foundational data governance and concrete inclusion measures—potentially affecting how quickly and equitably AI solutions can be deployed at scale.

Moderate disagreement: while all participants share common goals, the differing viewpoints on implementation strategies could lead to fragmented policies if not reconciled, underscoring the need for coordinated frameworks that balance rapid scaling, robust data governance, and gender‑inclusive design.

Partial Agreements
The speakers concur that open standards and interoperable networks are essential for building a nationwide AI backbone, though they differ on implementation timelines and governance details. [68-70][304-311]
Speakers: Devendra Fadnavis, Dr. Devesh Chaturvedi, Shankar Maruwada
All endorse open, interoperable standards and a shared data exchange architecture (e.g., Maha AgEx, Beacon protocol) to enable scalable AI ecosystems. (lines 68-70, 304-311)
All agree that AI can address climate‑related agricultural challenges and improve livelihoods, even though they propose different pathways to achieve this outcome. [41-48][9-12]
Speakers: Devendra Fadnavis, Vikas Chandra Rastogi, Dr. Devesh Chaturvedi
AI is positioned as a key tool for food security, farmer incomes and climate resilience. (lines 41-48, 9-12)
Takeaways
Key takeaways
AI is positioned as a core driver for food security, farmer incomes and climate resilience in India, with Maharashtra leading through the Maha Agri AI Policy 2025‑2029.
Large‑scale AI platforms such as Mahavistar and Bharatvistar are already delivering multilingual, personalized advisories to over 2.5 million farmers, proving readiness for nationwide scaling.
A unified digital public infrastructure—farmer IDs, consent‑driven agri‑data exchange, and open interoperable protocols (e.g., Beacon)—is essential to provide hyper‑local, trustworthy AI services.
Governance principles—trusted data, ethical AI, transparency, auditability, explainability, and strong data stewardship—are prerequisites for scaling AI in agriculture.
Inclusion of women farmers and smallholders must be built into AI design, data collection, and service delivery, with emphasis on reducing drudgery and ensuring gender‑equitable outcomes.
Private sector innovation, development partners (World Bank, Gates Foundation, etc.), and impact investors are critical for rapid experimentation, financing, and scaling of agri‑tech solutions.
Global forums such as the AI Impact Summit and AI for Agri 2026 conference are intended to catalyze South‑South knowledge exchange and showcase scalable use‑cases.
Resolutions and action items
Formal launch and implementation of the Maha Agri AI Policy 2025‑2029 across Maharashtra.
Scale Mahavistar/Bharatvistar to cover all major Indian languages, including tribal languages, within the next 3‑6 months.
Complete saturation of unique farmer IDs and crop surveys for ~9 crore farmers and integrate them with AI advisory engines.
Deploy the Maha AgEx consent‑driven agri‑data exchange as a statewide interoperable platform.
Release and promote the global compendium of AI‑in‑agriculture use cases (published 17 Feb 2026).
Organise the AI for Agri 2026 conference in Mumbai (22‑23 Feb) to deepen operational discussions and attract investors.
Establish joint initiatives between Maharashtra, MSSRF, and other research institutions to embed women’s land‑ownership data and co‑design AI tools for women farmers.
Create sandbox mechanisms and “just‑in‑time” technical assistance frameworks with development partners (World Bank) for rapid prototyping and scaling.
Develop a traceability Digital Public Infrastructure (DPI) blueprint for end‑to‑end value‑chain visibility.
Invite venture capital, impact investors, multilateral development banks, and philanthropic foundations to fund agri‑tech startups within the Maharashtra ecosystem.
Unresolved issues
How to systematically incorporate women’s land‑ownership and other gender‑disaggregated data into the national farmer‑ID system to avoid exclusion.
Specific standards and certification processes for AI model auditability, bias detection, and continuous monitoring across states.
Mechanisms to ensure affordable connectivity and device access for low‑resource and illiterate farmers, especially in remote tribal areas.
Clear delineation of responsibilities and funding flows between central and state governments for ongoing AI platform maintenance and upgrades.
Framework for evaluating AI interventions’ impact on drudgery reduction, productivity gains, and unintended socio‑economic risks.
Long‑term sustainability model for the open‑source AI infrastructure (e.g., funding, governance, community stewardship).
Suggested compromises
Adopt a central‑state collaboration model that sets national AI architecture and standards while granting states flexibility to tailor solutions to local agro‑climatic and socio‑economic contexts.
Implement “shared rails” – an open, interoperable network where both public services and private applications can plug in, balancing innovation with common governance.
Use AI as a supplement to, not a replacement for, traditional extension services, preserving employment while enhancing reach.
Combine rapid sandbox experimentation with a phased rollout approach: start with minimum viable AI services, then iteratively improve models and data quality.
Balance open data principles with consent‑driven privacy safeguards to protect farmer data while enabling ecosystem innovation.
Thought Provoking Comments
AI is not a magic. As Honorable PM said, AI must be built on trusted data, ethical governance, and public accountability. Without trust, scale will not happen.
Highlights that technology alone cannot solve problems; emphasizes the foundational role of trust, ethics, and governance for large‑scale AI adoption in agriculture.
Set the tone for the discussion on responsible AI, prompting subsequent speakers to address data governance, transparency, and accountability throughout the panel.
Speaker: Devendra Fadnavis
The farmer ID is the new UPI – a unique identifier that links land, crops, soil health and scheme eligibility, enabling a single‑platform AI advisory system.
Introduces a concrete, nation‑wide digital infrastructure concept that can unify fragmented services and serve as the backbone for AI‑driven advisories.
Shifted the conversation from abstract AI benefits to concrete implementation mechanics, leading others to discuss interoperability, consent‑driven data sharing, and scaling of services.
Speaker: Dr. Devesh Chaturvedi
We need to let a thousand flowers bloom – encourage private‑sector creativity while the government ensures governance, interoperability, and credibility of AI services.
Frames the ecosystem as a partnership where regulation and innovation coexist, and stresses the need for ‘truth‑testing’ AI outputs before they reach farmers.
Prompted panelists to explore the balance between regulation and market‑driven innovation, and reinforced the importance of validation mechanisms for AI tools.
Speaker: Johannes Zutt
Every technology is neither pro‑poor nor pro‑rich; it is how we use it. We must embed women’s data early, ensure AI reduces drudgery for women farmers, and keep humans in the loop, just like clinical trials evaluate medical products.
Connects AI deployment with gender equity, data inclusion, and rigorous evaluation, drawing a parallel with scientific trial methodology.
Deepened the gender‑inclusion thread, leading to concrete examples (Women Connect app) and reinforcing the call for iterative, evidence‑based rollout of AI solutions.
Speaker: Dr. Soumya Swaminathan
The lesson from Haber‑Bosch and the railways: build open, interoperable ‘digital rails’ that let any state or private player plug in AI services, starting with a minimum viable product that improves over time.
Provides a historical analogy and a clear architectural vision for AI infrastructure, emphasizing openness, modularity, and incremental improvement.
Reoriented the discussion toward system design principles, inspiring agreement on open standards and the concept of diffusion pathways for AI across regions.
Speaker: Shankar Maruwada
AI Impact Summit and AI for Agri conference can act as catalysts for South‑South knowledge exchange, turning India’s diverse challenges into global learning for other developing nations.
Positions the upcoming events as strategic platforms for international collaboration and scaling of best practices.
Shifted the focus from national implementation to global partnership, encouraging participants to think about knowledge sharing beyond India.
Speaker: Johannes Zutt
We envision 100 diffusion pathways by 2030, each created by different sectors and countries, to ensure AI delivers safe impact at scale.
Sets an ambitious, measurable global vision that extends the earlier discussion of open networks to a concrete target for worldwide AI diffusion.
Provided a forward‑looking, aspirational goal that concluded the panel on a hopeful note, reinforcing the collaborative spirit and motivating future action.
Speaker: Shankar Maruwada
Overall Assessment

The discussion was driven forward by a series of pivotal remarks that moved the conversation from high‑level optimism to concrete, responsible implementation. Early emphasis on trust and governance by the Chief Minister anchored the dialogue in ethics. Dr. Chaturvedi’s farmer‑ID concept supplied a tangible infrastructure backbone, while Johannes Zutt’s call for a balanced ecosystem and validation mechanisms broadened the perspective to include private innovation. Dr. Swaminathan’s focus on gender inclusion and evidence‑based rollout added depth to the equity dimension. Shankar Maruwada’s historical analogies and architectural blueprint for open, interoperable ‘digital rails’ reframed the technical approach, culminating in a bold global diffusion target. Collectively, these comments redirected the panel toward actionable, inclusive, and scalable AI strategies, shaping the session into a roadmap rather than a mere showcase of existing projects.

Follow-up Questions
How should the central‑state collaboration framework for AI in agriculture be designed and institutionalized to align with national architecture while allowing state‑level innovation?
A clear framework is needed to ensure interoperability, data trust, and scalability across India’s diverse agro‑climatic zones, enabling coordinated AI deployment at population scale.
Speaker: Vikas Chandra Rastogi (to Dr. Devesh Chaturvedi)
How can development partnerships be structured to provide just‑in‑time technical assistance that enables central and state governments to experiment, iterate, and responsibly scale AI solutions in agriculture?
Agile support mechanisms are crucial for rapid innovation cycles, ensuring that AI tools are tested, refined, and deployed responsibly without bureaucratic delays.
Speaker: Vikas Chandra Rastogi (to Johannes Zutt)
What role can platforms like the AI Impact Summit and the AI for Agri global conference play in deepening global collaboration and South‑South knowledge exchange in AI‑driven agriculture?
International forums can catalyze sharing of best practices, data, and technologies, accelerating learning across developing countries facing similar challenges.
Speaker: Vikas Chandra Rastogi (to Johannes Zutt)
How can AI‑led agricultural transformation be designed to strengthen women farmers’ agency, knowledge access, and climate resilience, and what institutional safeguards and design principles are required to ensure equity and scientific integrity?
Women constitute a large and growing share of the farming workforce; without targeted safeguards, AI could exacerbate existing gender gaps.
Speaker: Vikas Chandra Rastogi (to Dr. Soumya Swaminathan)
How should AI‑based ecosystems be standardized, integrating Digital Public Infrastructure (DPI) principles, and what architecture and governance frameworks are needed to ensure interoperability, trust, and sustainability across sectors like agriculture?
Standardization and open‑source protocols are essential for scaling AI solutions nationally and for enabling private‑sector innovations to plug into public systems safely.
Speaker: Vikas Chandra Rastogi (to Shankar Maruwada)
What research is needed to build high‑quality, robust, and diverse data sets that can support accurate AI models for pest‑disease identification, weather forecasting, and market advisories?
AI effectiveness hinges on reliable data; gaps in data coverage, especially in remote or underserved regions, limit model performance.
Speaker: Johannes Zutt
How can women farmers’ data (e.g., land ownership, crop practices) be systematically captured and integrated into AI systems to avoid exclusion and bias?
Without women’s data, AI advisories may be irrelevant or inaccessible to the majority of women farmers, perpetuating inequities.
Speaker: Dr. Soumya Swaminathan
What evaluation frameworks (akin to clinical trials) should be developed to assess AI solutions for bias, unintended risks, and effectiveness before wide deployment?
Rigorous testing ensures that AI tools are safe, equitable, and deliver the intended productivity gains without adverse side effects.
Speaker: Dr. Soumya Swaminathan
How can a ‘human‑in‑the‑loop’ approach be operationalized in AI‑driven agricultural services to maintain employment, oversight, and contextual judgment?
Combining AI with human expertise preserves livelihoods and provides a safety net against algorithmic errors.
Speaker: Dr. Soumya Swaminathan
What strategies are required to improve digital literacy and accessibility for low‑literacy, feature‑phone users so they can effectively use AI‑based advisory services?
A large share of farmers lack smartphone proficiency; voice‑based, multilingual interfaces are needed to ensure inclusive adoption.
Speaker: Johannes Zutt
Which open standards and interoperable protocols (e.g., Beacon, open APIs) should be adopted to create a networked AI ecosystem that mirrors the success of India’s DPI and railway systems?
Open, interoperable networks enable diverse stakeholders to share data and services, fostering rapid diffusion of innovations while avoiding siloed solutions.
Speaker: Shankar Maruwada
What metrics and indicators should be used to measure AI’s impact on reducing drudgery and workload for women farmers, and how can these be monitored over time?
Quantifying workload reduction helps assess whether AI interventions are delivering gender‑responsive benefits.
Speaker: Dr. Soumya Swaminathan
How can continuous feedback loops between farmers, researchers, and AI developers be institutionalized to improve model accuracy and relevance through iterative data collection?
Iterative learning ensures AI models evolve with changing agro‑climatic conditions and farmer needs, enhancing long‑term effectiveness.
Speaker: Dr. Devesh Chaturvedi
What mechanisms should be established to ensure farmer and women‑farmer representation in AI governance committees and decision‑making bodies?
Inclusive governance prevents top‑down solutions that overlook on‑the‑ground realities and promotes co‑creation.
Speaker: Dr. Soumya Swaminathan
What financing models and public‑private partnership structures are needed to attract venture capital, impact investors, and multilateral funding for scaling AI‑agri startups?
Sustainable funding is essential to move from pilot projects to large‑scale, impact‑driven deployments across the sector.
Speaker: Devendra Fadnavis

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

AI Without the Cost: Rethinking Intelligence for a Constrained World


Session at a glance: Summary, keypoints, and speakers overview

Summary


The panel opened by highlighting the unsustainable surge in AI infrastructure costs, noting that most deployments rely on massive GPU clusters without sufficient attention to optimization, which forces the use of expensive, high-heat hardware that could instead be replaced by CPUs or edge devices if computational complexity were reduced [3-7][12-17][18-22]. Anshumali Shrivastava then illustrated that model parameter growth far outpaces GPU memory and compute advances, creating a “memory wall” that will make future models slower and inaccessible [95-99]; he advocated dynamic sparsity (selectively computing only the parameters needed) as a long-standing mathematical technique to cut compute [102-108], and warned that the current reliance on full-matrix multiplication and mixture-of-experts is a temporary band-aid [112-119]. He argued that the next AI race will be extending context windows, which have plateaued around one million tokens despite demand for larger windows to support complex reasoning [124-133][136-143]; to break this plateau he proposed rethinking attention mathematics, showing that new CPU-friendly algorithms can outperform GPUs for very long contexts because attention scales quadratically [152-158][155-162][166-169][170-172]. Kenny Gross presented concrete energy-saving results, citing three-order-of-magnitude compute reductions and a 2,500-fold cost cut in an anomaly-detection use case, while also describing the AI-MSET system that predicts hardware failures weeks in advance, thereby avoiding costly downtime [178-182][188-194][199-203]. He emphasized that MSET can operate on lightweight CPU telemetry and has been validated across data-center, locomotive, wind-farm, and defense assets [229-236][237-244][250-254].
The discussion then turned to governance and reliability: deterministic AI was described as a framework that enforces repeatable outputs to enable auditability, addressing the hallucination problem of probabilistic LLMs [390-401][429-435]; participants agreed that domain-specific training and multi-model debate can dramatically lower hallucination rates, though at the expense of higher compute [562-572][577-586][590-597]. Finally, the panel concluded that sustainable AI hinges on reviving decades-old mathematical optimizations, reducing reliance on GPU-heavy architectures, and exploring emerging technologies such as quantum computing to further cut energy use [75-78][456-470][635-637], urging the community to adopt these methods before the environmental and economic costs become prohibitive.


Keypoints


Major discussion points


Infrastructure cost, sustainability, and the need for software-level optimization – The panel opened by stressing that AI development is “running around getting as many GPUs as possible” and that this “high heat generating, high failure rate, limited supply” infrastructure is unsustainable. They argued that reducing algorithmic complexity would allow AI to run on CPUs, edge devices, or even mobile phones, cutting power and water use and protecting the planet [3-8][12-17][20-22][31-33][75-77][78-82][456-470].


Algorithmic advances to break the GPU scaling wall – Anshumali highlighted the mismatch between the exponential growth of model parameters and the slower growth of GPU memory/compute, noting the “quadratic complexity” of attention that limits latency [92-99][102-108][110-118][124-136][144-150][152-158][164-170][174-176]. He advocated dynamic sparsity, new attention mathematics, and dramatically larger context windows as the next race to achieve higher capability without exploding hardware costs.


AI-driven prognostics (MSET) for massive compute and downtime savings – Kenny described the AI MSET system that predicts failures weeks in advance, eliminating the need for high-threshold alerts and reducing compute costs by “three orders of magnitude” and, in a concrete use-case, by 2,500× [178-184][190-199][200-203][226-236][240-254][429-444][483-508].


Governance, deterministic AI, and regulatory compliance – Kevin and Ayush explained that probabilistic LLMs produce hallucinations, which is unacceptable for production, medical, or legal domains. They proposed “deterministic AI” with repeatable outputs, strict auditability, and adherence to GDPR/DPDPI and upcoming AI-act regulations [390-404][362-381][413-421][549-558][560-568][577-586][590-598].


Enterprise adoption roadmap and practical challenges – Abhideep laid out a step-by-step process: define AI goals, map opportunities, assess data quality, choose architecture (GPU vs CPU, on-prem vs cloud), run pilots, establish governance, and then scale to production [324-344][219-224][270-284][285-286][291-293][322-329].


Overall purpose / goal


The discussion was an educational panel aimed at raising awareness of the environmental, economic, and technical challenges of the current GPU-centric AI boom, showcasing alternative algorithmic and hardware strategies (dynamic sparsity, new attention, CPU-based inference, MSET prognostics) and providing a practical framework for enterprises to adopt AI responsibly, including governance and regulatory compliance. The presenters also promoted the capabilities of the STEM Practice Company and its partners as solutions to these challenges.


Overall tone and its evolution


Opening (0-10 min): Energetic and urgent, emphasizing the “rapid adoption” of AI and the danger of “running around getting as many GPUs as possible” [3-8][10-12].


Technical deep-dives (10-30 min): More analytical and forward-looking, with detailed explanations of sparsity, context windows, and new attention math, while maintaining optimism about breakthroughs [92-176].


Solution showcase (30-45 min): Confident and demonstrative, highlighting concrete cost reductions (2,500×) and the reliability of MSET, mixed with informal humor and personal anecdotes [178-254][456-470].


Governance & policy segment (45-70 min): Shifts to a cautious, responsible tone, acknowledging risks of hallucinations, regulatory pressure, and the need for deterministic AI [390-404][362-381][549-558].


Closing (70-90 min): Returns to a collaborative, hopeful tone, encouraging audience participation, stressing the importance of sustainable AI, and ending with a call to action for further engagement [456-470][633-640].


Overall, the conversation moved from alarm and urgency to technical optimism, then to practical solution confidence, and finally to responsible, governance-focused collaboration.


Speakers

Bernie Alen – Areas of expertise: AI infrastructure cost optimization, software-level performance engineering; Role/Title: Founder & CEO of STEM Practice Company, former Advanced Technologies Market Development lead at Oracle; Affiliation: STEM Practice Company (Oracle partner) [S4][S5][S12]


Ayush Gupta – Areas of expertise: Agentic data-analysis platforms, enterprise AI integration, cost-effective inference; Role/Title: Representative of Genloop (AI solutions provider) [S6][S8]


Anshumali Shrivastava – Areas of expertise: Dynamic sparsity, attention-mechanism redesign, long-context windows, efficient AI computation; Role/Title: Professor, Rice University; member of the superintelligence team at Meta [S7][S12]


Abhideep Rastogi – Areas of expertise: AI workflow automation, enterprise AI transformation processes; Role/Title: Senior AI lead for Tata Group (U.S.-based operations) [S8][S9]


Kenny Gross – Areas of expertise: AI prognostics (MSET), energy-efficient AI, sensor-data analytics, low-compute AI solutions; Role/Title: Senior Distinguished Scientist, Oracle; noted “master machine-learning technologist” [S10][S11][S12]


Kevin Zane – Areas of expertise: Sustainability of AI, environmental impact of GPU-based infrastructure; Role/Title: (not specified in transcript) [S13]


Participant – Areas of expertise: (varied audience questions on AI governance, policy, education, etc.); Role/Title: Audience member / participant [S1][S2][S3]


Additional speakers:


(none identified beyond the listed speakers)


Full session report: Comprehensive analysis and detailed insights

Opening & problem statement – Bernie Alen opened by warning that the AI boom is being driven by a “race to acquire as many GPUs as possible” mentality, which creates expensive, high-heat, high-failure-rate clusters with limited supply and unsustainable environmental impact. He emphasized that the industry is overlooking software-level optimisation steps that are standard in large-scale software projects, and that restoring these optimisations could allow AI to run on CPUs, edge devices, or even mobile phones [3-8][10-12][17-22][12-16].


Company & panel introduction – Alen then introduced the STEM Practice Company, an independent Oracle partner that inherits decades of Oracle optimisation IP, and introduced the panel members [23-34].


Technical deep-dive (Anshumali Shrivastava)


AI memory wall – He showed that LLM parameter counts grow exponentially, far outpacing GPU memory and compute capacity, and because attention scales quadratically, latency will not improve on existing hardware without a breakthrough [95-99][156-159][152-155].


Dynamic sparsity – He advocated dynamic sparsity, which selects the parameters needed for each input on the fly, preserving scaling laws while cutting compute [102-108].
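
The selection-then-compute pattern can be sketched in a few lines. This is an illustrative toy, not the production technique: real dynamic-sparsity systems estimate the selection step cheaply (e.g. with hashing), whereas the sketch below uses exact scores purely to show which work gets skipped.

```python
import numpy as np

def dense_layer(W, x):
    # Full computation: every neuron fires (what GPUs were built for).
    return np.maximum(W @ x, 0.0)

def sparse_layer(W, x, k):
    # Dynamic sparsity: per input, pick the k neurons most likely to
    # activate and compute only those. Here the selection uses exact
    # scores for clarity; real systems approximate this step cheaply.
    scores = W @ x                     # stand-in for a cheap estimator
    top = np.argsort(scores)[-k:]      # indices chosen for *this* input
    out = np.zeros(W.shape[0])
    out[top] = np.maximum(scores[top], 0.0)
    return out

rng = np.random.default_rng(0)
W = rng.normal(size=(1024, 256))       # all parameters are kept...
x = rng.normal(size=256)

full = dense_layer(W, x)
sparse = sparse_layer(W, x, k=64)      # ...but only 64 rows are computed
```

On the selected neurons the sparse output matches the dense one exactly; everything else is simply never computed, which is the source of the savings.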


Mixture-of-Experts – He critiqued current reliance on Mixture-of-Experts as a temporary band-aid [112-119].


Context-window race – Current context windows have plateaued around one million tokens; the community is already experimenting with 10 million-token windows and targeting a 100 million-token window as the next milestone [124-133][136-143].
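
A back-of-envelope calculation shows why context growth strains hardware even before attention compute is counted: the key/value cache must hold state for every token in the window. The model dimensions below are illustrative assumptions, not those of any particular model.

```python
# Back-of-envelope KV-cache size for long context windows.
# All dimensions below are assumptions chosen for illustration.
layers, kv_heads, head_dim, bytes_per_val = 32, 8, 128, 2  # fp16

def kv_cache_gb(tokens: int) -> float:
    # Keys and values (factor 2) for every layer, head, and token.
    return 2 * layers * kv_heads * head_dim * tokens * bytes_per_val / 1e9

print(kv_cache_gb(1_000_000))    # ~131 GB just to hold a 1M-token cache
print(kv_cache_gb(100_000_000))  # ~13,100 GB for a 100M-token window
```

Cache size grows linearly with the window, so a hundredfold window increase demands a hundredfold more resident memory, before any attention arithmetic is done.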


New attention formulation – He presented a re-thought attention mathematics that reduces quadratic complexity, showing a crossover point near 131k tokens where CPUs become more efficient than GPUs; this, combined with dynamic sparsity, can deliver the same capability at a fraction of the cost [152-161][166-172].


Energy-efficient AI (Kenny Gross) – Gross reported compute reductions of three orders of magnitude, citing four NVIDIA GTC presentations [178-182], and highlighted an anomaly-detection use case where the AI-MSET method cut compute cost by a factor of 2,500 [199-203]. MSET predicts hardware failures weeks in advance using lightweight CPU telemetry, avoids high-threshold reactive alerts, and achieves the mathematically lowest possible false-alarm and missed-alarm probabilities [250-254][188-194][226-236][237-244][483-508][429-444].
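
Oracle's MSET formulation is proprietary and far richer than anything shown here, but the core pattern (learn a similarity model of healthy telemetry, then watch residuals drift long before a coarse alarm threshold would trip) can be sketched as a toy. The nearest-neighbour estimator below is a stand-in assumption, not the actual MSET operator.

```python
import numpy as np

# Toy residual-based prognostics in the spirit of MSET. Assumption:
# real MSET uses a nonlinear similarity operator plus a sequential
# probability test on residuals; this sketch substitutes a simple
# nearest-neighbour estimate purely to illustrate the residual idea.

def estimate(memory: np.ndarray, obs: np.ndarray) -> np.ndarray:
    # Predict the expected signal from the most similar healthy vector.
    dists = np.linalg.norm(memory - obs, axis=1)
    return memory[np.argmin(dists)]

rng = np.random.default_rng(1)
memory = 1.0 + rng.normal(0.0, 0.1, size=(500, 3))   # healthy telemetry
healthy = 1.0 + rng.normal(0.0, 0.1, size=3)
degraded = healthy + np.array([1.0, 0.0, 0.0])       # incipient fault

r_ok = float(np.linalg.norm(healthy - estimate(memory, healthy)))
r_bad = float(np.linalg.norm(degraded - estimate(memory, degraded)))
# The degraded residual dwarfs the healthy one even though the raw
# reading (~2.0) might still sit below a coarse fixed alarm threshold.
```

Because the alert keys on disagreement with the learned healthy state rather than on an absolute threshold, degradation is visible early, which is the mechanism behind the weeks-ahead warnings described above.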


Deterministic AI & compliance (Kevin Zane) – He defined deterministic AI as an architecture that binds machine-learning components within explicit rule-sets so that identical inputs always produce identical outputs, enabling auditability and eliminating hallucinations in production-critical domains [390-401][402-404]. He linked this approach to GDPR compliance and noted that it will simplify adherence to forthcoming AI-Act regulations [413-421].
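
One way to picture the pattern Zane described is a deterministic (greedy-decoded) model call bound inside an explicit rule-set, with every input/output pair hashed for the audit trail. All names here (`fake_model`, the rule fields) are hypothetical, invented for illustration.

```python
import hashlib
import json

# Hypothetical sketch of "deterministic AI": a model component wrapped
# in explicit rules so that identical inputs always yield identical,
# auditable outputs. `fake_model` stands in for any deterministic
# (temperature-0, fixed-seed) inference call.

RULES = {"max_len": 64, "allowed_topics": {"billing", "shipping"}}

def fake_model(prompt: str) -> str:
    # Stand-in for greedy inference: a pure function of its input.
    return f"answer:{len(prompt)}"

def governed_call(prompt: str, topic: str) -> dict:
    if topic not in RULES["allowed_topics"]:
        raise ValueError("topic outside the approved rule-set")
    out = fake_model(prompt)[: RULES["max_len"]]
    # Audit trail: a hash of (input, output) proves repeatability later.
    digest = hashlib.sha256(json.dumps([prompt, out]).encode()).hexdigest()
    return {"output": out, "audit": digest}

a = governed_call("why was I charged twice?", "billing")
b = governed_call("why was I charged twice?", "billing")
# Identical inputs produce byte-identical outputs and audit hashes.
```

The audit hash makes the repeatability property checkable after the fact, which is what links determinism to the compliance requirements (GDPR, AI Act) discussed on the panel.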


Agentic data-analysis & cost model (Ayush Gupta) – Gupta described a shift from static data-warehouse pipelines to “agentic data-analysis” platforms that can query native sources (tables, PDFs, images) directly, eliminating the need for replica tables and dramatically lowering inference costs. He argued that a rupee-per-conversation model could deliver ROI comparable to a $125k data analyst [270-284][285-286][291-293][298-304].


AI adoption roadmap (Abhideep Rastogi) – Rastogi outlined a staged adoption framework: (1) define business aim (cost reduction, revenue, CX) [327-330]; (2) map AI opportunities; (3) assess data quality and lineage; (4) select architecture (GPU vs CPU, on-prem vs hyperscaler); (5) pilot to measure accuracy and ROI; (6) establish governance (GDPR, HIPAA, AI-Act); (7) scale to production while up-skilling employees and setting clear guardrails [324-344][350-357][360-368][370-376][378-384][386-393][395-401][346-349][456-462].


Governance & policy Q&A – In the Q&A, participants asked about the enforcement date of India’s DPDP-i act and the need for industry-government coordination. The panel stressed that clear policy enforcement and collaborative standards are essential for responsible AI deployment [560-564][566-572].


Education & student AI usage – The discussion turned to AI in education. Ayush quoted “to err is human, to err more is AI,” expressing concern about students relying on AI for assignments. He and Anshumali likened AI to calculators, recommending that foundational skills be mastered before AI is used as an augmentation tool [625-632][560-562][640-645].


AGI & quantum computing – A participant asked whether AGI requires quantum computers. Bernie answered that quantum machines could eventually provide a fraction of the energy consumption of comparable GPU-simulated workloads for certain AI tasks, but this is a long-term research direction. He announced the launch of quantum-enablement centres in two sites: Chattanooga, Tennessee, and later in India [635-637][635-639].


Sustainability recap – Alen repeatedly warned that the current GPU-centric AI boom demands additional power, water, and cooling, threatening the planet unless efficiency gains are achieved [75-77][456-470].


Closing remarks – The panel concluded with consensus that (i) AI must move beyond brute-force GPU scaling by re-introducing decades-old optimisation mathematics such as dynamic sparsity and new attention formulations; (ii) these software advances are crucial for environmental sustainability; (iii) robust governance-including deterministic AI, auditability, and compliance with GDPR and emerging DPDP-i/AI-Act requirements-is needed for trustworthy high-risk deployments; and (iv) enterprises should follow a clear, staged adoption roadmap and invest in workforce up-skilling. Action items: STEM Practice Company will offer consultancy, blind bake-offs, and open-source integrations; attendees were invited to the company booth (Hall 6, Stall 100); and future work will focus on standardising governance frameworks, extending plug-and-play support for MSET, and exploring quantum-enabled AI as a long-term sustainability pathway.


Session transcript: Complete transcript of the session
Bernie Alen

Can you hear me better? Is this better? Okay. So, infrastructure cost, it’s a very important topic because everybody who is trying to create something in AI, we all know that we are running into having to use extensive infrastructure, right? And mainly it is a GPU-based infrastructure architecture. And the last two, three years, I think we are not stopping to ask the questions that we would normally ask. Are we creating these applications in optimized infrastructure? We are just running around getting as many GPUs as possible because we’re all afraid that the other guy would get it and then we’ll be left out, right? So, I think it’s a very important topic. So, there has been an extremely rapid adoption of AI.

and everybody wants to have an AI answer for everything. And so we are not asking the questions that we would normally ask in any project of this scale, right? So we’re going to take a look at what are the optimization methods and good mathematics that’s existed for a long time that we should bring in to optimizing and reducing the computation needs that these models create, that these AI applications create. And if you reduce the complexity and if you reduce the computation, then you don’t need to run a lot of these things on expensive, high heat generating, high failure rate, limited supply. clustered GPUs. You can run this on CPUs. You can run this on clustered CPUs.

You can run it on edge computers. You can even run it on mobile phones and laptops. There is a software optimization step that everybody is skipping, that we would normally not skip in software development. For any large -scale software development, heavy amount of infrastructure optimization goes on. But we are not doing that in deploying these AI models. So we first want to make sure that there is enough understanding of the mechanisms and the methods that are available. And a lot of this is derived from mathematics that has existed forever. So we’re going to talk about that. I’ve got a great panel over here. By the way, just to introduce my company, we are the STEM practice company.

We are an Oracle Corporation partner company. and if you think about I think most people know about the Oracle Corporation, right? I don’t need to introduce the Oracle Corporation. If you think about Oracle, they have had to create solutions and create software and create products for very large customers. They serve the largest customers on the planet. So always they’ve had to worry about optimization, performance improvement and all that because without that infrastructure cost would just be so high. So over decades there has been a collection of intellectual property, collection of ideas, collection of methods to reduce complexity of algorithms, reduce computation and therefore create better infrastructure architectures, right? So STEM Practice Company is an independent company.

We run as an Oracle partner company, but the origins of the STEM Practice Company is within the Oracle Corporation. I led advanced technologies market development for Oracle. and then we separated as a separate company and we launched as a separate company two years ago. Now we operate as an Oracle partner company. Let me introduce the team here. This is a slide that my lawyer says I should show. So just because I paid the lawyer a lot of money to make this one slide up, I’m going to show this slide. Right? Nobody knows where we are going with AI, to be honest. Nobody knows where we are going with quantum. We’re all doing the best to predict what may come, but with any prediction, use your own logic.

That’s what my lawyer wants me to say. So I’ve said it. Okay, let’s go to the next one. So this is my panel, and you may not be able to see their names on the screen. So let me start with the gentleman from the Tata Group. We are a U.S.-based company. We launched two years ago, and we just started working with our India operations, India opportunities, and we had the great fortune to start with a Tata company. And I think they are quite happy with what we have shown because some people say, hey, if you’re not using GPUs, not using expensive infrastructure, is there a compromise? Am I introducing more latency? Am I creating less confident output?

None of that. In fact, all of that gets better. And we were able to demonstrate, with the opportunity that we got working with Tata, that we are getting 100% accuracy, and we have not used any GPUs at all in the infrastructure that we have proposed. Right? Okay. So that is Mr. Abhideep at the back, with Tata. Say hi, wave. Okay. And the gentleman next to him is part of the STEM Practice Company. He is from Oracle. I did steal him from Oracle, because it was essential to at least steal some people before Oracle gets pissed off at me. So Kenny Gross is a Senior Distinguished Scientist at Oracle.

He has a patent for every day of the year. So his patent count is approaching 365. So that’s Kenny Gross, a master machine learning technologist. And next to him is a professor from Rice University, Anshu, also serving on the superintelligence team at Meta. And Anshu is very passionate about, as a professor, right? Professors are usually passionate. My father is a professor, right? So that’s why he lives here and I live in the U.S. I live that far away from him, okay? Just because that’s the way I can deal with this passion. But very passionate about, hey, all of these methods, these methods to do things better, have existed. So let’s make sure that we are bringing those methods, creating awareness for those methods. And he’s going to talk about how he sees what the challenges are going to be, and how we already have methods to go address the challenges that are coming up. By the way, this panel is going to be very interesting, so all of you can start texting two or three of your friends to start showing up here, so we can spread the word more. And next to him is somebody you all may already know, from one of the top successful companies that is working with the foundations of AI that is shaping up in India: it’s Ayush from Genloop, and he will talk about it very much in an Indian context, because he has a front row seat to everything that is going on. And then the last person on the panel is the one I’m most proud of, because he’s my nephew, and he is in IIT.

I went to BITS Pilani, so that part I’m not proud of, that he goes to IIT and doesn’t go to BITS, right? By choice, right? So, and he is in IIT Madras, and he is working closely with us and learning deeply how to build some of these very complex AI methods up front, right? Okay, so we are a small enough team here, so always feel free to interrupt, raise your hand, come up, ask questions. The goal is to educate, because I think we are going very fast, and we are spending a lot, and we are creating problems, like we need more power generation, and we need more power generation rapidly, and because of that we are causing harm to the planet.

So it is good. You know, all this mathematics has existed for a long time, but it’s never been productized because there’s never been a market. But now there’s a phenomenal market for this, right? Because mathematicians are poor people, right? We have a paper and a pencil most of the time, right? But now we can productize it and we can bring these solutions to the market before we end up burning the planet. Right? Okay. So let’s go to… So these are what people are going to talk about as Professor Anshumali is going to talk about the problems that are coming up and how we have the solutions, demonstrated solutions, benchmark solutions to address the problems that are coming up.

Dr. Kenny Gross is going to talk about doing a large amount of real-time AI and… stream-based AI without using any neural networks even. Not just without using GPUs, but without using neural networks and getting a very high level of accuracy, very low rate of false warnings at a tiny fraction of the cost that everybody else is spending. Okay? And then we’ll have questions for the panel and then we’ll have questions for the audience. Right? But we have no problems in making this collaborative. So sometimes a question can’t wait till the end. So just raise your hand, ask a question, and we’ll talk about it. Okay. Now I’m going to turn it over to Professor Anshu to come here and talk about how we are already ready to address

Anshumali Shrivastava

the challenges that are coming up. Right? Thank you very much. Can you guys hear this? Okay. So I’m pretty sure you must all have heard that we need AI without the cost; the cost is too much, right? There are never enough GPUs. How many of you have heard about solutions, and how many of you have thought, yeah, this is an idea that will definitely work, or at least there is some merit to these ideas? I think we are going to go into that. So I think the first part, that we need AI without the cost, is kind of obvious; I’m not going to rant about it, though I will talk about something that motivates why the problem that you are going to see is just going to get worse. So I don’t know if you can see the plots here, but on the x-axis is the year, and on the y-axis is the parameter count of the LLMs. Now you see two interpolated straight lines. The green one is the amount of memory available in the GPUs, right? H100s, A100s, and so on and so forth. And the red one is the memory, or the model parameter count, of the demand, right? That’s models like GPT-3, GShard, Switch Transformer, Megatron, etc. What do we see here? That the rate of growth of hardware, and by the way it’s on a logarithmic scale, so it’s exponential, the rate of growth of hardware is nowhere close to the rate of growth of demand. The other plot that you cannot see is kind of similar, but it’s in compute, so it’s in teraflops or petaflops: what GPUs can offer and what we need to reach a certain latency. This was a famous paper from Berkeley called “AI and Memory Wall,” and what you should expect is: if you are hoping that your latency will become better with a larger LLM, that’s not happening unless there is some breakthrough. Okay, so models will get bigger, but they will not be able to keep up with even the GPU growth, which means models will feel slower; the better models will feel slower and inaccessible, unachievable, right?

I mean, there are many models I’m pretty sure you cannot even run in whatever infrastructure you have; this is going to get worse. That’s what this plot kind of says. So clearly, there is a need for what we are talking about here, right? So again, a little bit on the past work: one idea that is very popular. And basically, I’ve been working on it since 2016, and it’s kind of now catching on as mainstream. One idea was: why do we do full computations? Let’s do sparse computation, and it’s not static sparsity, it’s dynamic sparsity. What is dynamic sparsity? Well, I need all the parameters. So I’m not throwing anything away, I’m not going against scaling laws.

But I will pick which ones I need based on my input, dynamically. And that is called dynamic sparsity, right? So I’ve shown you two cartoons here. The traditional model is you do all the computation, which is what GPUs were built for, right? And the argument now is, well, you don’t do all the computation. You only do what is needed. But then GPUs are not quite built for that. But there is a sweet spot in between. You can do block sparsity or something and get things to work, which is what mixture of experts is, right? So mixture of experts is now the de facto way of training large language models. So one idea is obviously there.

But remember, the fundamental kernels of GPUs were always built for full matrix multiplication, and mixture of experts was kind of a bandage that seemed to work. But obviously, we need a lot more, as we have seen. So let’s take a pause here. We all have seen the evolution of models, right? Getting a foundation model, large parameter models with large capability. Where is this all going? What is the next race? And I want to argue here that the next phase is the context window. Why? What is a context window? Is everybody familiar with what a context window of an LLM is? It’s kind of a working memory, right? So let’s say I want to solve a simple problem like 2 plus 2 equals 4; that only requires a very simple context. But let’s say I want to solve an Olympiad problem, so you are asking me to prove a theorem, and I generate 40 intermediate theorems. I need to have all the theorems in my context to go to the next theorem; otherwise, if I miss any of the theorems, if it goes out of context, I cannot prove things. So more context window means I can process more information, correlate across it, and make decisions. So complex workflows will start to happen when the context window grows, and that is what we have seen with GPTs, right?

GPT-3 came out with a small context window, and the context window has been growing. Now we know Claude Code kind of works because it has, like, what, a 200k context window or something, right? And even then, I don’t know how many of you have experienced that you have to compactify the context because you run out of context window, right? What this plot shows is, on the x-axis, I don’t know if you can see it, the year, and on the y-axis the context window. What do we see here? Almost a flat plateau after a while. And by the way, that’s also experimental: a 10 million context window is experimental. The closest is 1 million; that you can achieve and play with.

But it has plateaued. People are not talking about a 100 million context window and more. And it is very clear to people that a more complicated task means a larger context window. And we believe this is what the next race is. At least I am very bullish that this is what the next race is. You want to do complex automation, very complex automation, right? We talk about building agentic workflows and all that. But I believe we are underestimating how much complex automation we want to do. And we believe that we are underestimating how complex common sense is, right? Common-sense workflows require a lot of reasoning, and it will not happen unless we have large context windows.

But large context windows are plateauing, and we are talking about some of the frontier models. So let me tell you what the current problem is. The mindset is: okay, the kernel remains the same, which is full matrix multiplication. Let’s apply bandages like mixture of experts and whatever, stretch as much as we can, and see where we go. That’s strategy number one. That’s probably one strategy that we know of that seemed to work, but it has plateaued. We’ve seen in the previous plot, it has plateaued. What I am bullish on is: we have to rethink to break that plateau. Okay. And again, I’m not going to get very technical. This is an upcoming paper in ICLR. But I want to argue there is a new math, a new way of doing attention.

Again, I'm not going to start uttering words like sharpened softmax and exponentiated whatever. You can read the paper; it's coming out at ICLR this year and will be presented in Brazil this summer. What we have shown is that if you change the math of attention, there is something that gives you the same capability but at a different cost. So it's changing the math, rethinking the math: dynamic sparsity, a sort of sketched way of estimating things. What is interesting is that we have experimented with this. If you look at this plot, on the x-axis is the context window; the y-axis is the latency, the time to first token or tokens per second. The two red plots are the best attention mechanisms, FlashAttention-2 and FlashAttention-3, on the best possible hardware, the GH200, and the green one is actually the new math on a CPU. What is interesting is that if the context window is below 131,000, GPUs are obviously faster, which makes sense. But as I go beyond that, the CPUs dominate.

And actually, it's not the CPU. It's the algorithm. And the reason is that context windows scale quadratically in attention. So you can throw as much hardware as you want, but you cannot beat quadratic complexity. You are throwing a linear number of GPUs at something that grows quadratically. The cost goes 10, 10 squared, 100, 100 squared, and you are just doubling your hardware; that's not going to work. That is what this kind of plot shows. It says something fundamental. So what we are trying to argue, and again, I'm not going to bore you with the math, is that the hope is, and remember the title of the talk, the "how". The how part is the rethink.
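To make that quadratic-versus-linear point concrete, here is a small back-of-the-envelope sketch. It is ours, not from the talk; the FLOP formula is the standard n²·d approximation for full self-attention, and the head dimension is an assumed illustrative value.

```python
# Illustrative sketch (not from the talk): why linearly scaling hardware
# cannot keep up with the quadratic cost of full attention.

def attention_flops(context_len: int, head_dim: int = 128) -> int:
    """Rough FLOP count for one full self-attention pass:
    QK^T scores plus the attention-weighted sum, each ~n^2 * d."""
    return 2 * context_len * context_len * head_dim

base = attention_flops(10_000)
for n, gpus in [(10_000, 1), (100_000, 10), (1_000_000, 100)]:
    work = attention_flops(n)
    # Even with GPUs scaled linearly with context length, per-GPU work
    # still grows linearly in n -- the quadratic term wins.
    per_gpu = work / gpus
    print(f"n={n:>9,}  total={work / base:>7.0f}x  per-GPU={per_gpu / base:>5.0f}x")
```

Going from 10K to 1M context is 100x more tokens but 10,000x more attention work, so even 100x more GPUs leaves each GPU with 100x the load.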

We have to rethink how attention is done. Because in the current race, if you have 10,000 GPUs and the other person has 1,000, you are 10x ahead of them, but that race is plateauing because of the quadratic complexity. So yes, you will always be ahead because you have more GPUs, but not very far ahead. If we change the math, then we can actually break that plateau, and I believe we can unlock capabilities of the next level. We will see the automation that we hope and expect is possible. Again, I will say: parameter counts and benchmark hacking, we have seen enough of. We now want to see complex tasks getting done. And it is my belief, and again, I am an academic, so one of the things you get to do as an academic is ask hard questions and think about them for a very long time.

So for me, the next race is: can I break the barrier of how complex a task we can solve with LLMs using this context window? I believe if we can make progress there, that's very tangible, real progress. So can we get to a 100-million context window faster than others? I think we can. And with that, I would stop my talk.

Kenny Gross

So the energy savings come from the three-orders-of-magnitude lower compute costs. We've done four presentations at NVIDIA GTC conferences demonstrating the reduction in compute costs with real data. The other aspect I wanted to mention in terms of data centers is prognostics for avoiding downtime in servers and chips, CPUs and GPUs. We developed, and long ago published a few dozen publications on, the new AI MSET. MSET 3 is capable of detecting all the mechanisms that cause CPUs and GPUs in data centers to fail, days and often weeks in advance of failure. This avoids downtime. Now, in data centers five years ago, downtime wasn't a big deal, because if you're just doing web-serving applications, or even database applications, there's a lot of horizontal redundancy.

With the new AI workloads, though, when a company is running a five-day training run with their LLM, system board failures are very costly. One spinoff of MSET for data center applications is called electronic prognostics, where we're able to detect all the mechanisms that lead to failures of chips and system boards in the data centers. And the final point I wanted to make, with that bottom bullet there, is: if you've got data. What we always tell other industries, and MSET has been used in locomotives, wind farms, all aspects of utilities, and all defense domains, land, air, sea, and space, is this: whatever system you're using now, if you have data, historian data, we welcome doing a blind bake-off with your own data.

Whatever technique you're using now, third-party commercial or homegrown, we'll be happy to demonstrate with your own data in a bake-off where the winning criteria are lowest compute cost, earliest detection of incipient anomalies in the assets, and the lowest false-alarm and missed-alarm probabilities. With conventional approaches, it's the false alarms that cause a lot of losses, from unnecessarily shutting down revenue-generating assets that aren't actually broken. And missed alarms can be catastrophic; in life-critical industries, they can be extra catastrophic. So that's an overview of our AI MSET. I'll turn it over now.

Bernie Alen

Okay. Thank you, Kenny. So in one of the use cases where we used the MSET method for process anomaly detection, the cost of running the use case was 1/2,500th. It's not just a 10x or 20x reduction; you're talking about a reduction of 2,500 times, right? That's the power of these kinds of methods. So certainly, before you start implementing, whatever AI method you're using, whatever solution you're going after, educate yourself on these kinds of methods that exist. Feel free to reach out to us. Not everything needs a massive GPU cluster to be solved, right? Okay. So we're going to go to the panel now and ask the panel some questions.

So I'm going to start with all the panelists over here. We can first talk about how things have never been this crazy. I mean, in the last two, two and a half years, the world has kind of gone mad in some ways, because everybody is chasing this and everybody feels a great sense of urgency to chase it. So how do you all see this? I think there are a lot of challenges in AI. Maybe we'll start with you, Abhi, and then we'll come down closer to me.

Abhideep Rastogi

Sure. So what I've observed in the recent past, say the last two to three years, is that we started with the Gen AI chatbot; that was a very big thing at that point. Now I can see the trend that everything is moving from Gen AI chatbots towards workflow automation, where agentic AI and agents are running at the executive level as well as inside enterprise tools, already executing complete workflows that used to be handled by a person or a particular role. So it's that automation which I see in our current organization, and even when I talk to other clients, they are also looking forward to these kinds of things.

That's my understanding on this.

Bernie Alen

Kenny? Kenny, you want to comment on the same thing? What are the challenges you're seeing, given how we're doing things now and how fast we're going? What is your prediction for what's coming our way?

Kenny Gross

One of the early challenges with MSET pattern recognition was getting the sensor signals out of the asset to a central location. That challenge has now been solved for most industries, and certainly for the data center industry. In the early days, when the two biggest locomotive manufacturers in the United States licensed MSET, they had to put a computer on the train to monitor the signals, because there were no good techniques for offloading the signals from a locomotive. Now there are good wireless networks for bringing out the sensor data. And back to data centers: at Sun Microsystems we developed computer system telemetry that picks up all the signals from all the sensors and processes inside servers.

Voltages, temperatures, currents, fan speeds, and in many cases vibrations are in the servers too. Thousands of variables. We've made a very lightweight harness that doesn't interfere with the customer's compute capacity at all; it runs on the system processor and brings the telemetry out. So that challenge has been solved. And now with the latest GPU servers, there is a commercial system, Prometheus. And on December 15th, NVIDIA released freeware telemetry for all their servers and clusters. So that challenge has been solved. And we at STEM can show you how to stream the signals from any asset, airplane engines, autonomous vehicles, into a compute box that is lightweight, on CPUs, not GPUs, and gives real-time prognostics with early warning of incipient problems, not a high-low threshold.

That's what they use now. By the time something hits a high threshold, something is already severely wrong, or the system crashed before it ever got to the threshold. We are able to detect the onset of anomalies below the noise floor; they're buried in chaotic noise, and MSET is able to detect their onset. So that would be the challenge: if somebody doesn't have sensors in their assets, they're going to have to wait until next year's model and put sensors in. But most assets now have lots of sensors, yet lack a good technique to consume that data and give prognostics without having to train somebody to a master's degree. It works out of the box.

We hook up the sensor signals to MSET and get early-warning annunciation of anomalies. And the energy savings are very significant, because the control algorithms now have highly reliable signals going into them. MSET is the only technique that can disambiguate between sensor problems and problems in the assets. So the control algorithms are using fully validated signals, and operation is much more efficient. And if anything starts to go wrong in the assets, you get an early warning of it.

Bernie Alen

Thank you, Kenny. Anshu, what’s your take?

Anshumali Shrivastava

So, I mean, again, as I already said, I'm very bullish on long context. Let me give you an example. By this time, we all know chess is easy, math is easy, programming is easy, right? I think common sense is very hard. Common sense is very hard in short context, even for humans, and easy in long context. If I keep talking with you over a period of time, no matter how much I think, I'll figure out that you're bored. It will take me some time, but I'll figure it out. So you need long context to figure that out. And machines are right now gaining context, but they are gaining it quadratically, which is what I talked about.

So I believe the biggest complaint in enterprises right now is that agents do not have common sense. They hallucinate. They are not 99% reliable agents; they are like 50-60% agents. To go from 50-60 to 99, you need that context: you work with a human, and over a period of time you figure out, damn, this guy needs this. And that will happen when we have really long context. So I will just double down on what I said: I think the next thing is efficiency and long context.

Bernie Alen

Very good. And what do you think? You have a front-row seat to everything that's going on around here, so you have not only the large context, you have the relevant context too. So tell me.

Ayush Gupta

First of all, thanks for the question, and a very good evening to all who have joined. We are in the space of unifying the entire data universe of an enterprise and providing an agentic data-analysis platform. What that means is that a normal business user, who so far was used to just static dashboards, can come onto the system, have conversations, get proactive insights, and make better decisions faster. The most exciting part in that context for us is how proactive decision-making and the right quality of insights can help improve an enterprise's top line, bottom line, efficiencies, and so on. For instance, we have increasingly seen that the need for the big data warehouses and ETL pipelines that so far had to be maintained will go down in future, because so far everything had to come to a single source-of-truth table from where human analysts could query and get insights, or power these Power BI dashboards.

But now, with agentic analysis that can connect to different data sources and different modalities, not just tables or PDFs but also images, presentations, documents, etc., you might not need to create multiple replicas, copies, and versions of the data set, the bronze, silver, and gold tables. You might just connect to those native systems of record directly and get the insights required. We have seen that happening with a lot of our enterprise customers: they see value when agentic analysis gives their business users very good insights. So that is the most exciting part for me: how can data analysis give ROI to an enterprise? And the challenge there is exactly quality and reliability. How do you make sure those insights are of quality? Not just "hey, the sales are down," but why are the sales down, and what next steps can you take to fix them? If you have not achieved your incentives or targets in your store, what is going wrong, and what are the other stores doing that you could learn from to do better? And the other is the reliability of insights.

It's not just getting it right 1 out of 10 times; it's getting it right 10 out of 10 times, even with questions that are less known or unseen, and unlocking value. And lastly, I touched on the ROI point, and that is where there is synergy with what STEM is doing. In the US, it's still fine to charge roughly a dollar for one kind of insight; if I do the rough math, that still comes out to a decent enough ROI when you are paying $125,000 to a data analyst for the same insights, in case you have to hire one. But in India the cost has to come down even further; it has to be probably 1 rupee per conversation to unlock the same quality of insights.
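As a rough sketch of that arithmetic: the $125,000 salary, the roughly $1 per insight, and the 1-rupee target come from the discussion, while the annual workload and the exchange rate below are our assumed, hypothetical numbers for illustration.

```python
# Back-of-the-envelope ROI sketch. Salary, per-insight price, and the
# 1-rupee target are from the panel; workload and FX rate are assumed.

analyst_salary_usd = 125_000     # US data analyst, per year (from the talk)
cost_per_insight_usd = 1.0       # rough US price per AI-generated insight
insights_per_year = 50_000       # assumed annual workload (hypothetical)
inr_per_usd = 83.0               # assumed exchange rate (hypothetical)

ai_cost_usd = insights_per_year * cost_per_insight_usd
print(f"AI: ${ai_cost_usd:,.0f}/yr vs analyst: ${analyst_salary_usd:,}/yr")

# India target: ~1 rupee per conversation.
target_usd = 1.0 / inr_per_usd
print(f"1 INR/conversation is ~${target_usd:.3f}, "
      f"i.e. ~{cost_per_insight_usd / target_usd:.0f}x below the $1 US price")
```

The point of the sketch is that the India target is not a marginal discount; it demands inference that is nearly two orders of magnitude cheaper than the US price point.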

And the major cost driver is the GPU. How do you get cheaper inference? That is where I'm excited about what you guys are doing at STEM. We host our own models many times, and we are also one of the companies training SLMs to power this use case. So the exciting question for us is: can we have an alternate architecture that scales and gives us a very cheap cost of inference, so that we can offer the same technology at much greater scale?

Bernie Alen

Very good. And before I go to you, I want to say that this is why I think a lot of these solutions can be perfected in India: because India is going to throw the toughest problems at us. We've got to solve these at a massive scale. India has more people than anywhere else; everybody knows that. But India also has more mobile devices than India has people. And you talk about sensors: tens to hundreds of thousands of sensors, all coming in from a very large population, and you are telling me I need to give it to you for a rupee, right? So India is going to throw the toughest problems at us, and as we…

I saw this somewhere: built in India, but for the world. So I think if we solve these problems here, then we have wonderful solutions for everywhere else, right? Okay. So what excites you? You just got into IIT Madras, and you are doing well, thank you for that, even though you didn't go to BITS like I asked you to. So go ahead and talk about what is exciting to you and where you see the challenges.

Kevin Zane

I think I'll start with the challenges on this one. The challenge I'm going to talk about is the sustainability of AI, because that's something that's grown increasingly relevant of late. As Anshu here said, we're rapidly approaching a hard limit on how scalable GPU-based infrastructure is, and given the very large impact on the environment, on the water and power required to fuel these GPU server stacks, I'm excited mostly about what STEM is doing: using better algorithms to increase AI's efficiency and speed without taking up massive amounts of power and water and damaging the planet in the process.

Bernie Alen

Very good. We are taking that water and that power and the planet from you; that's the key point, right? This is very important. By using this expensive infrastructure, we create other high costs: I need more power, I will generate more heat, therefore I need to be cooled down, the cooling needs more power, we need more power plants, and everything will break down, because none of these are very reliable systems. We need to be very careful about what we are doing to the planet by moving this fast and believing that this is the only method out there.

So it's a very high responsibility and a high burden on everyone to understand the other methods that exist: good mathematics, so that the software can reduce the hardware requirements. That's the sustainable method out there. That's the responsible method out there. Okay, let's go to the next question. It's about process. I'm going to start with you there, Abhi, from Tata. Once we know what to do, how do you take an organization through the change of going from manual processes to automation, and then to automation of decision-making, which is where the autonomous nature and artificial intelligence come in? And we've got to address what it means for people who are so scared about job loss and everything else, right?

So talk about the process.

Abhideep Rastogi

So in our organization, we follow multiple stages. If any use case comes to us, anyone asking to perform certain tasks through AI, that's a very broad ask, right? So we start with stage zero: what's your aim in using AI? Is it cost reduction? Is it revenue? Or is it something you want for customer experience? Once we finalize that, we come to stage one, where you map AI to the opportunity you're handling: okay, I'm interested in revenue generation, so it will attach to the finance department, and how will a finance application be useful for that?

So that's where all the stages come into the picture. Once you finalize stage one, the next stage is your data. That is the critical part of the journey and the transformation: where is your data? Does your data have quality? Does data lineage exist? What are the sources of the data? Is it legacy data? Is it already clean, or does it need to be transformed into clean data? That is the big picture where the data part comes in. Once you have the data and all your alignment is done, the next stage is your architecture strategy. That's a big umbrella. First you have to finalize your deployment strategy: are you looking at a GPU or a CPU? What type of deployment are you planning, on-premises or a hyperscaler? Once you finalize the deployment, then comes the model: are you looking at an SLM or an LLM, and what else needs to be done? Then, where are you going to host the model? Once your architecture is finalized, your compute strategy also comes into the picture: are you going to run on virtual CPUs, or is it something you can run on your local system? It depends, use case to use case, right?

Once you have done that, what we prefer is a pilot execution, where we get to know what the accuracy is, estimate the ROI, and see how this particular use case will achieve a particular target. Once this is done, governance comes into the picture as the next stage, where you put in guardrails and your policies: are there GDPR compliances, or anything like HIPAA where healthcare is concerned? Once your governance is finalized, you move to platformization, from a POC to a productionized, enterprise-level deployment, where you have everything sorted.

So you have all the details of what you are going to do, and you are ready to go live with the AI transformation. These are the stages we usually follow. The next stage, which we follow internally, is for your employees: how are they going to learn what we did? That is even more important, because in future this will keep coming up as new use cases, so you need that background to have better alignment. So this is how we usually approach transformation in our organization.

Bernie Alen

Very good. I'm going to take that and segue into what I wanted to ask you, Ayush. He talked about governance, right? And I don't think we have completely cracked the code there; we don't have a code on that, right? One of the questions we wanted to ask you, given what you're doing, is this: at one point, AI was synonymous with ChatGPT. Outside of our technical circles, if you talk to a doctor or a lawyer, they say, oh, I'm using AI. What are you using? I'm using ChatGPT, right? And those two professions I mentioned are especially concerned about governance. So can you talk more about that aspect? When you take the models to the end user, why is it not all just ChatGPT?

And is it governable? If you have these big, large, open-source models and whatever you're building on top of them, at the end of the day, is the intelligence that's been created governable?

Ayush Gupta

So ChatGPT has definitely been very instrumental in democratizing AI and has become a symbol of what AI means in the new world, and I'll give them credit for that. But for an enterprise, ChatGPT does not solve the majority of the problems. It can be good for light tasks like email writing or some personal planning. But in an enterprise, it is about taking real decisions, or even doing actions like: I'm on the customer success team and I want to create a presentation for my customer about their usage in the last month, the issues they had, and how much time we took to solve them.

This is something that cannot be done on ChatGPT, for two reasons. One, it does not know your enterprise data. You cannot connect all your know-how to systems like OpenAI's, because somewhere or other, OpenAI and Anthropic are all tracking what kind of activity is happening on top of their APIs and then planning their next expansion as an application. We've seen that with the Cursor coding use case transitioning into Codex from OpenAI and Claude Code from Anthropic. So you never connect your enterprise data, because of compliance and privacy, tying back to the data governance aspect of it. Second is the context.

Now, what separates a steel company in Texas from a steel company in another region of the US, or any one company from another even in the same vertical, is the context: the culture of doing business, the KPIs, how the processes are set up, what actions they take when doing an RCA, what the decision-making activities are. That is basically the core of the business. And that core of the business is not known to systems like ChatGPT or Claude, for two reasons again. One, they don't know that process; the data is not explained to them.

Second, they are very general, stateless APIs that will never be able to understand those nuances without learning. So those are the things that become the reality of enterprises, and that's why the ChatGPTs are not solving the real enterprise problem: the context, and the understanding of the business itself.

Bernie Alen

Very good. So, leading on from that, Kevin, what I would ask you is this. He said that large enterprise context needs to be understood by open-source models, and that there's a responsible way to do that, right? You cannot just release all enterprise information to the public. But he also said that we need things like root cause analysis, which leads to deterministic AI. So talk about deterministic AI, and also the sovereignty aspect he raised: that we may use public-domain models where it makes sense, but we need to do it in such a way that the data remains completely sovereign.

Go ahead. Talk about it.

Kevin Zane

See, deterministic AI is a solution to a very specific problem with most modern large language models, which is that they're quintessentially probabilistic. You can give ChatGPT the same prompt twice and you will get different results. ChatGPT also has the capability to just make stuff up; it is not bound to fact, it is not bound to a stringent set of rules. And that's great when you want to generate a picture of a cat on the Eiffel Tower or write a Shakespearean ballad. But if you need to apply it in production, then hallucinations and false data are not something you can afford in those kinds of situations, say cybersecurity or the medical field. That's the very specific problem we use deterministic AI to solve.

At its core, it's an architectural response to this problem. We don't eliminate machine learning entirely; we just bind it within a set system and a set of rules. The objective isn't open-ended generation, but controlled and auditable execution. So generally, I would say there are a few very core principles to this sort of approach. Your system has to be predictable, as in it must give the same output for the same input, because that directly leads to auditability. Which is a very difficult thing to do while maintaining intelligence.
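A minimal sketch of that "bind the model inside a rule system" idea might look like the following. Everything here is a hypothetical illustration, not a product API: `model_fn`, the action set, and the toy model are all ours.

```python
# Minimal sketch of deterministic-AI binding: canonical input, greedy
# decoding, a hard rule set, and an auditable record. All names are
# hypothetical illustrations.

import hashlib
import json

ALLOWED_ACTIONS = {"open_ticket", "escalate", "ignore"}  # hard rule set

def deterministic_agent(model_fn, event: dict) -> dict:
    # Canonicalize input so identical events always serialize identically.
    payload = json.dumps(event, sort_keys=True)
    # Greedy decoding (temperature=0) is the usual lever for repeatability.
    raw = model_fn(payload, temperature=0.0)
    # Bind: anything outside the rule set is rejected, never executed.
    action = raw if raw in ALLOWED_ACTIONS else "escalate"
    # Audit trail: same input -> same hash -> same logged decision.
    return {
        "input_hash": hashlib.sha256(payload.encode()).hexdigest()[:12],
        "action": action,
        "overridden": raw not in ALLOWED_ACTIONS,
    }

# Toy stand-in model, deterministic by construction for the demo.
toy = lambda text, temperature: "reboot_prod" if "disk" in text else "open_ticket"

event = {"source": "sensor-7", "msg": "disk latency spike"}
assert deterministic_agent(toy, event) == deterministic_agent(toy, event)
print(deterministic_agent(toy, event)["action"])  # out-of-policy -> "escalate"
```

The design choice is that the learned component proposes, but only a fixed, enumerable rule set disposes, which is what makes the system's behavior replayable and auditable.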

Bernie Alen

We are all playing around with creating intelligence, but truly, it's been done once before. Whatever faith you believe in, it's been done once before. And what were we all told? You have your free will, go... So once you've created intelligence, putting it in a box is a very, very difficult thing to do, right? But if you cannot put it in a box, how can you have a governance function? At some point it's going to say something that embarrasses your customer. So how can you have a governance function? Some thoughts?

Abhideep Rastogi

So there are a couple of rules we need to apply; that's what I can think of at this point. There are rules in terms of GDPR, and the DPDP Act is coming into the picture for India specifically, and we follow those rules where they are applicable. If not, then we may have to think about it from the policy side of the company, whether they will be applicable or not. There are scenarios where, to be very frank, PII data doesn't matter much in India yet, but if you're operating in the US or elsewhere, it does matter. So we have to take care of those scenarios when we are implementing.

So at our organization, we have to make sure that we are following all the applicable policies and that all the guardrails are in place. That's my understanding.

Bernie Alen

Kenny, any thoughts on governance? I know that you deal with sensor data, which comes from measured things, not made-up stuff, for the most part, unless the sensor itself is misbehaving. And when the sensor misbehaves, what do we call it? We call it sensor "biology". You see how we blame the human race for that. But anyway, from that governance point of view, you live in a less complex space, I think, than people who are generating user content. So what are your thoughts on governance?

Kenny Gross

One of the biggest challenges for governance is in applications with human-in-the-loop supervisory control of complex processes and systems. This challenge, which this year has become the biggest challenge for defense AI, is called situational awareness. With situational awareness, you can have a highly trained human operating a ship or an airplane, and there can be false alarms in the process; that's a problem. We talked about hallucinations with ChatGPT and so forth; in physical systems, it's false alarms on sensors. And I keep going back to the false alarm rate, because as the number of sensors goes from 6 to 600 to 50,000, the probability of false alarms multiplies up with it.

So an airplane pilot has been highly trained for every situation, and when they test pilots for their license in the big simulators, they throw in a second problem while the pilot is dealing with the first. The challenge with false alarms is that you can have the most highly trained human, and if red lights are going off in different places from false alarms, the human reaches cognitive overload and makes stupid mistakes, long before any hallucinations out of AI. Just one example, and I'm not talking out of school or giving away secret information: the US Navy in the last five years has had three spectacular accidents, in broad daylight, with the latest instrumentation on ships, where they ran into a big oil barge or a fishing vessel. Hundred-million-dollar accidents, some of them resulting in loss of lives. The human bridge watchers, as they're called, sit in a sophisticated control room; if you imagine the cockpit of a 777, multiply that by a hundred. You have highly trained humans watching all these signals, and if too many things are happening and too many false alarms are going off, the human gets mixed up and reaches cognitive overload.
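A quick illustration of how per-sensor false-alarm rates compound across a fleet of sensors. Only the 6/600/50,000 sensor counts come from the talk; the 0.1%-per-day per-sensor rate is our assumed number for illustration.

```python
# Illustrative sketch of the compounding false-alarm problem.
# The per-sensor rate is an assumed number, not from the talk.

p_false = 0.001  # assumed per-sensor false-alarm probability per day

for n_sensors in (6, 600, 50_000):
    # P(at least one false alarm) = 1 - (1 - p)^n, assuming independence
    p_any = 1 - (1 - p_false) ** n_sensors
    print(f"{n_sensors:>6} sensors -> {p_any:.1%} chance of a false alarm per day")
```

Under these assumptions, 6 sensors almost never false-alarm, 600 false-alarm on nearly half of days, and 50,000 do so essentially every day, which is exactly the cognitive-overload regime described above.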

We've published half a dozen papers in international cognitive science conferences around the world and demonstrated how MSET is able to eliminate that problem for monitoring complex processes where a human has to make decisions. And the one technical point I'll make, and this is in a lot of our journal articles: MSET has the lowest mathematically possible false-alarm and missed-alarm probability for

Bernie Alen

So, Anshu, we've collected all the requirements in this conversation, and now we're going to give them all to you to actually solve, right? We've said there's a step-by-step process to doing all of this; there's a sensor explosion; there are sovereignty and RCA-type requirements; and there's a user content and data explosion. Finally, when we map it all out, where is the compute to do all this? A lot of these algorithms are complex algorithms, right? And current methods are not taking us there, right?

Anshumali Shrivastava

So I'll just add one thing here. Look, if you look at the progression of AI, everything still runs on one of the most powerful methods humankind has: trial and error. How do I know that prompt engineering works? Anybody who has worked on prompt engineering knows you keep trying, and at some point you suddenly see a prompt solve 80% of the problem. That's a good prompt, and then you hill-climb from there. Right now we are dealing with a new entity, a new species. We are trying to co-exist with it, and we don't understand it. It's not very different from my brain: sometimes it works, and maybe on Tuesday it doesn't, because of whatever my schedule is, but I have learned over time to live with it. We are asking some very important questions about governance and guardrails, and I think we will solve a lot of them with trial and error. But the most important thing is that trial and error should be regretless. If a million tries costs me hundreds of millions of dollars, I will be careful. So I will still say the biggest hurdle to advancement is the ease with which I can try, err, and experiment, and that ease is directly proportional to how much energy we are burning and how much money we are paying.

Imagine if compute were free. Imagine I give you the best model and as many queries as you want. Now take the hardest problems you are facing: governance, accuracy. I am pretty sure that if you sit down and hill-climb, make ten agents, let them talk with each other, figure out some strategies, go to dinner, maybe sleep overnight while all of them keep talking, running the most expensive model at the highest possible latency, you will make remarkable progress. But you won't be allowed to do that, and that is why I come back again to why this panel is so important to me: everything at the end of the day boils down to efficiency. It's like raising the tide, because it raises all the boats. All the interesting problems will be solved if you are allowed enough trial and error. That's my belief.
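The hill-climbing loop Anshumali describes can be sketched in a few lines. Everything below is hypothetical: `score` stands in for an expensive evaluation run against a test set (the real cost driver he is pointing at), and the candidate edits are toy stand-ins for prompt variations.

```python
def score(prompt):
    """Hypothetical evaluator: fraction of a held-out test set the prompt
    solves. In reality every call here is a paid model run, which is
    exactly where the trial-and-error cost comes from."""
    return min(1.0, 0.5 + 0.1 * prompt.count("step"))

# Toy local edits; in practice these would be rephrasings of the prompt.
MOVES = [" step", " carefully", " please"]

prompt = "Solve this"
best = score(prompt)
for i in range(30):                        # 30 trials = 30 paid evaluations
    candidate = prompt + MOVES[i % len(MOVES)]
    s = score(candidate)
    if s > best:                           # greedy hill climb: keep improvements
        prompt, best = candidate, s

print(best)   # climbs from 0.5 to the evaluator's cap of 1.0
```

The point is not the toy arithmetic but the loop shape: every iteration spends compute, so the cheaper a single trial is, the more of the search space you can afford to explore.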

Bernie Alen

And that's the thing: the title of the panel is "Rethinking Intelligence for a Constrained World", right? We can't just all mint money; I've tried that, it doesn't work. So it's a constrained world. How do we solve this problem? This is probably the largest conference ever. This is not a conference, this is the AI Olympics. People are talking about 700,000 attendees. That is the kind of scale we need to solve for. Think about it: every day in the AI space is going to be this busy, this heavy, this crowded, with this much data. We can't keep throwing expensive infrastructure at the problem.

We have to get better. We have to understand that all of these other methods exist, implement them, and have sustainable AI. So, questions for the panel, everybody? There's enough time for all of you to ask at least one question, so go ahead. Or if you just have an opinion or an input, that's fine too. Who wants to go first?

Participant

There is this trend of "AI will solve everything" coming into the picture. You talked about hallucination, and in a lot of engineering, whether it is automotive, ships, aircraft, or naval, the solution is not always probabilistic. It is also binary: sensors give zero or one, and you need to decide. So, applying this in the real engineering world, where we have to be deterministic to be safe: you said MSET could solve all these problems, but if you could demystify MSET for me, that would be great.

Kenny Gross

Oh yes. The best way to demystify MSET and the way it works: the conventional approach for monitoring signals from an asset, say an automobile or a locomotive, is to put high and low limits on each variable. If the engine gets too hot, a red light comes on the dashboard. If the fan gets a bad bearing and doesn't spin fast enough, that causes a problem. The coolant can get too hot, pressures too high, RPMs too low. This has been the conventional approach for decades: high-low limits, thresholds. The problem that will never go away with putting high-low limits on individual signals, which is called univariate monitoring, is this: when you're monitoring noisy physics processes, if you want earlier warning of a small developing problem, you reduce the thresholds.

But then spurious data values trip the thresholds, and you're shutting down a locomotive in the middle of Kansas with a bunch of cattle on the back. They send the repair people out: oh, there wasn't anything wrong with it, it was a false alarm. It's very expensive to shut down a manufacturing line over a false alarm from one of its assets. And people take their car in on a Saturday because of the red light: oh, there wasn't really anything wrong, that's the good news, you should be happy, it was just a false alarm. So to avoid the false alarms, they raise the thresholds. But now the system can be severely degraded before you get any alarm, and it's in no way predictive.

So let me say it this way: high-low thresholds are reactive. MSET works fundamentally differently. It learns the patterns of correlation between and among all the signals. Some signals go up and down in unison; some go up when others go down. It learns those patterns, and it detects an anomaly in the pattern days, and often weeks, before you ever get near a threshold. That's the fundamental difference.
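Kenny's distinction between univariate thresholds and correlation-pattern monitoring can be illustrated with a minimal sketch. MSET itself is a patented Oracle technique and is not reproduced here; the toy below uses a plain Mahalanobis distance, which shares the core idea of scoring a sample against the learned joint behaviour of the signals rather than each signal's own limits.

```python
import numpy as np

rng = np.random.default_rng(0)

# Training data for a "healthy" asset: two signals that normally move in
# unison (think RPM and coolant flow), each with a little sensor noise.
t = rng.normal(size=2000)
healthy = np.column_stack([t + 0.05 * rng.normal(size=2000),
                           t + 0.05 * rng.normal(size=2000)])

mu = healthy.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(healthy, rowvar=False))

def pattern_score(x):
    """Multivariate anomaly score: Mahalanobis distance from the learned
    correlation pattern (large = the signals no longer move together)."""
    d = x - mu
    return float(np.sqrt(d @ cov_inv @ d))

# A degraded state: each signal sits comfortably inside its own historical
# high/low range, but one goes up while the other goes down.
sample = np.array([1.0, -1.0])

lo, hi = healthy.min(axis=0), healthy.max(axis=0)
univariate_alarm = bool(np.any((sample < lo) | (sample > hi)))
multivariate_alarm = pattern_score(sample) > 5.0   # demo threshold

print(univariate_alarm, multivariate_alarm)   # False True
```

The univariate check stays silent because neither signal crosses its own limit; the pattern score fires because the correlation is broken, which is the earlier, quieter warning Kenny is describing.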

Bernie Alen

Do you play music? What do you play? Can you hear me? Okay. You play chords? So it's simple, right? When you're playing chords, if you're like me and do a bad job of it, even an untrained musician can tell I'm doing a bad job. Why? Because those notes need to go together. With independent notes, maybe you cannot figure it out. But if you're playing chords, anybody can say that guy sucks, right? Same way: looking at the variations of a single variable can only take you so far, but with the multivariate part of MSET, which looks at a set of sensors jointly, you can figure out things that are starting to go bad.

Misery loves company, have you heard that? Similarly, anomalies don't like to be alone; they're always hiding among other anomalies, right?

Participant

Okay, yeah. I came in by accident, but I was really interested to hear what's being discussed, especially MSET and its power to reduce compute and move work onto the CPU. It really was music to my ears. I have an extended question: what happens with the current ecosystem, where plug-and-play and interoperability exist across the entire data-engineering stack, RAG, MCPs, and all the rest? Is it possible to plug and play this thing? As I understand it, MSET sits at a foundational, fundamental layer, so how does it merge with the current set of LLMs and services?

Bernie Alen

We have to look at the problems we are trying to solve and how to build the correct architecture for them. The quick answer to your question is: absolutely. Through sensor augmentation we are instrumenting a big field with sensors, and we'll certainly bring that data in and run it through techniques like the multivariate technique for anomaly detection, predictive maintenance, and so on. But beyond that, if you want a control system that deploys the decision-making, there will be other MCP-based solutions that we develop. So it does all integrate. That's why we have to look closely at the problem and make sure that not every problem starts with downloading a large language model.

Participant

I just have a follow-up. Is there an open ecosystem where we can go and see this and plug it into our current infrastructure and GenAI services?

Bernie Alen

Yes, because remember, STEM Practice Company is an Oracle Corporation partner, so there is a lot that we do in the open-source community and open integration. We can certainly spend time with you and walk you through all of that. Any other questions?

Participant

Hello everyone. What is the most critical risk about AI that policymakers, businesses, and users are currently misunderstanding?

Bernie Alen

Avi, you want to take that? And then Anshu, you can go.

Abhideep Rastogi

It depends on the use case, first of all, and at this point on the country. If you think about ESG, the EU act is one of the first such acts released anywhere in the world. Similarly, from a data point of view, the DPDP Act is coming into India. We have been following all the policies being implemented, and we are also thinking ahead of time: an AI act will soon be coming into the picture in India as well as in other countries, and we work out what we need to do to make sure it is being followed properly. That's the process we follow.

Anshumali Shrivastava

If I understand correctly, the question is: what is the misunderstanding that policymakers have about AI? Sorry?

Participant

No, sir, it's not only a misunderstanding by the policymakers. The DPDP Act was enacted back in 2023, but even after a long time it has not been enforced and implemented in the current situation. Some IT laws exist in our country, I know, but they are not sufficient for present and future cyber-related crimes, and there are AI loopholes, because we cannot get accuracy in the law with the IT laws alone, especially the old IT laws. The DPDP Act, okay, it's enacted, but what is the enforcement date, and when will it come?

Abhideep Rastogi

In my understanding, the process basically needs to be enforced by the government, and everyone needs to come together: government, industries, and all the private entities, to make sure it is enforced. I'll give you a very simple example. Think about iPhone chargers: they had a separate Lightning cable, but because of the EU, followed by US policymakers, there is a mandate that it needs to be a USB-C charger. These forces come from the top and then have to be followed. But it does matter when an organization starts implementing them: when a government releases something, it can definitely be followed up first by industry.

Participant

Hello, good afternoon to all of you. I am a master's student in mathematics and I want to do research in mathematics. As I have seen, there are advances in AI, and math is also integrated into machine learning. I worked on a project on cancer-detection techniques where 70% of it used AI, like neural networks. So in which direction is research going? Is research in mathematics still worth it?

Anshumali Shrivastava

By the way, I am a math major. I think understanding math, even though AI can solve math, is very important; to some extent it helps us understand AI. The closest we have come to understanding AI is through formal reasoning, so math is always a good background. We are doing research on a fundamental understanding of the capabilities of LLMs, and reasoning about LLMs with your formal background is a very good research direction.

Participant

Greetings to all the panelists. I apologize, I wasn't here earlier, so I couldn't hear the whole conversation, but from the questions I can see it relates to hallucinations. As a person with a legal background, I often get the citations wrong from AI, and the case law often comes out wrong. So how far can we rely on AI currently? I know the hallucination problem will evolve and eventually be resolved, but at the current point in time, how much can we rely on AI systems? And is it possible that in the future AI will never hallucinate, that there will be a hallucination-free AI forever? I hope I have made my question understood.

Bernie Alen

Okay, before we go there, I just wanted Kevin to get that slide up. This is a healthcare use case we had the fortune of working on with Tata; we released it a few months ago, and here we have 100% accuracy. It's not the future, it's now. We just do it differently, and non-hallucinating methods are completely possible. With that, I'm going to let Ayush and Anshu address the topic as subject experts, but it's not the future. Demand it first, because you are in a profession where, if you come up with some nonsense, the judge will throw you out of the room. You don't have that luxury. And for a doctor, it can end up much worse. So demand that first; the solutions are here today.

Ayush Gupta

So thanks for the question, first of all. You know, to err is human; to err more is AI. Errors can always be there; the question is what the scope for error is and how to reduce it. First, the system should have proper grounding: it should know your context and everything around it. Second, the thinking process should be auditable. The sub-steps that were taken should be auditable, so as a responsible user I can always see the reasoning that got it to that answer; maybe it made a mistake in one of its thinking steps. Then accuracy: it's very difficult for a probabilistic system to be 100% accurate, but it can still be 100% reliable. Maybe it is 95% accurate, but for the 5% of the time it is wrong, we are able to tell 100% of the time that this answer is probably wrong, that you need to double-check or have an expert audit it. So 100% reliability is definitely achievable; we just need the right processes, thinking, and validations in place so we can really trust the answer, because it is really critical to take actions on.
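Ayush's accuracy-versus-reliability distinction can be made concrete with a selective-answering sketch. The model below is entirely hypothetical, with a perfectly calibrated confidence score (real systems only approximate this); the point is the policy of abstaining below a threshold and escalating to an expert.

```python
import random

random.seed(7)

THRESHOLD = 0.9   # assumed confidence cutoff; tuned per application

def model(question):
    """Hypothetical probabilistic model: ~95% accurate overall, and
    (optimistically) well calibrated, so it is unconfident exactly on
    the questions it gets wrong."""
    hard = random.random() < 0.05                  # the ~5% it fails on
    confidence = random.uniform(0.2, 0.8) if hard else random.uniform(0.9, 1.0)
    return ("wrong" if hard else "right"), confidence

def reliable(question):
    """Selective answering: commit only above the threshold; otherwise
    return None, i.e. escalate to a human expert for auditing."""
    answer, confidence = model(question)
    return answer if confidence >= THRESHOLD else None

outcomes = [reliable(q) for q in range(1000)]
committed = [a for a in outcomes if a is not None]
escalated = len(outcomes) - len(committed)

# The model alone is ~95% accurate, but every committed answer is right.
print(all(a == "right" for a in committed), escalated)
```

The system answers fewer questions than the raw model, but every answer it does commit to can be trusted, which is the sense in which a 95%-accurate model can still be 100% reliable.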

Bernie Alen

So, Anshu, can you address some of the fundamentals of why these hallucinations happen, and why domain-specific training avoids them?

Anshumali Shrivastava

So, let's think about hallucination. Prior systems were non-hallucinating systems; they were like search. By the way, humans hallucinate. If I ask two people to describe the exact same incident, sitting in different rooms, they will give different accounts. The human mind is fundamentally a hallucinating mind. In fact, LLMs became LLMs because we focused on prompt completion, and prompt completion comes from psychology: our mind has a tendency to fill in gaps. That is how you get prompt completion and go beyond search. Search is non-hallucinating, and LLMs have to hallucinate, because they have to be intelligent and smart. So again, this goes back to what Bernie was saying about biology.

Right? If you are like humans, you are like humans. And that also leads to the answer: how do we increase reliability in humans? Well, you train them, and you rely not on one person but on a committee of experts, and then you hold debates and discussions. Likewise, you can have multiple LLMs debate each other; these are the standard approaches. In fact, you can also show it mathematically, and we have a mathematics student here: if I have a process that reduces the probability of error by some delta, I can run that process in a cycle, keep reducing the probability of hallucination, and get arbitrarily close to hallucination-free output.

But again, coming back to it, you have to run a lot of LLMs, and that's a lot of cost. The barrier, again, is cost. Sorry.
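The delta argument Anshumali sketches is simple compounding. Assuming, for illustration only, a 20% initial error rate and a verification pass that catches half of the remaining errors, n independent passes leave p0 * (1 - delta)**n, driven toward zero at the price of n extra rounds of model runs, the cost barrier he ends on.

```python
# Assumed illustrative numbers: 20% initial hallucination rate, and each
# verification cycle (committee debate, cross-checking, etc.) catches
# half of the remaining errors.
p0, delta, n = 0.20, 0.5, 10

p = p0
for _ in range(n):
    p *= (1 - delta)        # each pass costs another round of model runs

print(round(p, 6))          # 0.2 * 0.5**10 = 0.000195 (rounded)
```

Ten passes turn a one-in-five error rate into roughly two in ten thousand, but they also multiply the compute bill tenfold, which is why he calls efficiency the tide that raises all the boats.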

Bernie Alen

Wonderful. Any other question? That side of the room. Okay. Any side of the room?

Participant

Hi everyone, I'm working in an IT company. Should I be loud? Hello everyone, am I audible now, clear and loud? That's the only tone I have; I can't be any louder. Okay, so my question: yes, AI, like any technology, helps the majority as it grows. AI is solving problems, and in future it is going to solve a lot more, giving us industrial solutions, speeding up the software solutions we are currently working on, and helping us in a variety of areas. My question is about students who are in school. We have ChatGPT, Gemini, all the AI tools out there.

And so it's very easy for students in school: they can do their assignments in a minute, or in a few seconds. So how is it helping the students? I don't know whether this is the right question, but are there any steps taken by governments, or by the great leaders of our country and the world, about students' mindsets? Are there any obligations we can apply so students don't rely on such tools? Because they're free, of course, and available over the internet; they can do their assignments in a minute.

I don't know. For college students and for industry employees, it's helping us. But how is it helping school students? There have been no academic changes that I know of; I don't know whether schools are making any curriculum changes in their syllabus. So, any thoughts on that?

Bernie Alen

So, did you all get the question? Because it's a profound, important, and deep question. "Are we screwing up the children?" is what she's asking, right, by allowing them to come up with anything so quickly. I'd love to hear your take. Why don't you go first?

Ayush Gupta

…tasks from AI. Otherwise, you know, it's the same kind of journey we've had with calculators. Everyone knew how to multiply and divide big numbers until they started using calculators, and now even for simple additions you go to the calculator. So first, it's on us personally how much we delegate to AI and lose touch with. Second, all the educators, the pedagogies that form around the use of AI in education, and the careers that start forming in it will themselves metamorphose into what AI means in the education space.

Anshumali Shrivastava

I mean, this is a question that every university is asking, and as you said it's a profound question. Part of the answer has already been given by Ayush: there are certain skill sets where, if I want you to know addition and subtraction, you should not use calculators. But once you have a basic feeling for it, the problem is not about using the calculator; the problem is what you can do with that calculator. Problem solving never goes away, you see what I am saying? Imagine AI makes everybody 10x better; then 10x better is the new average, and we will aspire for something more. Whatever is average is what AI can do, and going beyond that will require ingenuity and creativity. So I agree the education system needs to transform, and we are also learning as we go how to transform it, but the goal will always be: can we solve problems that we could not solve otherwise? That will require us to always think outside the box. It's still an early stage, but a lot of people are thinking and talking about it, and as I said, it will start getting there.

Bernie Alen

That's a very profound question. I have an eight-year-old, so I worry about it every day, but I hope I'm doing the right thing by letting him play with whatever AI he wants to play with. We are ten minutes over time, I'm told, so I need to apologize to the next session. But go ahead, you had a question; will you make that the last question?

Participant

Good evening to everyone. My question is related to AGI, because we are using AI right now, so thinking of the next step: is there any relation between AGI and quantum computing? Is it the case that AGI will only be possible after quantum computers, or is it possible with current processors?

Bernie Alen

That's a wonderful question, but it's a two-hour topic, and I don't know why you waited until the last minute to ask it. We are launching our first quantum enablement center, as a STEM practice company, at two sites in Chattanooga, Tennessee, and we hope we can start launching quantum computing in India as well, because we are thinking through specific problems we can solve with quantum computers today and getting them over there. But that is such a big topic. Thinking about reducing computation needs and reducing cost is the way we are doing it now, and quantum computers use only a fraction of the energy compared to a similar GPU machine that tries to simulate quantum processing, right?

So yes, there's a big energy advantage to going that route, but that's a very deep and profound topic. Thanks for bringing it up, because we hadn't brought quantum up, but it's a broad topic. With that, we have to close, because apparently we are now stealing time from the next panel, which is a difficult thing to do. I see a hand; I'll just talk to you one-on-one outside. Thank you everybody for coming, and I thank the panel. By the way, we have a stall, a booth, whatever we call it: Hall 6, Stall 100, easy to remember, 6-100. Please come there, get our material, get connected, and we can keep the conversation going. Thank you very much. Thank you.

Related Resources: Knowledge base sources related to the discussion topics (34)
Factual Notes: Claims verified against the Diplo knowledge base (5)
Confirmed (high)

“Bernie Alen warned that the AI boom’s focus on acquiring as many GPUs as possible creates expensive, high‑heat, high‑failure‑rate clusters with limited supply and unsustainable environmental impact.”

The knowledge base explicitly states that reliance on GPU clusters leads to high costs, excessive heat, frequent failures, and supply constraints, making the approach unsustainable [S4] and [S5].

Confirmed (medium)

“GPU memory and compute capacity are constrained, and energy availability is tightening, making current AI scaling unsustainable.”

S76 notes that GPU and memory are constrained and energy is tightening, comparing the situation to early, capital‑intensive technology phases.

Confirmed (medium)

“Dynamic sparsity selects the parameters needed for each input on the fly, reducing compute while preserving scaling laws.”

S46 describes dynamic sparsity as picking only the needed inputs dynamically, contrasting it with the traditional model of using all GPU compute.

Correction (high)

“Current context windows have plateaued around one million tokens; the community is experimenting with 10 million‑token windows and targeting a 100 million‑token window as the next milestone.”

S96 lists actual context-window limits for major models (e.g., up to 200k tokens for Claude 3.5 Sonnet), showing that one-million-token windows are not yet standard and the claimed plateau is inaccurate.

Additional Context (medium)

“Context‑window sizes are rapidly increasing, with experiments toward 10 million and 100 million tokens.”

S96 provides concrete token limits for existing models, offering a factual baseline that the reported experimental targets exceed current public capabilities.

External Sources (96)
S1
WS #280 the DNS Trust Horizon Safeguarding Digital Identity — – **Participant** – (Role/title not specified – appears to be Dr. Esther Yarmitsky based on context)
S2
Keynote Address_Revanth Reddy_Chief Minister Telangana — -Participant: Role/Title: Not specified, Area of expertise: Not specified (appears to be an event moderator or organizer…
S3
Leaders TalkX: Moral pixels: painting an ethical landscape in the information society — – **Participant**: Role/Title: Not specified, Area of expertise: Not specified
S4
AI Without the Cost Rethinking Intelligence for a Constrained World — Agreed with:Bernie Alen — India’s scale and cost constraints create opportunities for developing globally applicable sol…
S5
AI Without the Cost Rethinking Intelligence for a Constrained World — – Anshumali Shrivastava- Bernie Alen – Bernie Alen- Ayush Gupta
S6
AI Without the Cost Rethinking Intelligence for a Constrained World — – Ayush Gupta- Abhideep Rastogi – Ayush Gupta- Bernie Alen
S7
S9
S10
AI Without the Cost Rethinking Intelligence for a Constrained World — He has a patent for every day of the year. So his patent count is approaching 365. So that’s Kenny Gross, a master machi…
S11
https://app.faicon.ai/ai-impact-summit-2026/ai-without-the-cost-rethinking-intelligence-for-a-constrained-world — He has a patent for every day of the year. So his patent count is approaching 365. So that’s Kenny Gross, a master machi…
S12
AI Without the Cost Rethinking Intelligence for a Constrained World — Can you hear me better? Is this better? Okay. So, infrastructure cost, it’s a very important topic because everybody who…
S14
AI’s promise comes with a heavy environmental price — Governments worldwideare racing to harness the economic potential of AI, but the technology’s environmental toll is grow…
S15
WS #283 AI Agents: Ensuring Responsible Deployment — Both speakers identify a critical gap in policymaker understanding of agentic AI, with governments asking basic definiti…
S16
How to Prevent an Anxious Generation? — However, it is important to acknowledge that not all screen time is bad. Technology can be beneficial for children when …
S17
Concerns grow over children’s use of AI chatbots — Thegrowinguse of AI chatbots and companions among children has raised safety concerns, with experts warning of inadequat…
S18
Quantum’s Black Swan — Collaboration, especially in Europe, is proposed to ensure equitable access to quantum computing. The idea of creating a…
S19
Record investment in quantum computing driven by AI growth — Funding for quantum computinghas reachedunprecedented levels, with startups in the sector securing around $1.5 billion i…
S20
Adoption of agentic AI slowed by data readiness and governance gaps — Agentic AI is emerging as a new stage ofenterprise automation, enabling systems to reason, plan, and act across workflow…
S21
Irish data authority seeks EU guidance on AI privacy under GDPR — TheIrishData Protection Commission (DPC) is awaiting guidance from the European Data Protection Board (EDPB) on handling…
S22
Driving Indias AI Future Growth Innovation and Impact — So, you know, I honestly don’t think that these are opposing forces. Agility versus security. And, you know, particularl…
S23
EU AI Act oversight and fines begin this August — A new phase of the EU AI Acttakes effect on 2 August, requiring member states to appoint oversight authorities and enfor…
S24
Shaping the Future AI Strategies for Jobs and Economic Development — Healthcare applications received particular attention, with examples from countries like Guyana demonstrating how teleme…
S25
AI Transformation in Practice_ Insights from India’s Consulting Leaders — The speakers demonstrate strong consensus on AI’s transformative potential while acknowledging significant implementatio…
S26
AI Transformation in Practice_ Insights from India’s Consulting Leaders — Summary:The speakers demonstrate strong consensus on AI’s transformative potential while acknowledging significant imple…
S27
When language models fabricate truth: AI hallucinations and the limits of trust — AI has come far from rule-based systems and chatbots with preset answers.Large language models (LLMs), powered by vast a…
S28
Building the Next Wave of AI_ Responsible Frameworks &amp; Standards — This comment addresses a fundamental tension in AI deployment – the mismatch between probabilistic AI behavior and deter…
S29
How to make AI governance fit for purpose? — AI governance must address various risks brought by AI technology, including data leakage, model hallucinations, AI acti…
S30
India’s AI Future Sovereign Infrastructure and Innovation at Scale — AI Adoption Challenges and Production Readiness: Brenno Mello presented concerning statistics that 95% of AI pilots neve…
S31
From principles to practice: Governing advanced AI in action — – **Implementation Challenges Across Jurisdictions**: Participants highlighted the tension between rapid technological a…
S32
HETEROGENEOUS COMPUTE FOR DEMOCRATIZING ACCESS TO AI — These key comments fundamentally shaped the discussion by establishing three critical frameworks: (1) the technical para…
S33
AI &amp; Child Rights: Implementing UNICEF Policy Guidance | IGF 2023 WS #469 — A significant aspect of the study is the inclusion of a diverse group of children. The researchers aimed to have a large…
S34
WS #376 Elevating Childrens Voices in AI Design — Children require distinct legal protections regarding AI because they lack the same capacity for informed consent as adu…
S35
WS #162 Overregulation: Balance Policy and Innovation in Technology — – Challenges of regulating AI, including privacy concerns and potential misuse (e.g. for child exploitation)
S36
Responsible AI for Children Safe Playful and Empowering Learning — Discussion point:Balancing innovation with child development Discussion point:Equity and accessibility concerns
S37
AI Without the Cost Rethinking Intelligence for a Constrained World — A participant argues that in engineering applications, solutions cannot be probabilistic but must provide clear binary d…
S38
GOVERNING AI FOR HUMANITY — As far as ‘safety’ is contextual, involving various stakeholders and cultures in creating such standards enhances their …
S39
Can we test for trust? The verification challenge in AI — Chris Painter introduced the concept of “frontier safety policies” being developed by AI companies to identify when mode…
S40
Policymaker’s Guide to International AI Safety Coordination — Evidence:Aviation safety example where determining safe distances between A380 takeoffs required extensive research, tes…
S41
AI Meets Cybersecurity Trust Governance & Global Security — Thank you. I think it’s a great question because it really forces us to reckon with something as a community that I don’…
S42
When language models fabricate truth: AI hallucinations and the limits of trust — AI has come far from rule-based systems and chatbots with preset answers.Large language models (LLMs), powered by vast a…
S43
AI Meets Cybersecurity Trust Governance &amp; Global Security — “On top of that, we have the probabilistic nature of LLMs.”[1]”And the exfiltration will also happen via AI tools becaus…
S44
AI Without the Cost Rethinking Intelligence for a Constrained World — Beyond 131,000 context window, CPU-based solutions with new algorithms can outperform GPU-based systems GPU-based infra…
S45
Building Public Interest AI Catalytic Funding for Equitable Compute Access — The discussion maintained a consistently pragmatic and solution-oriented tone throughout. While acknowledging significan…
S46
https://dig.watch/event/india-ai-impact-summit-2026/ai-without-the-cost-rethinking-intelligence-for-a-constrained-world — Okay. Yeah. So I came in by accident but was really interested to hear what’s being discussed, especially MSET and the p…
S47
Policies and platforms in support of learning: towards more coherence, coordination and convergence — 267. At the same time, human resources applications are often part of bigger enterprise resource planning software syste…
S48
WS #283 AI Agents: Ensuring Responsible Deployment — Capacity development | Online education Government Perspectives and Regulatory Approaches Need for enhanced education …
S49
Open Forum #17 AI Regulation Insights From Parliaments — Capacity building and education are essential for all stakeholders
S50
THEBROADBAND BRIDGE — In the US, a follow-up study to Smart 2020 produced by the Boston Consulting Group, Climate Group, and GeSI puts…
S51
Artificial General Intelligence and the Future of Responsible Governance — Arguments:Compute is just one element; energy, data, implementation, language, and human education are equally critical …
S52
HETEROGENEOUS COMPUTE FOR DEMOCRATIZING ACCESS TO AI — “as we go from one gig to nine to ten gig … land water and power …”[30]. “defining India’s access to compute, access…
S53
Leaders’ Plenary | Global Vision for AI Impact and Governance Morning Session Part 2 — The UN High Commissioner for Human Rights argues that AI systems should advance human rights by design, requiring alloca…
S54
Pre 12: Resilience of IoT Ecosystems: Preparing for the Future — As AI becomes integrated into IoT systems, proper governance frameworks are essential to ensure ethical and trustworthy …
S55
AI Impact Summit 2026: Global Ministerial Discussions on Inclusive AI Development — Governments have collectively affirmed the importance of building trust by governing AI based on human rights, and that …
S56
Ensuring Safe AI_ Monitoring Agents to Bridge the Global Assurance Gap — But the trust in these systems have to be built over time, and they don’t come without some assurance being put in place…
S57
AI Without the Cost Rethinking Intelligence for a Constrained World — GPU-based infrastructure creates expensive, high heat generating, high failure rate systems with limited supply
S58
AI Without the Cost Rethinking Intelligence for a Constrained World — Bernie contends that the current reliance on GPU clusters creates multiple problems including high costs, excessive heat…
S59
Sustainable development — AI algorithms can help combine renewables through smart grids, analyse energy consumption patterns to balance electrical…
S60
https://app.faicon.ai/ai-impact-summit-2026/ai-without-the-cost-rethinking-intelligence-for-a-constrained-world — And actually, it’s not the CPU. It’s the algorithm. And the reason is context windows scale quadratically in attention. …
S61
https://dig.watch/event/india-ai-impact-summit-2026/ai-without-the-cost-rethinking-intelligence-for-a-constrained-world — So the energy savings come from the three orders of magnitude lower compute costs. We’ve done four presentations with NV…
S62
Generative AI and Synthetic Realities: Design and Governance | IGF 2023 Networking Session #153 — However, it is important to note that there is a potential risk associated with the use of such systems, as they may pro…
S63
When language models fabricate truth: AI hallucinations and the limits of trust — AI has come far from rule-based systems and chatbots with preset answers.Large language models (LLMs), powered by vast a…
S64
What is it about AI that we need to regulate? — A primary concern raised across multiple sessions was that excessive regulation could stifle innovation and economic gro…
S65
OpenAI study links AI hallucinations to flawed testing incentives — OpenAI researchers say large language modelscontinue to hallucinatebecause current evaluation methods encourage them to …
S66
Fireside Chat Intel Tata Electronics CDAC & Asia Group _ India AI Impact Summit — Dr. Khaneja provided insight into why proof-of-concepts fail to scale, noting that whilst organisations achieve impressi…
S67
AI Transformation in Practice_ Insights from India’s Consulting Leaders — A kind of different angle and a question to you. You know, you talked about how AI has impacted some of your work intern…
S68
Building the AI-Ready Future From Infrastructure to Skills — The progression from proof-of-concept to production represents a critical challenge. Resources like AMD’s Developer Clou…
S69
India’s AI Future Sovereign Infrastructure and Innovation at Scale — AI Adoption Challenges and Production Readiness: Brenno Mello presented concerning statistics that 95% of AI pilots neve…
S70
Practical Guide to Cloud Computing Version 3.0 — To ensure a smooth transition to cloud computing, an organization should develop an overarching cloud strategy which cre…
S71
Opening — Challenges of rapid technological change
S72
Scaling AI for Billions_ Building Digital Public Infrastructure — Unlike previous technologies that had time frames for gradual adoption and risk assessment, AI is being rapidly adopted …
S73
Ethical principles for the use of AI in cybersecurity | IGF 2023 WS #33 — Anastasiya Kozakova:I don’t know if this already answers your question, but I’m also curious to know what you think as w…
S74
IGF Leadership Panel Event — Urgency and pace of action needed
S75
Pre 10: Regulation of Autonomous Weapon Systems: Navigating the Legal and Ethical Imperative — Aloisia Wörgette: Thank you. Yes, that works. Thank you, Professor Kleinwächter. Dear colleagues, ladies and gentlemen, …
S76
Invest India Fireside Chat — The discussion maintained an optimistic yet pragmatic tone throughout. Khosla was notably direct and unfiltered in his o…
S77
Deepfakes for good or bad? — The tone was thoughtful and pragmatic throughout, balancing concern with cautious optimism. The panelists acknowledged s…
S78
Any other business /Adoption of the report/ Closure of the session — It maintains the same level of detail from the main analysis text, incorporating relevant long-tail keywords without com…
S79
Socially, Economically, Environmentally Responsible Campuses | IGF 2023 Open Forum #159 — Masami Ishiyama:Thank you. This is Masami from Microsoft Japan. So I’m going to introduce the Microsoft Sustainability I…
S80
High-Level sessions: Setting the Scene – Global Supply Chain Challenges and Solutions — An exploration into trade facilitation and infrastructure modernisation uncovers concrete benefits, as seen in the case …
S81
Business Engagement Session — May Siksik: Thanks, David. Is this working? Okay. So the very first time I was really, truly exposed to interdisciplin…
S82
Strengthening the Measurement of ICT for Sustainable Development: 20 Years of Progress and New Frontiers — Ensuring that indicators reflect the ICT sector’s deepening integration across society remains a priority. During events…
S83
From summer disillusionment to autumn clarity: Ten lessons for AI — As we refocus on existing risks, some accountability is due:how and why did respected voices get carried away with AGI p…
S84
Safeguarding Children with Responsible AI — The discussion maintained a tone of “measured optimism” throughout. It began with urgency and concern (particularly in B…
S85
Delegated decisions, amplified risks: Charting a secure future for agentic AI — The tone was consistently critical and cautionary throughout, with Whittaker maintaining a technically informed but acce…
S86
Responsible AI for Children Safe Playful and Empowering Learning — The discussion maintained a consistently thoughtful and cautious tone throughout, with speakers demonstrating both excit…
S87
Responsible AI for Children Safe Playful and Empowering Learning — The tone was consistently thoughtful and cautious throughout, with speakers emphasizing responsibility and child welfare…
S88
Transforming Health Systems with AI From Lab to Last Mile — The discussion maintained a cautiously optimistic and collaborative tone throughout. It began with enthusiasm about AI’s…
S89
Impact & the Role of AI How Artificial Intelligence Is Changing Everything — The discussion maintained a cautiously optimistic tone throughout, balancing enthusiasm for AI’s potential with realisti…
S90
Building Trusted AI at Scale – Keynote Anne Bouverot — Overall Tone:The tone is diplomatic, optimistic, and collaborative throughout. It begins with ceremonial courtesy and ap…
S91
Re-evaluating the scaling hypothesis: The AI industry’s shift towards innovative strategies — In recent years, the AI industry has heavilyinvestedin the ‘scaling hypothesis,’ which posited that by expanding data se…
S92
How nonprofits are using AI-based innovations to scale their impact — Sure. Thank you, everyone. I live in Silicon Valley and I started life as an engineer. I’ve never pursued that career. O…
S93
Global Standards for a Sustainable Digital Future — Dimitrios Kalogeropoulos, an expert in AI applications in healthcare, argued that traditional static standards are inade…
S94
Bottom-up AI and the right to be humanly imperfect (DiploFoundation) — Furthermore, the speaker expresses a negative sentiment towards outsourcing, emphasising the potential risks involved. B…
S95
Open Forum: Liberating Science — However, the analysis also reveals a growing mistrust towards experts. This trend has been observed in relation to event…
S96
Understanding the language of modern AI — As of mid-2025, context window sizes vary dramatically: ChatGPT 3.5 (free): 4,096 tokens (about 3,000 words) ChatGPT 4o (…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Bernie Alen
5 arguments · 152 words per minute · 4,146 words · 1,634 seconds
Argument 1
GPU overuse & software optimization (Bernie Alen)
EXPLANATION
Bernie warns that AI development is dominated by a rush to acquire more GPUs, neglecting software-level optimizations that could reduce hardware demand. He argues that traditional large‑scale software projects invest heavily in infrastructure optimization, a practice being ignored in AI deployments.
EVIDENCE
He notes that AI applications rely on GPU-based architectures and teams are racing to obtain as many GPUs as possible without asking if the infrastructure is optimized [3-8]. He also points out that a software optimization step, standard in large-scale software development, is being skipped when deploying AI models [17-20].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The discussion highlights the dominance of GPU-based infrastructure and the unsustainable costs associated with it, echoing critiques of GPU-centric approaches in the panel [S4] and the call to move away from GPUs toward CPU/edge solutions [S5].
MAJOR DISCUSSION POINT
Infrastructure cost and optimization
AGREED WITH
Kenny Gross, Anshumali Shrivastava, Ayush Gupta
Argument 2
AI’s power/water consumption harms planet; need efficient methods (Bernie Alen)
EXPLANATION
Bernie highlights that the rapid expansion of AI workloads is driving massive power and water consumption, which threatens the environment. He calls for more efficient computational methods to mitigate these impacts.
EVIDENCE
He describes how AI’s growth creates a need for additional power generation, cooling, and water, which can damage the planet [75-77].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Environmental concerns about AI’s electricity and water footprints are documented in analyses of AI’s heavy environmental price [S14] and the unsustainable GPU-cluster model described in the panel [S4].
MAJOR DISCUSSION POINT
Sustainability and environmental impact
Argument 3
Education on optimization methods essential for responsible AI deployment (Bernie Alen)
EXPLANATION
Bernie stresses that understanding and applying existing mathematical optimization techniques is crucial for responsible AI use. He urges the audience to educate themselves on these methods to avoid wasteful infrastructure.
EVIDENCE
He emphasizes the need to understand mechanisms and methods derived from long-standing mathematics and to spread awareness of these optimization techniques [20-23][75-77].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Calls for upskilling and broader mathematical education to enable efficient AI are reinforced by reports on workforce reskilling and capacity development in AI transformations [S25][S26].
MAJOR DISCUSSION POINT
Education on optimization methods
AGREED WITH
Abhideep Rastogi, Anshumali Shrivastava, Ayush Gupta
Argument 4
Concern for children using AI; balance needed (Bernie Alen)
EXPLANATION
Bernie expresses personal concern about his eight‑year‑old child using AI tools, questioning whether allowing unrestricted AI access is appropriate. He seeks a balanced approach to protect children while embracing technology.
EVIDENCE
He mentions his worry about his 8-year-old using AI and the need to find the right balance [631-633].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Balanced perspectives on AI for children are provided by studies on AI-enabled tutoring benefits [S16] and warnings about safety and emotional risks for young users [S17].
MAJOR DISCUSSION POINT
Ethical concerns for youth
AGREED WITH
Kevin Zane, Ayush Gupta, Abhideep Rastogi, Participant
Argument 5
Quantum computing can provide an energy‑efficient alternative to GPU‑heavy AI processing
EXPLANATION
Bernie argues that quantum computers consume far less energy than GPU clusters simulating quantum processes, positioning quantum hardware as a sustainable path for future AI workloads.
EVIDENCE
He mentions the launch of a quantum enablement center, notes that quantum computers use a fraction of the energy compared with GPU-based simulations, and highlights the potential energy advantage of quantum approaches [635-637].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The potential of quantum hardware for low-energy AI is discussed in the context of collaborative quantum initiatives [S18] and record investment trends driven by AI demand [S19]; the panel also mentions a quantum enablement centre as a sustainable path [S5].
MAJOR DISCUSSION POINT
Quantum computing as a low‑energy AI solution
Abhideep Rastogi
4 arguments · 153 words per minute · 1,187 words · 462 seconds
Argument 1
GPU‑free 100% accuracy demo (Abhideep Rastogi)
EXPLANATION
Abhideep reports that his team achieved full accuracy on an AI solution without employing any GPUs, demonstrating that high performance can be attained on CPU‑based infrastructure.
EVIDENCE
He states that the Tata collaboration delivered 100% accuracy while using no GPUs at all in the proposed infrastructure [52-53].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
A real-world demonstration achieving 100% accuracy without GPUs is reported in the session summary [S5].
MAJOR DISCUSSION POINT
GPU‑free high‑accuracy demonstration
Argument 2
Multi‑stage AI adoption framework (aim, data, architecture, pilot, governance) (Abhideep Rastogi)
EXPLANATION
Abhideep outlines a structured, multi‑phase process for enterprises to adopt AI, starting from defining objectives, assessing data quality, choosing architecture, piloting, and establishing governance before full production rollout.
EVIDENCE
He describes the stages from “stage zero” (aim definition) through data quality and lineage, architecture strategy, pilot execution, and governance guardrails, culminating in platformization for production deployment [324-345].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The panel outlines a detailed multi-phase AI adoption roadmap, matching the stage-by-stage framework described in the discussion notes [S4].
MAJOR DISCUSSION POINT
Enterprise AI adoption roadmap
Argument 3
Compliance with GDPR/DPDP and upcoming AI Acts (Abhideep Rastogi)
EXPLANATION
Abhideep notes that AI projects must respect data‑protection regulations such as the GDPR and India’s DPDP Act, and anticipate forthcoming AI‑specific legislation, ensuring that policies and guardrails are in place.
EVIDENCE
He references applying GDPR and DPDP rules, ensuring compatible policies, and preparing for upcoming AI Acts, both in the United States and India [413-420][549-556].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Regulatory guidance on GDPR for AI projects and upcoming AI-specific legislation is highlighted by the Irish data authority’s request for EU guidance [S21] and the EU AI Act enforcement timeline [S23].
MAJOR DISCUSSION POINT
Regulatory compliance for AI
AGREED WITH
Kevin Zane, Ayush Gupta, Bernie Alen, Participant
Argument 4
Employee upskilling and change‑management are essential for sustainable AI transformation
EXPLANATION
Abhideep highlights that beyond technical stages, organizations must train their staff and manage cultural change to ensure AI initiatives are adopted and maintained over time.
EVIDENCE
He notes that after platformization, the organization focuses on teaching employees how to use the new AI solutions, emphasizing the importance of internal learning and change management [346-349].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The importance of workforce reskilling and change-management in AI roll-outs is emphasized in industry surveys on AI transformation and capacity development [S25][S26].
MAJOR DISCUSSION POINT
Human‑resource readiness for AI adoption
Kenny Gross
6 arguments · 130 words per minute · 1,616 words · 744 seconds
Argument 1
2,500× compute cost reduction with MSET (Kenny Gross)
EXPLANATION
Kenny claims that using the MSET (AI‑MSET) methodology can cut compute costs by a factor of 2,500 compared with conventional approaches, delivering massive efficiency gains.
EVIDENCE
He cites a specific use case where the cost of running an anomaly-detection workload was reduced to 1/2,500 of the original expense, far exceeding a simple 10× or 20× reduction [200-203].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
MSET’s ability to deliver anomaly-detection at a tiny fraction of traditional compute cost is documented in the demonstration of the technique without GPUs [S5].
MAJOR DISCUSSION POINT
Massive compute cost savings
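The headline factor can be sanity-checked with quick arithmetic; the 1 MWh baseline below is an illustrative assumption, not a figure from the session.

```python
import math

# Quick sanity check of the claimed 2,500x factor (the 1 MWh baseline
# is an illustrative assumption, not a figure from the session).
factor = 2500
orders_of_magnitude = math.log10(factor)
print(round(orders_of_magnitude, 1))   # -> 3.4, i.e. "three orders of magnitude"

baseline_wh = 1_000_000                # hypothetical 1 MWh workload
print(baseline_wh / factor)            # -> 400.0 Wh (0.4 kWh) remaining
```

This also shows why Kenny can describe the same result both as a 2,500× cost reduction and as roughly three orders of magnitude of compute savings.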
Argument 2
Three‑order‑of‑magnitude compute savings reduce energy (Kenny Gross)
EXPLANATION
Kenny notes that the same MSET approach yields three orders of magnitude lower compute requirements, which translates directly into substantial energy savings.
EVIDENCE
He explicitly states that the energy savings stem from three orders of magnitude lower compute costs, demonstrated in multiple NVIDIA GTC presentations [178-179].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The same MSET presentation notes three orders of magnitude lower compute requirements, translating into substantial energy savings [S5].
MAJOR DISCUSSION POINT
Energy efficiency through compute reduction
AGREED WITH
Bernie Alen, Kevin Zane, Anshumali Shrivastava
Argument 3
CPU algorithm beats GPU for large context (new attention math) (Kenny Gross)
EXPLANATION
Kenny presents a new attention algorithm that, when run on CPUs, outperforms GPU‑based attention for very large context windows, overcoming the quadratic scaling bottleneck of traditional GPU approaches.
EVIDENCE
He shows a plot where, for context windows beyond 131,000 tokens, the CPU implementation of the new math is faster than the best GPU-based FlashAttention on top-tier hardware, highlighting the quadratic complexity issue [154-161].
MAJOR DISCUSSION POINT
Algorithmic breakthrough for large‑context inference
AGREED WITH
Bernie Alen, Anshumali Shrivastava, Ayush Gupta
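The quadratic-scaling bottleneck behind this argument can be illustrated with a toy FLOP estimate; the head dimension and the cost formula’s constant are illustrative assumptions, not figures from the talk.

```python
# Self-attention compares every token with every other token, so its
# cost grows with the square of the context length n.

def attention_flops(n, d=128):
    """Rough per-head cost: n*n*d for Q.K^T plus n*n*d for the
    softmax-weighted sum of values (d is an assumed head dimension)."""
    return 2 * n * n * d

# Doubling the context window quadruples the attention cost, which is
# why ever-larger windows overwhelm fixed GPU memory and motivate
# algorithms with better-than-quadratic scaling.
print(attention_flops(262_000) / attention_flops(131_000))  # -> 4.0
```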
Argument 4
Low false‑alarm rates critical for sensor‑data governance (Kenny Gross)
EXPLANATION
Kenny argues that in high‑sensor‑count environments, minimizing false alarms is essential to avoid cognitive overload and costly unnecessary shutdowns, and that MSET achieves the lowest possible false‑alarm probability.
EVIDENCE
He discusses how false-alarm rates multiply with sensor count, cites incidents in naval aviation caused by false alarms, and notes that MSET delivers mathematically minimal false-alarm and missed-alarm probabilities [429-441].
MAJOR DISCUSSION POINT
Reliability of sensor‑based monitoring
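The compounding effect Kenny describes follows directly from basic probability; a minimal sketch, assuming independent sensors and an illustrative per-sensor rate:

```python
# With N independent sensors, each with per-observation false-alarm
# probability p, the chance that at least one alarms falsely is
# 1 - (1 - p)^N, which rises steeply as sensor count grows.

def fleet_false_alarm_prob(p, n_sensors):
    return 1 - (1 - p) ** n_sensors

# A seemingly low 0.1% per-sensor rate still gives roughly a 63%
# chance of a spurious alarm somewhere across 1,000 sensors.
print(fleet_false_alarm_prob(0.001, 1000))
```

This is why per-sensor false-alarm probability, not just overall accuracy, is the metric that matters at fleet scale.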
Argument 5
MSET predicts failures early, low false alarms, runs on CPUs (Kenny Gross)
EXPLANATION
Kenny describes MSET as a prognostic system that can detect incipient anomalies well before traditional high‑low thresholds trigger, operating efficiently on CPUs and avoiding downtime in data‑center and industrial assets.
EVIDENCE
He explains that MSET can forecast failures days or weeks in advance, runs on lightweight CPU harnesses, and has been applied in data centers, locomotives, wind farms, and defense systems, providing early warning without high-GPU costs [178-185][483-509].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
MSET’s predictive-maintenance capabilities (early failure warning, low false-alarm rates, and CPU-only execution) are described in the panel overview [S5].
MAJOR DISCUSSION POINT
Early‑failure prediction technology
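The real MSET algorithm is far more sophisticated than can be shown here; the toy estimator below only sketches the general similarity-based idea behind such prognostics (the data, threshold, and k-nearest reconstruction are all illustrative assumptions):

```python
import numpy as np

# Sketch: memorize healthy sensor states, reconstruct each new
# observation from its most similar stored states, and flag large
# residuals as incipient anomalies, well before a high/low threshold
# on any single sensor would trip.

def estimate(memory, x, k=3):
    """Reconstruct x as the mean of its k most similar healthy states."""
    dists = np.linalg.norm(memory - x, axis=1)
    return memory[np.argsort(dists)[:k]].mean(axis=0)

rng = np.random.default_rng(0)
healthy = rng.normal(50.0, 1.0, size=(200, 4))  # 200 healthy 4-sensor readings
threshold = 3.0                                  # residual-norm alarm threshold

normal_reading = np.array([50.2, 49.8, 50.1, 50.0])
drifting_reading = np.array([50.2, 49.8, 58.0, 50.0])  # sensor 3 drifting high

for x in (normal_reading, drifting_reading):
    residual = np.linalg.norm(x - estimate(healthy, x))
    print("anomaly" if residual > threshold else "ok")
```

Note that everything here is plain CPU arithmetic, which is consistent with Kenny’s point that this class of technique needs no GPU harness.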
Argument 6
Blind bake‑off benchmarking with customer data demonstrates MSET’s superiority in compute cost, detection latency and false‑alarm rates
EXPLANATION
Kenny proposes that organizations can provide their own historical data for an unbiased comparison, where MSET consistently outperforms conventional methods on key metrics.
EVIDENCE
He invites participants to a blind bake-off, specifying that the winning criteria are lowest compute cost, earliest anomaly detection, and lowest false-alarm and missed-alarm probabilities, emphasizing MSET’s advantage [188-193].
MAJOR DISCUSSION POINT
Empirical validation of MSET through open benchmarking
Anshumali Shrivastava
4 arguments · 183 words per minute · 3,031 words · 991 seconds
Argument 1
Model growth outpaces hardware; context‑window plateau (Anshumali Shrivastava)
EXPLANATION
Anshumali points out that the parameter count of large language models is increasing faster than GPU memory and compute capabilities, and that context‑window sizes have plateaued, limiting future model performance.
EVIDENCE
He presents plots showing GPU memory growth lagging behind model parameter growth and notes that context-window size has flattened after reaching around 1 million tokens, with experimental attempts at 10 million still plateauing [94-99][124-135].
MAJOR DISCUSSION POINT
Scaling limits of AI models
Argument 2
Dynamic sparsity & new attention math reduce compute (Anshumali Shrivastava)
EXPLANATION
Anshumali advocates for dynamic sparsity—selectively computing only needed parameters per input—and for novel attention formulations that lower computational complexity, offering alternatives to full matrix multiplication.
EVIDENCE
He defines dynamic sparsity as picking needed parameters on the fly rather than static pruning [102-108] and later describes a new attention math that changes the algorithmic cost, enabling CPU-based performance for large contexts [152-161].
MAJOR DISCUSSION POINT
Algorithmic efficiency techniques
AGREED WITH
Bernie Alen, Kenny Gross, Ayush Gupta
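Dynamic sparsity, as described here, can be sketched in a few lines; production systems in this line of work locate the active neurons cheaply (e.g. via locality-sensitive hashing), whereas the sketch below scores every neuron exactly, purely for clarity, and its shapes and k are illustrative assumptions:

```python
import numpy as np

# Instead of multiplying the input by the full weight matrix and keeping
# every activation, activate only the k neurons most relevant to this
# particular input - parameters are picked on the fly, not pruned statically.

rng = np.random.default_rng(1)
W = rng.normal(size=(10_000, 256))   # 10,000 neurons, 256-dim input
x = rng.normal(size=256)

def sparse_forward(W, x, k=50):
    scores = W @ x                               # which neurons matter for x
    active = np.argsort(np.abs(scores))[-k:]     # keep only the top-k
    out = np.zeros(len(W))
    out[active] = np.maximum(scores[active], 0.0)  # ReLU on the active set only
    return out, active

out, active = sparse_forward(W, x)
print(len(active), np.count_nonzero(out) <= 50)
```

A different input would light up a different set of 50 neurons, which is what distinguishes dynamic sparsity from static pruning.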
Argument 3
Mathematics background vital for AI research and understanding (Anshumali Shrivastava)
EXPLANATION
Anshumali emphasizes that a strong foundation in mathematics is essential for advancing AI, as many core AI capabilities stem from formal reasoning and mathematical insights.
EVIDENCE
He mentions being a math major and argues that understanding mathematics is crucial for grasping AI fundamentals and for research into LLM capabilities and reasoning [560-562].
MAJOR DISCUSSION POINT
Importance of mathematical education
AGREED WITH
Bernie Alen, Abhideep Rastogi, Ayush Gupta
Argument 4
Lower compute costs unlock extensive trial‑and‑error experimentation, accelerating AI progress
EXPLANATION
Anshumali contends that the high expense of compute limits the number of experiments that can be run, and that reducing compute requirements would enable massive trial‑and‑error, leading to faster breakthroughs.
EVIDENCE
He describes AI development as a trial-and-error process, noting that if compute were cheap, researchers could run many more experiments, but current costs restrict this, creating a barrier to progress [448-452][466-470].
MAJOR DISCUSSION POINT
Cost‑driven limits on AI experimentation
Kevin Zane
3 arguments · 143 words per minute · 385 words · 160 seconds
Argument 1
Efficient algorithms lower energy use and avoid downtime (Kevin Zane)
EXPLANATION
Kevin stresses that more efficient AI algorithms not only cut energy consumption but also improve data‑center reliability by predicting and preventing hardware failures before they cause costly downtime.
EVIDENCE
He links AI sustainability to reduced power and water usage and cites the AI-MSET system’s ability to prognostically detect failures weeks in advance, thereby avoiding downtime in large-scale training runs [302-304][180-186].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The MSET system’s efficient algorithms are shown to cut power consumption and prevent costly data-center downtime by forecasting failures weeks in advance [S5].
MAJOR DISCUSSION POINT
Sustainability through algorithmic efficiency
AGREED WITH
Bernie Alen, Kenny Gross, Anshumali Shrivastava
Argument 2
Deterministic AI for auditability, avoids hallucinations (Kevin Zane)
EXPLANATION
Kevin proposes deterministic AI architectures that produce repeatable outputs for identical inputs, enabling auditability and eliminating the hallucination problem that plagues probabilistic LLMs.
EVIDENCE
He explains that deterministic AI binds machine learning within strict rule-sets, guaranteeing consistent responses, which supports auditability and prevents hallucinations in high-risk domains such as cybersecurity and medicine [390-404].
MAJOR DISCUSSION POINT
Governance via deterministic AI
AGREED WITH
Ayush Gupta, Abhideep Rastogi, Bernie Alen, Participant
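A minimal sketch of “binding machine learning within a strict rule-set” (the rules, actions, and function names below are hypothetical illustrations, not Kevin’s actual system): a probabilistic model may propose an action, but a deterministic rule layer makes the decision, so identical inputs always yield identical, auditable outcomes.

```python
# Deterministic decision layer: rules are checked in a fixed order and
# contain no randomness, so the outcome for a given input never varies
# and can be replayed exactly during an audit.

ALLOWED_ACTIONS = {"quarantine", "alert", "ignore"}

def deterministic_decide(model_proposal: str, severity: int) -> str:
    if model_proposal not in ALLOWED_ACTIONS:
        return "alert"            # unrecognized model output -> safe fallback
    if severity >= 8:
        return "quarantine"       # hard rule overrides the model's proposal
    return model_proposal

print(deterministic_decide("ignore", 9))   # -> quarantine
print(deterministic_decide("reboot", 2))   # -> alert
```

The model’s hallucination risk is contained because nothing it emits can bypass the rule layer; at worst an unknown proposal degrades to the safe fallback.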
Argument 3
Sustainable AI must address both power and water consumption, as AI workloads increase cooling and water demand
EXPLANATION
Kevin argues that AI’s environmental impact is not limited to electricity use; the massive cooling systems required for GPU clusters also consume large amounts of water, so efficiency measures should target both resources.
EVIDENCE
He links AI sustainability to the need for less power and water, stating that the rapid growth of AI workloads strains both electricity generation and water for cooling [302-304].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Analyses of AI’s environmental footprint stress the dual impact on electricity and water resources, underscoring the need for holistic sustainability measures [S14] and the unsustainable GPU-cluster model [S4].
MAJOR DISCUSSION POINT
Dual resource (energy and water) sustainability in AI
Ayush Gupta
4 arguments · 166 words per minute · 1,364 words · 490 seconds
Argument 1
Enterprise needs data sovereignty; ChatGPT lacks context (Ayush Gupta)
EXPLANATION
Ayush argues that generic ChatGPT models cannot serve enterprise needs because they lack access to proprietary data and contextual business knowledge, raising sovereignty and compliance concerns.
EVIDENCE
He notes that ChatGPT does not know an enterprise’s data, cannot be connected to internal systems due to privacy and governance constraints, and therefore cannot provide the contextual insights required for decision-making [362-380].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Data-governance concerns around using generic LLMs for enterprises are reflected in GDPR guidance [S21] and the broader discussion of agentic AI adoption gaps [S20].
MAJOR DISCUSSION POINT
Limitations of generic LLMs for enterprises
AGREED WITH
Kevin Zane, Abhideep Rastogi, Bernie Alen, Participant
Argument 2
Agentic data analysis replaces data warehouses; cheap inference required (Ayush Gupta)
EXPLANATION
Ayush describes a shift from traditional data‑warehouse pipelines to an agentic, conversational analytics platform that can query native data sources directly, reducing the need for costly GPU inference.
EVIDENCE
He explains that agentic analysis lets business users converse with data across tables, PDFs, images, etc., eliminating the need for multiple replicated data layers and reducing inference cost by avoiding GPUs [270-284].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The shift toward agentic, conversational analytics and its potential to cut GPU-intensive inference is highlighted in reports on agentic AI adoption challenges and governance needs [S20][S15].
MAJOR DISCUSSION POINT
Next‑gen data analytics architecture
Argument 3
AI tools like calculators change skill use; need pedagogical adaptation (Ayush Gupta)
EXPLANATION
Ayush compares the societal impact of calculators to that of AI tools, suggesting that education systems must adapt curricula and teaching methods to incorporate AI responsibly while preserving core skill development.
EVIDENCE
He draws a parallel between the adoption of calculators (once essential for arithmetic) and current AI tools, arguing that educators need to define how AI augments rather than replaces fundamental learning [625-632].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Comparisons to calculator adoption and calls for curriculum redesign to incorporate AI responsibly are discussed in the context of AI-enabled tutoring for children [S16] and broader education upskilling initiatives [S25].
MAJOR DISCUSSION POINT
Educational adaptation to AI
Argument 4
Agentic, conversational analytics can replace traditional data‑warehouse pipelines, simplifying architecture and cutting operational costs
EXPLANATION
Ayush claims that a conversational, agent‑driven analytics platform allows business users to query native data sources directly, eliminating the need for multiple replicated warehouse layers and reducing the cost of GPU‑intensive inference.
EVIDENCE
He explains that the shift from static dashboards and layered data warehouses to an agentic analysis platform lets users interact with tables, PDFs, images, etc., removing the need for bronze/silver/gold tables and lowering inference costs by avoiding GPUs [270-284].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The transformation of data architectures through agentic analytics, reducing layered warehouses and GPU costs, is documented in analyses of agentic AI adoption and governance [S20][S15].
MAJOR DISCUSSION POINT
Transformation of data architecture through agentic analytics
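The architectural shift Ayush describes, querying a native source directly instead of maintaining replicated bronze/silver/gold layers, can be sketched against an in-memory database; the table, data, and the question-to-SQL mapping are all illustrative assumptions (a real agentic platform would generate the SQL from natural language):

```python
import sqlite3

# Query the live, native source directly - no ETL into layered
# warehouse tables before a business question can be answered.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE sales (region TEXT, amount REAL)")
con.executemany("INSERT INTO sales VALUES (?, ?)",
                [("north", 120.0), ("south", 80.0), ("north", 50.0)])

# The "agent" step, stubbed out: a user question resolved to SQL.
question_to_sql = {
    "total sales by region":
        "SELECT region, SUM(amount) FROM sales GROUP BY region ORDER BY region",
}
for row in con.execute(question_to_sql["total sales by region"]):
    print(row)
```

Because no intermediate copies exist, there is also nothing to keep in sync, which is where the claimed operational savings come from.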
Participant
4 arguments · 138 words per minute · 925 words · 402 seconds
Argument 1
Deterministic AI is required for safety‑critical engineering applications
EXPLANATION
The participant stresses that in engineering domains such as automotive, naval or aerospace, decisions must be binary and reliable, so AI systems need to be deterministic rather than probabilistic to ensure safety.
EVIDENCE
He points out that many engineering sensors produce binary (zero or one) outputs and that deterministic methods are needed to make safe decisions, contrasting this with the probabilistic nature of current AI approaches [477-483].
MAJOR DISCUSSION POINT
Need for deterministic AI in safety‑critical contexts
Argument 2
Open, plug‑and‑play ecosystems are needed to integrate MSET with existing data‑engineering pipelines
EXPLANATION
The participant asks whether MSET can be combined with current data‑engineering, RAG and MCP services, implying that an open, interoperable ecosystem would enable easy integration and broader adoption.
EVIDENCE
He questions the possibility of a plug-and-play solution that merges MSET with the current set of LLMs and services, and later follows up asking for open ecosystems where the technology can be connected to existing infrastructure [527-534][541-546].
MAJOR DISCUSSION POINT
Interoperability and integration of new AI methods
Argument 3
Policy and governance frameworks are required to regulate AI use in education
EXPLANATION
The participant raises concerns about students using AI tools for assignments and asks whether governments or educational leaders have issued guidelines, indicating a need for clear governance to prevent misuse while leveraging AI benefits.
EVIDENCE
He inquires about steps taken by governments or leaders to manage AI use by school and college students, emphasizing the risk of rapid assignment completion without proper oversight [559-566][604-613].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Policy gaps for agentic AI in education are noted in governance discussions and the need for clear guidelines is echoed in AI-agents responsible-deployment workshops [S15] and child-safety studies [S16][S17].
MAJOR DISCUSSION POINT
Governance of AI in education
Argument 4
Unrestricted AI access for children may have harmful effects and requires a balanced approach
EXPLANATION
The participant expresses worry that allowing an eight‑year‑old unrestricted access to AI could be detrimental, suggesting that safeguards and balanced usage policies are necessary.
EVIDENCE
He voices personal concern about his eight-year-old child using AI tools and questions whether this is the right approach, highlighting the need for protective measures [629-634][631-633].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Safety concerns about unrestricted AI use by minors are raised in child-focused AI risk reports [S17], while balanced perspectives on educational benefits are provided in AI tutoring research [S16].
MAJOR DISCUSSION POINT
Ethical concerns for youth using AI
Agreements
Agreement Points
Reducing reliance on GPUs through software and algorithmic optimization
Speakers: Bernie Alen, Kenny Gross, Anshumali Shrivastava, Ayush Gupta
GPU overuse & software optimization (Bernie Alen) CPU algorithm beats GPU for large context (new attention math) (Kenny Gross) Dynamic sparsity & new attention math reduce compute (Anshumali Shrivastava) Enterprise needs cheap inference, avoid GPUs (Ayush Gupta)
All four speakers stress that the current AI boom is dominated by a race for more GPUs, but significant compute savings can be achieved by applying software-level optimizations, novel attention algorithms that run efficiently on CPUs, and dynamic sparsity techniques, thereby enabling high-accuracy solutions without expensive GPU clusters [3-8][17-20][154-161][102-108][152-161][282-284].
POLICY CONTEXT (KNOWLEDGE BASE)
This aligns with recent calls to shift AI workloads from GPU-centric, high-energy hardware to more energy-efficient CPU-based and heterogeneous compute solutions, as highlighted in discussions on constrained AI and heterogeneous compute strategies [S44][S51][S52].
Environmental sustainability through compute reduction
Speakers: Bernie Alen, Kevin Zane, Kenny Gross, Anshumali Shrivastava
AI’s power/water consumption harms planet (Bernie Alen) Efficient algorithms lower energy use and avoid downtime (Kevin Zane) Three‑order‑of‑magnitude compute savings reduce energy (Kenny Gross) Lower compute costs unlock extensive trial‑and‑error, accelerating progress while cutting energy use (Anshumali Shrivastava)
The panel repeatedly highlighted that AI’s rapid expansion is driving massive electricity and water demand, and that algorithmic efficiencies, whether through MSET’s three-order-of-magnitude compute cuts, deterministic AI, or new attention math, directly lower energy consumption and mitigate environmental impact [75-77][302-304][178-179][466-470].
POLICY CONTEXT (KNOWLEDGE BASE)
The emphasis on compute reduction mirrors findings that ICT-enabled energy efficiency can cut emissions by 13-22 % and broader policy pushes for greener AI through efficiency improvements and reduced carbon footprints [S50][S51][S52][S44].
Capacity development and education are essential for responsible AI deployment
Speakers: Bernie Alen, Abhideep Rastogi, Anshumali Shrivastava, Ayush Gupta
Education on optimization methods essential for responsible AI deployment (Bernie Alen) Employee upskilling and change‑management are essential for sustainable AI transformation (Abhideep Rastogi) Mathematics background vital for AI research and understanding (Anshumali Shrivastava) AI tools like calculators change skill use; pedagogical adaptation is needed (Ayush Gupta)
Multiple speakers called for upskilling and broader education, ranging from teaching classic software-optimization practices, to training staff on AI pipelines, to reinforcing mathematical foundations and adapting curricula to AI tools, to ensure that AI is deployed responsibly and efficiently [20-23][75-77][324-349][560-562][625-632].
POLICY CONTEXT (KNOWLEDGE BASE)
Capacity building and education are repeatedly identified as essential pillars for responsible AI deployment in policy briefs and multistakeholder forums, including AI Agents responsible deployment recommendations and parliamentary insights stressing education for all stakeholders [S48][S49][S45].
Robust governance and deterministic AI are needed to ensure safety, compliance and trust
Speakers: Kevin Zane, Ayush Gupta, Abhideep Rastogi, Bernie Alen, Participant
Deterministic AI for auditability, avoids hallucinations (Kevin Zane) Enterprise needs data sovereignty; ChatGPT lacks context (Ayush Gupta) Compliance with GDPR/DPDP and upcoming AI Acts (Abhideep Rastogi) Concern for children using AI; balance needed (Bernie Alen) Deterministic AI required for safety‑critical engineering applications (Participant)
There is a shared view that AI systems must be governed by clear, deterministic rules to guarantee repeatable outputs, protect privacy and data sovereignty, and meet regulatory requirements, especially in high-risk domains and for vulnerable users such as children [390-404][362-380][413-420][549-556][631-633][477-483].
POLICY CONTEXT (KNOWLEDGE BASE)
Calls for robust governance, safety, transparency and accountability echo UN-level frameworks and AI governance initiatives that stress human-rights-by-design and frontier safety policies for trustworthy AI systems [S38][S39][S53][S55][S56].
Similar Viewpoints
Both emphasize that massive compute (and thus energy) can be cut dramatically by moving away from brute‑force GPU usage toward optimized software and algorithmic solutions [3-8][17-20][178-179].
Speakers: Bernie Alen, Kenny Gross
GPU overuse & software optimization (Bernie Alen) Three‑order‑of‑magnitude compute savings reduce energy (Kenny Gross)
Both link AI’s environmental footprint to electricity and water consumption and argue that algorithmic efficiency is the key mitigation strategy [75-77][302-304].
Speakers: Bernie Alen, Kevin Zane
AI’s power/water consumption harms planet (Bernie Alen) Efficient algorithms lower energy use and avoid downtime (Kevin Zane)
Both present novel attention‑related mathematics that enable CPU‑based inference to outperform GPU‑based methods for very large context windows, breaking the quadratic scaling bottleneck [152-161][154-161].
Speakers: Anshumali Shrivastava, Kenny Gross
Dynamic sparsity & new attention math reduce compute (Anshumali Shrivastava) CPU algorithm beats GPU for large context (new attention math) (Kenny Gross)
Both argue that avoiding GPU‑heavy inference is both feasible and necessary for cost‑effective enterprise AI deployments [282-284][3-8][17-20].
Speakers: Ayush Gupta, Bernie Alen
Enterprise needs cheap inference, avoid GPUs (Ayush Gupta) GPU overuse & software optimization (Bernie Alen)
Both stress that AI systems must be built with strong governance, auditability and regulatory compliance to be trustworthy in production environments [413-420][390-404].
Speakers: Abhideep Rastogi, Kevin Zane
Compliance with GDPR/DPDP and upcoming AI Acts (Abhideep Rastogi) Deterministic AI for auditability, avoids hallucinations (Kevin Zane)
Unexpected Consensus
Deterministic AI for safety‑critical engineering
Speakers: Participant, Kevin Zane
Deterministic AI required for safety‑critical engineering applications (Participant) Deterministic AI for auditability, avoids hallucinations (Kevin Zane)
A non-expert participant explicitly called for deterministic AI in engineering contexts, aligning with Kevin Zane’s technical proposal for deterministic, auditable AI, a convergence not anticipated given the participant’s limited technical background [477-483][390-404].
POLICY CONTEXT (KNOWLEDGE BASE)
Safety-critical domains such as aviation and IoT have long required deterministic, binary decision making, a requirement reiterated in engineering-focused AI discussions that question probabilistic models for sensor-driven control [S37][S40][S54].
Balancing AI access for children
Speakers: Participant, Bernie Alen
Unrestricted AI access for children may have harmful effects and requires a balanced approach (Participant) Concern for children using AI; balance needed (Bernie Alen)
Both a panelist and an audience member voiced personal worries about children’s unrestricted AI use, showing an unexpected alignment of concerns across the speaker-audience divide [629-633][631-633].
POLICY CONTEXT (KNOWLEDGE BASE)
UNICEF and child-rights policy guidance, along with dedicated workshops on children’s voices in AI design, call for specialized safeguards and equitable access that respect children’s limited capacity for informed consent [S33][S34][S36][S35].
Overall Assessment

The panel displayed a strong consensus around four core themes: (1) the urgent need to cut GPU‑centric compute through software and algorithmic innovations; (2) the environmental imperative to make AI energy‑ and water‑efficient; (3) the necessity of capacity building and education to enable responsible AI use; and (4) the requirement for robust governance, including deterministic AI, to ensure safety, compliance and trust.

High consensus – most speakers independently arrived at the same conclusions, indicating a shared understanding that efficiency, sustainability, education and governance are the pillars for future AI development. This convergence suggests that forthcoming policy and industry initiatives are likely to prioritize optimization research, green AI practices, workforce upskilling, and regulatory frameworks that enforce deterministic, auditable AI systems.

Differences
Different Viewpoints
Deterministic AI versus probabilistic LLM approaches for auditability and hallucination avoidance
Speakers: Kevin Zane, Anshumali Shrivastava, Bernie Alen
Deterministic AI for auditability, avoids hallucinations Dynamic sparsity & new attention math reduce compute AI’s power/water consumption harms planet; need efficient methods
Kevin argues that AI systems must be deterministic so that identical inputs always produce identical outputs, enabling auditability and eliminating hallucinations [390-404]. Anshumali counters that hallucinations are an inherent property of current LLMs and proposes algorithmic fixes such as dynamic sparsity and new attention mathematics to reduce compute and improve performance rather than enforcing determinism [102-108][152-161]. Bernie also stresses the need for efficient methods to curb environmental impact but does not endorse determinism, focusing instead on software optimisation and alternative hardware [75-77][566-572].
POLICY CONTEXT (KNOWLEDGE BASE)
The debate mirrors concerns about LLM hallucinations and the need for auditability, as documented in analyses of hallucination risks and the probabilistic nature of large language models [S42][S43][S37].
Approach to AI governance and compliance
Speakers: Abhideep Rastogi, Kevin Zane, Bernie Alen
Compliance with GDPR/DPDP and upcoming AI Acts Deterministic AI for auditability, avoids hallucinations Education on optimization methods essential for responsible AI deployment
Abhideep outlines a compliance-first roadmap that aligns AI projects with GDPR, India’s DPDP Act and forthcoming AI legislation [413-420]. Kevin focuses on a technical governance solution, deterministic AI, to guarantee repeatable, auditable outcomes, especially for high-risk domains [390-404]. Bernie stresses broader responsible deployment through education on optimisation and cost reduction, without detailing specific regulatory frameworks [20-23][75-77]. The three speakers share the goal of trustworthy AI but diverge on whether legal compliance, deterministic architecture, or optimisation education should be the primary governance lever.
POLICY CONTEXT (KNOWLEDGE BASE)
Divergent views on governance echo ongoing discussions about multi-stakeholder safety standards, frontier safety policies, and the need for coordinated international AI safety frameworks [S38][S39][S41][S53][S55][S56].
Preferred technical route to achieve massive compute and energy savings
Speakers: Kenny Gross, Anshumali Shrivastava
2,500× compute cost reduction with MSET CPU algorithm beats GPU for large context (new attention math) Dynamic sparsity & new attention math reduce compute
Kenny promotes the MSET methodology, claiming a 2,500-fold reduction in compute cost for anomaly-detection workloads and a CPU-based attention algorithm that outperforms GPUs for very large context windows [200-203][154-161]. Anshumali advocates dynamic sparsity and a novel attention formulation that also cuts compute, emphasizing algorithmic sparsity rather than the specific MSET framework [102-108][152-161]. Both aim to lower energy use, but they disagree on which algorithmic paradigm should be pursued as the primary solution.
POLICY CONTEXT (KNOWLEDGE BASE)
The technical route discussion reflects contrasting positions on CPU-centric, algorithmic optimizations versus continued GPU scaling, as debated in constrained AI forums and efficiency-focused policy analyses [S44][S51][S52][S50].
Unexpected Differences
Expectation of a plug‑and‑play integration of MSET with existing LLM pipelines
Speakers: Participant, Kenny Gross
Open, plug‑and‑play ecosystems are needed to integrate MSET with existing data‑engineering pipelines MSET predicts failures early, low false alarms, runs on CPUs
A participant asks whether MSET can be treated as a modular, plug-and-play component that merges with current LLM services [527-534]. Kenny explains that MSET is a foundational prognostic system that operates at a lower layer, focusing on multivariate sensor analysis rather than a simple API that can be dropped into existing pipelines [483-509]. The mismatch between a plug-and-play expectation and the deeper architectural nature of MSET was not anticipated earlier in the discussion.
POLICY CONTEXT (KNOWLEDGE BASE)
Stakeholder feedback has raised questions about ecosystem impacts and integration challenges of new modules like MSET, highlighting the need for careful architectural planning [S46][S44].
Overall Assessment

The panel largely concurs on the urgency of reducing AI’s environmental footprint and on the need for trustworthy, governed AI systems. However, substantive disagreements arise around the technical route to achieve reliability (deterministic AI vs probabilistic improvements), the primary governance mechanism (legal compliance vs deterministic architecture), and the optimal algorithmic strategy for compute reduction (MSET vs dynamic sparsity/new attention). These divergences reflect differing professional backgrounds—software optimisation, hardware prognostics, and academic research—leading to varied prescriptions for the same overarching challenges.

Moderate to high. While all speakers share common high‑level goals (sustainability, trustworthiness, efficiency), they propose contrasting solutions, indicating that consensus on implementation pathways is still lacking. This fragmentation may slow coordinated policy or industry action unless a hybrid approach that integrates regulatory, deterministic, and algorithmic innovations is adopted.

Partial Agreements
Both agree that reducing compute is essential for sustainable AI and that algorithmic innovation can deliver large energy savings. However, Kenny emphasizes the MSET framework and its proven 2,500× cost cut, while Anshumali stresses dynamic sparsity and new attention math as alternative routes, leading to different implementation paths [200-203][102-108][152-161].
Speakers: Kenny Gross, Anshumali Shrivastava
2,500× compute cost reduction with MSET Dynamic sparsity & new attention math reduce compute
Both seek trustworthy AI deployments. Abhideep focuses on regulatory compliance (GDPR/DPDP) as the backbone of governance, while Kevin proposes deterministic system design to achieve auditability. They share the end goal of reliable, accountable AI but differ on whether legal compliance or technical determinism should be the primary mechanism [413-420][390-404].
Speakers: Abhideep Rastogi, Kevin Zane
Compliance with GDPR/DPDP and upcoming AI Acts Deterministic AI for auditability, avoids hallucinations
Both recognise that current AI deployments are unsustainable and that enterprises need solutions that respect data sovereignty and reduce resource use. Bernie calls for broader efficiency and optimisation, while Ayush points to the need for enterprise‑specific, context‑aware models that avoid reliance on generic, GPU‑heavy services. Their visions converge on sustainable, sovereign AI but diverge on the primary lever—hardware/software efficiency versus data‑centric model design [75-77][362-380].
Speakers: Bernie Alen, Ayush Gupta
AI’s power/water consumption harms planet; need efficient methods Enterprise needs data sovereignty; ChatGPT lacks context
Takeaways
Key takeaways
AI infrastructure costs are exploding due to over‑reliance on GPU clusters; software‑level optimization (dynamic sparsity, new attention math) can dramatically reduce compute needs.
Current hardware growth cannot keep up with the exponential increase in model parameters and context‑window size, creating a scalability plateau.
CPU‑based algorithms with novel mathematical formulations (e.g., new attention, dynamic sparsity) can outperform GPUs for very large context windows, cutting latency and energy use.
Sustainability is a critical concern: reducing compute by three orders of magnitude (as shown with MSET/MSAT) cuts power, water, and cooling requirements.
Deterministic AI approaches are needed for auditability, low false‑alarm rates, and to meet regulatory compliance (GDPR, DPDP, upcoming AI Acts).
Enterprise AI adoption should follow a structured, multi‑stage framework: define the business aim, assess data quality, choose an architecture (CPU vs GPU, on‑prem vs cloud), pilot, enforce governance, and scale.
Agentic data‑analysis platforms can replace traditional data‑warehouse pipelines, but they require cheap, reliable inference, which the STEM Practice team is working to provide.
Education and societal impact must be considered; AI tools change skill usage (like calculators) and curricula need to adapt while preserving critical thinking.
Quantum computing is being explored as a long‑term energy‑efficient alternative, but immediate gains come from algorithmic efficiency rather than new hardware.
Resolutions and action items
STEM Practice Company will offer outreach and consulting to organizations interested in CPU‑based AI, dynamic sparsity, and MSET/MSAT implementations.
Panelists suggested conducting blind bake‑offs with customer data to demonstrate compute‑cost reductions and anomaly‑detection accuracy.
Participants were invited to visit the STEM Practice booth (Hall 6, Stall 100) for further discussion and material sharing.
A follow‑up one‑on‑one meeting was offered to address specific questions (e.g., integration of MSET with existing LLM services).
The panel endorsed the multi‑stage AI adoption framework presented by Abhideep Rastogi as a best‑practice guide for enterprises.
Unresolved issues
How to standardize and enforce emerging data‑privacy regulations (GDPR, DPDP, AI Acts) across multinational deployments.
Concrete methods for achieving fully hallucination‑free AI at scale without prohibitive compute cost.
A detailed integration path for MSET/MSAT with current RAG, MCP, and other LLM‑based pipelines.
A timeline and practical steps for incorporating AI governance and auditability into existing legacy systems.
Specific educational policies or curriculum changes needed to mitigate over‑reliance on AI tools by students.
The role and timeline of quantum computing in solving AI scalability and sustainability challenges.
Suggested compromises
Use GPUs for small‑to‑moderate context windows where they remain faster, but switch to CPU‑optimized algorithms for very large contexts.
Combine probabilistic LLM outputs with deterministic post‑processing layers to achieve auditability while retaining flexibility.
Leverage open‑source or in‑house models for cost‑sensitive inference while keeping sensitive enterprise data sovereign.
Apply dynamic sparsity and block‑sparse mixtures of experts as interim measures until new attention math matures.
Balance the adoption of AI in education by teaching foundational skills first (as with calculators) and then using AI as an augmentation tool.
Thought Provoking Comments
We are just running around getting as many GPUs as possible because we’re all afraid that the other guy would get it and then we’ll be left out… There is a software optimization step that everybody is skipping, that we would normally not skip in software development.
Highlights a systemic oversight in the AI rush—prioritizing raw hardware over algorithmic and software efficiency—setting the stage for the discussion on cost, sustainability, and optimization.
Framed the entire panel around the need for smarter infrastructure, prompting speakers to present alternatives (CPU‑based solutions, dynamic sparsity, MSET) and steering the conversation toward optimization rather than just scaling hardware.
Speaker: Bernie Alen
The rate of growth of hardware is nowhere close to the rate of growth of demand… models will get bigger, but they will not be able to cope up with the GPU growth, which means models will feel slower, the better models will feel slower and unaccessible.
Provides a data‑driven argument that hardware scaling cannot keep pace with model scaling, introducing urgency for new algorithmic approaches.
Shifted the dialogue from merely acquiring more GPUs to exploring fundamentally different computation methods (dynamic sparsity, new attention math), leading to deeper technical discussions about context windows and quadratic complexity.
Speaker: Anshumali Shrivastava
If you change the math of attention then there is something which gives you the same capability but in a different cost… CPUs dominate beyond a context window of ~131k because the algorithm, not the hardware, is the bottleneck.
Identifies a concrete breakthrough—re‑thinking attention math—to break the quadratic scaling barrier, linking algorithmic innovation directly to hardware efficiency.
Introduced the concept that algorithmic redesign can make CPUs outperform GPUs for large contexts, prompting further questions about long‑context models and influencing later remarks on trial‑and‑error costs.
Speaker: Anshumali Shrivastava
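The claim above can be made concrete with a back-of-envelope sketch. In standard dense attention, the query-key score matrix alone grows with the square of the context length, so doubling the context roughly quadruples the attention cost; this is the quadratic bottleneck the "new attention math" is meant to break. The figures below are illustrative only and do not model any specific system or the ~131k crossover point mentioned in the session.

```python
# Back-of-envelope FLOP count for one dense self-attention layer.
# Cost is dominated by the n x n score matrix (Q @ K^T) and the
# weighted sum over values, each roughly n^2 * d multiply-adds.
def attention_flops(n_ctx: int, d_model: int) -> int:
    return 2 * n_ctx * n_ctx * d_model

# Doubling the context window quadruples the attention cost.
assert attention_flops(8_192, 4_096) == 4 * attention_flops(4_096, 4_096)

for n in (4_096, 32_768, 131_072):
    tflops = attention_flops(n, 4_096) / 1e12
    print(f"context {n:>7}: ~{tflops:.2f} TFLOPs per layer")
```

This quadratic growth, rather than raw hardware speed, is why the panelists argue that beyond some context size the algorithm, not the GPU, becomes the binding constraint.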
We achieved a reduction of compute cost by 2,500 times using the MSET method for anomaly detection.
Quantifies the potential savings of the presented technology, moving the conversation from theoretical to tangible impact.
Validated the practicality of the proposed solutions, encouraging audience interest and leading to deeper discussion on sensor data, false alarms, and real‑world deployment.
Speaker: Bernie Alen (quoting Kenny Gross)
ChatGPT does not solve majority of enterprise problems because it cannot access your enterprise data due to compliance and privacy, and it lacks the specific business context needed for accurate decision‑making.
Challenges the prevailing notion that large public LLMs are sufficient for all use‑cases, emphasizing data sovereignty and contextual relevance.
Redirected the conversation toward governance, data privacy, and the need for domain‑specific or sovereign models, influencing subsequent questions about regulatory compliance.
Speaker: Ayush Gupta
Deterministic AI is an architectural response that binds machine learning within a set of rules so that responses are predictable and auditable, eliminating hallucinations for production‑critical applications.
Introduces a paradigm shift from probabilistic LLMs to rule‑bound systems for safety‑critical domains, linking technical design to governance and reliability.
Prompted a deeper exploration of auditability, false alarms, and the trade‑off between flexibility and reliability, influencing later remarks on governance and policy.
Speaker: Kevin Zane
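As a rough illustration of the architectural idea Kevin describes, a deterministic layer can sit between a probabilistic model and the system it controls, mapping free-form suggestions onto a closed, rule-checked action set so that identical inputs always produce identical, auditable outputs. Everything below (the action names and the rules) is a hypothetical sketch, not Kevin's actual design.

```python
# Minimal sketch of a deterministic gate around a probabilistic model:
# the model proposes an answer, but a fixed rule layer decides what is
# actually emitted. All names here are hypothetical illustrations.

ALLOWED_ACTIONS = {"open_valve", "close_valve", "raise_alarm", "no_op"}

def deterministic_gate(model_output: str, sensor_ok: bool) -> str:
    """Map a free-form model suggestion onto a closed, auditable action set."""
    action = model_output.strip().lower()
    if not sensor_ok:                  # hard rule: degraded sensors force a safe state
        return "raise_alarm"
    if action not in ALLOWED_ACTIONS:  # out-of-vocabulary output => safe default
        return "no_op"
    return action

# Same input, same output, every time -- the property an auditor can test for.
assert deterministic_gate("Open_Valve", sensor_ok=True) == "open_valve"
assert deterministic_gate("do something creative", sensor_ok=True) == "no_op"
assert deterministic_gate("open_valve", sensor_ok=False) == "raise_alarm"
```

The design choice is that hallucinations never reach the actuator: anything outside the fixed vocabulary degrades to a predictable safe action, which is what makes the wrapper's behavior repeatable and reviewable.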
Long context is the next race; without larger context windows we cannot achieve common‑sense reasoning or complex automation, yet context windows have plateaued.
Connects technical limitations (context window size) to higher‑level AI capabilities like common sense and agentic workflows, framing a clear future research direction.
Guided the panel to discuss the implications of context on agentic AI, influencing Ayush’s points on agentic data analysis and the need for efficient inference.
Speaker: Anshumali Shrivastava
Trial‑and‑error should be regretless; the biggest hurdle is the energy and cost required for massive experimentation, which limits progress.
Highlights a systemic bottleneck—resource‑intensive experimentation—that underpins many of the challenges discussed, tying together cost, sustainability, and research speed.
Reinforced the earlier sustainability concerns, leading to consensus on the necessity of efficiency improvements and influencing the concluding emphasis on constrained‑world solutions.
Speaker: Anshumali Shrivastava
MSET learns multivariate correlations among sensor signals, detecting anomalies weeks before thresholds are hit, unlike traditional high‑low limit monitoring which is reactive and prone to false alarms.
Explains a concrete, innovative method that addresses both reliability and cost, bridging theory with practical sensor‑based applications.
Provided a clear example of how advanced analytics can replace brute‑force hardware scaling, supporting the overarching theme of smarter, sustainable AI.
Speaker: Kenny Gross
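MSET itself is a specific, patented prognostic technique, so the snippet below is only a generic stand-in for the principle Kenny describes: learn the normal correlation structure among sensors on healthy data, then alarm on the residual between observed and correlation-predicted values rather than on each signal's own high/low limits. All signals, coefficients and thresholds here are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two correlated synthetic "sensors": in healthy operation s2 tracks 2*s1.
t = np.arange(500)
s1 = np.sin(t / 25.0) + 0.02 * rng.standard_normal(t.size)
s2 = 2.0 * s1 + 0.02 * rng.standard_normal(t.size)
s2[400:] *= 0.85  # incipient fault: s2's gain sags, yet s2 stays well
                  # inside its normal high/low operating range

# Learn the healthy correlation (least-squares fit of s2 ~ a*s1 + b).
train = slice(0, 300)
A = np.column_stack([s1[train], np.ones(300)])
(a, b), *_ = np.linalg.lstsq(A, s2[train], rcond=None)

# Alarm on the residual between observed s2 and its correlation-based estimate.
resid = s2 - (a * s1 + b)
band = 5.0 * resid[train].std()

# Classic high/low limit monitoring of s2 alone never fires on this fault...
limit = 1.05 * np.abs(s2[train]).max()
threshold_alarm = np.abs(s2[400:]).max() > limit
# ...while the correlation residual flags it almost immediately.
resid_alarm_at = int(np.argmax(np.abs(resid) > band))

print(f"high/low limit alarm fired: {threshold_alarm}")
print(f"residual alarm first fired at t = {resid_alarm_at}")
```

Everything here runs on a plain CPU with NumPy; the point is the monitoring principle (residuals on learned multivariate correlations beat per-signal limit checks for early, low-false-alarm detection), not MSET's actual mathematics.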
Humans hallucinate; LLMs hallucinate because they are built for prompt completion, which inherently involves filling gaps—so hallucination is a feature of intelligence, not a bug.
Offers a philosophical perspective that reframes hallucinations as an intrinsic aspect of generative models, prompting a nuanced discussion on mitigation versus acceptance.
Shifted the tone from purely technical mitigation to a broader understanding of model behavior, influencing later dialogue on deterministic AI and reliability.
Speaker: Anshumali Shrivastava
Overall Assessment

The discussion was driven by a handful of pivotal remarks that redirected focus from the prevailing hype of scaling GPU clusters to a more nuanced exploration of algorithmic efficiency, sustainability, and governance. Bernie Alen’s opening critique of hardware‑first thinking set the agenda, while Anshumali Shrivastava’s data‑backed warnings about hardware‑model mismatch and his proposals for new attention math created a technical turning point, prompting deeper dives into long‑context challenges and cost‑effective computation. Kenny Gross’s concrete 2,500× cost‑reduction claim and explanation of MSET grounded the conversation in real‑world impact, reinforcing the need for smarter solutions. Ayush Gupta and Kevin Zane expanded the dialogue to data sovereignty, privacy, and deterministic AI, linking technical choices to regulatory and reliability concerns. Collectively, these comments shifted the panel from a surface‑level hype narrative to a focused, solution‑oriented discourse on how to build sustainable, trustworthy AI within constrained resources.

Follow-up Questions
How can enterprises govern AI models and ensure data sovereignty compared to using generic services like ChatGPT?
Ensuring compliance, privacy, and reliability of AI solutions in enterprise settings is critical for regulatory adherence and trust.
Speaker: Bernie Alen
What is deterministic AI and how does it address predictability and data sovereignty concerns?
Deterministic AI aims to provide consistent, auditable outputs essential for high‑stakes applications where hallucinations and variability are unacceptable.
Speaker: Kevin Zane
How can we efficiently compute very large context windows given the quadratic scaling of attention?
Current GPU scaling is insufficient for growing context needs; new algorithms are required to enable long‑context processing without prohibitive cost.
Speaker: Bernie Alen
Can the MSET (multivariate sensor anomaly detection) method be explained in detail?
Understanding MSET’s underlying mechanics is important for stakeholders to assess its suitability for early fault detection.
Speaker: Participant (general audience)
How does MSET integrate with existing LLM services and can it be used in a plug‑and‑play fashion?
Compatibility with current AI ecosystems determines adoption ease and broader applicability.
Speaker: Participant (general audience)
Is there an open‑source ecosystem or platform where MSET can be plugged into current infrastructure?
Availability of open tools accelerates deployment and encourages community contributions.
Speaker: Participant (general audience)
What are the most critical risks that policymakers and businesses need to understand about AI today?
Identifying key risks informs regulation, governance, and responsible AI deployment.
Speaker: Bernie Alen
When will the DPDP (Data Protection) and related AI regulations be enforced, and what are the implications for AI deployments?
Clarity on enforcement timelines is essential for compliance planning and risk management.
Speaker: Participant (general audience)
What research directions should mathematics pursue to advance AI capabilities?
Mathematical foundations can drive novel algorithms and improve AI efficiency and reliability.
Speaker: Participant (master’s student in mathematics)
How reliable are current AI systems regarding hallucinations, and can we achieve hallucination‑free AI?
Hallucinations undermine trust in critical domains; understanding limits and mitigation strategies is vital.
Speaker: Participant (legal background)
How should education systems adapt to AI tools like ChatGPT, and what policies are needed to guide student use?
Balancing AI assistance with learning outcomes requires curriculum changes and policy guidance.
Speaker: Participant (general audience)
Is there a relationship between achieving AGI and the development of quantum computers?
Exploring whether quantum computing is a prerequisite for AGI informs long‑term research investment.
Speaker: Participant (general audience)
Develop new attention mathematics to overcome quadratic complexity and enable long‑context processing on CPUs
A novel attention formulation could break current performance plateaus and reduce reliance on massive GPU clusters.
Speaker: Anshumali Shrivastava
Investigate dynamic sparsity and block sparsity techniques to reduce AI compute requirements
Selective computation can lower resource usage while maintaining model performance.
Speaker: Anshumali Shrivastava
Benchmark the new attention algorithm against FlashAttention on various hardware to assess latency and scalability
Empirical evaluation is needed to validate claimed performance benefits across context sizes.
Speaker: Anshumali Shrivastava
Conduct blind bake‑off experiments with customer data to compare AI MSET against existing solutions
Real‑world validation demonstrates the practical advantages of MSET in cost and early anomaly detection.
Speaker: Kenny Gross
Define architectural principles for deterministic AI to ensure predictability, auditability, and low false‑alarm rates
Clear guidelines are required to build AI systems suitable for regulated and safety‑critical environments.
Speaker: Kevin Zane
Create comprehensive governance frameworks covering GDPR, DPDP, and other regional regulations for AI deployments
Structured governance ensures legal compliance and builds stakeholder confidence.
Speaker: Abhideep Rastogi
Apply MSET for multivariate sensor prognostics to achieve early failure detection in diverse assets
Early detection can prevent costly downtime and improve operational efficiency across industries.
Speaker: Kenny Gross
Develop a structured, stage‑based AI adoption process from goal definition to productionization and employee training
A clear roadmap helps organizations transition smoothly and realize ROI from AI initiatives.
Speaker: Abhideep Rastogi
Explore the establishment of quantum enablement centers and assess quantum computing’s role in reducing AI energy consumption
Understanding quantum advantages could lead to more sustainable AI compute solutions.
Speaker: Bernie Alen

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

AI Meets Agriculture: Building Food Security and Climate Resilience


Session at a glance: Summary, keypoints, and speakers overview

Summary

The session opened with Vikas Chandra Rastogi emphasizing that agriculture faces heightened climate risk and resource constraints, but that digital tools and AI offer a chance to secure food, nutrition and farmer incomes by embedding intelligence into public systems [9-13][14-16]. He announced Maharashtra’s Maha Agri AI Policy 2025-2029, which aims to use AI for advisory services, market information, data exchange, traceability and research, and noted the launch of Mahavistar, an AI-powered advisory network already serving over 2.5 million farmers in Marathi and tribal languages [19-24][23]. Chief Minister Devendra Fadnavis stressed that climate volatility, falling water tables and deteriorating soil make agriculture a national-security issue, and argued that AI can deliver hyper-local weather forecasts, pest alerts, precision irrigation and credit scoring, provided it rests on trusted data and ethical governance [37-44][53-56]. He described how the Maha Agri AI policy has moved from pilots to a statewide interoperable data exchange (Maha AgEx) and a traceability digital public infrastructure designed as a replicable public-good, inviting venture capital and development partners to scale these solutions [64-70][75-78].


In the panel, Dr Devesh Chaturvedi explained the creation of farmer IDs and the integrated Bharatvistar/Mahavistar platform that consolidates weather, crop, pest and scheme information in multiple languages, and outlined upcoming predictive models that use historic climate data to guide sowing and irrigation decisions [135-138][136-138]. Johannes Jett of the World Bank stressed the government’s responsibility for AI governance, interoperability and digital literacy, while noting that the private sector’s creativity can generate diverse farmer-focused applications such as a tomato-watering app, and that the Bank can provide financing and validation of these tools [154-172][176-180]. Soumya Swaminathan highlighted the need to place women farmers at the centre of AI deployment, warning that data gaps and gender-biased algorithms could exclude them, and advocated for inclusive data collection, iterative evaluation, and “human-in-the-loop” mechanisms, citing a successful Fisher-Women app as an example [205-224][247-255].


Shankar Maruwada linked the current AI push to past agricultural revolutions, arguing that open, interoperable standards, similar to India’s digital public infrastructure and railway network, are essential for scaling AI across states and sectors, and called for collaborative “open rails” that allow innovations to be shared nationally and internationally [300-307][311-322]. The participants agreed that AI should augment, not replace, extension services, and that building trust through transparent, auditable systems and inclusive governance is critical for achieving population-scale impact [55-57][236-238]. The discussion also identified the need for standardising AI ecosystems within the broader DPI framework, suggesting that open-source platforms like Sunbird can provide the architectural backbone for interoperable AI services across agriculture and other sectors [272-276]. The session concluded with an invitation to the AI4Agree 2026 conference in Mumbai, positioning the dialogue as a step toward global-south knowledge exchange and responsible AI-enabled agriculture [333-334], and reaffirmed a shared vision of leveraging AI to enhance food security, climate resilience and farmer livelihoods through coordinated policy, open data, gender-sensitive design and multi-stakeholder collaboration [84-86].


Keypoints


Major discussion points


Policy-driven scaling of AI in agriculture – Maharashtra’s “Maha Agri AI Policy 2025-2029” and the national “Bharat Vistar” platform are being rolled out from pilots to state-wide, population-scale services such as Mahavistar, which already serves over 2.5 million farmers with multilingual advisories and scheme information [19-23][24-26][57-63][64-66].


Building trustworthy, open and interoperable digital infrastructure – Speakers repeatedly stressed that AI must rest on “trusted data, ethical governance and public accountability” and be delivered through open, federated architectures (Maha AgEx, DPI, traceability DPI) that enable data exchange while protecting farmer privacy [55-57][76-78][84-86][115-133][136-138][140-141].


Ensuring gender equity and inclusion of marginalised farmers – The 2026 International Year of the Woman Farmer was highlighted as a catalyst to embed safeguards for women’s land-rights, data representation, workload reduction and participation in advisory committees; similar concerns were raised for tribal and illiterate farmers [202-210][219-226][247-255].


Multi-stakeholder collaboration and South-South knowledge sharing – The dialogue called for coordinated action among central and state governments, the World Bank, research institutes, private innovators and investors, and announced global calls for AI use-cases and the upcoming AI4Agree conference to foster cross-border learning [71-74][84-86][165-176][189-197][277-285][311-322].


Vision of moving from pilots to a national AI ecosystem – The overarching narrative was to transition from fragmented demonstrations to interoperable platforms that can deliver real-time weather, pest, market and credit advice at scale, positioning India as a laboratory for responsible AI in the Global South [57-58][83-85][112-113][128-133][158-162].


Overall purpose / goal


The session aimed to translate high-level political commitment into concrete, scalable AI solutions for Indian agriculture, while establishing a governance framework that guarantees trust, openness, and inclusivity. It sought to align state initiatives (Maha Agri AI, Mahavistar) with national architecture (Agri-Stack, DPI) and to mobilise domestic and international partners for investment, research, and South-South knowledge exchange.


Tone of the discussion


The conversation began with a formal, celebratory tone, emphasizing vision and policy milestones. As the panel progressed, the tone shifted to a more technical and problem-solving focus, highlighting challenges of data fragmentation, digital literacy, and infrastructure. Later, it became increasingly inclusive and collaborative, stressing gender equity, farmer participation, and the need for multi-sector partnerships. The session concluded on an optimistic, forward-looking note, portraying AI in agriculture as a collective, responsible endeavour capable of delivering food security and climate resilience.


Speakers

Vikas Chandra Rastogi


Role / Title: Secretary, Ministry of Agriculture and Farmers’ Welfare, Government of Maharashtra; Moderator/Host of the session and panel discussion.


Area of Expertise: Agricultural policy, AI integration in agriculture, public sector digital initiatives.


Sources: [S1], [S2]


Dr. Soumya Swaminathan


Role / Title: Chairperson, Dr. M.S. Swaminathan Research Foundation.


Area of Expertise: Agricultural science, sustainable development, women’s empowerment in agriculture, research leadership.


Sources: [S3], [S4]


Shankar Maruwada


Role / Title: Co-founder and CEO, EkStep Foundation.


Area of Expertise: Digital public infrastructure, open-source platforms for large-scale AI and DPI systems, interoperability standards.


Sources: [S5], [S6], [S7]


Johannes Zutt


Role / Title: Regional Vice President, World Bank.


Area of Expertise: International development finance, AI for agriculture, global knowledge exchange.


Sources: [S8], [S9], [S10]


Devendra Fadnavis


Role / Title: Honorable Chief Minister of Maharashtra.


Area of Expertise: State-level governance, agricultural policy, AI-driven development initiatives.


Sources: [S11], [S12], [S13]


Devesh Chaturvedi


Role / Title: Secretary, Ministry of Agriculture and Farmers’ Welfare, Government of India.


Area of Expertise: National agricultural policy, Agri-Stack framework, digital agriculture mission.


Sources: [S14], [S15], [S16]


Additional speakers:


Johannes Jett – Regional Vice President, World Bank (appears in transcript; same person as Johannes Zutt).


Jonas Jett – Mentioned in the opening remarks; likely the same World Bank representative.


Shubhati Swaminathan – Mentioned in the opening roll-call; role not specified in the transcript.


Shashi Shailar ji – Referred to as “Shashi Shailarji”; likely a state minister (specific portfolio not given).


Nitesh Rane ji – Referred to as “Nitesh Raneji”; likely a state minister (specific portfolio not given).


Mr. Shankar Maruwala – Alternate spelling of Shankar Maruwada; same role as above.


No further role or expertise details are provided for the additional speakers in the transcript.


Full session report: Comprehensive analysis and detailed insights

The session opened with Vikas Chandra Rastogi framing agriculture as being at a “turning point” because climate change, dwindling resources and volatile markets are increasing risk for farmers, yet he argued that digital tools and artificial intelligence (AI) present a unique opportunity to secure food, nutrition and farmer incomes by embedding intelligence into public systems [9-13][14-16]. He announced Maharashtra’s “Maha Agri AI Policy 2025-2029”, which is designed to use AI for advisory services, market information, data exchange, product traceability and research, and highlighted the launch of Mahavistar – the country’s first AI-powered advisory network now serving more than 2.5 million farmers in Marathi and, recently, the tribal language Bili [19-24][25-26]. He also noted Agristrack, another state-run platform that helps farmers access government schemes and services in a seamless manner [19-26].


Chief Minister Devendra Fadnavis then underscored that agriculture is not merely an economic sector but a matter of livelihood, social stability and national security, especially as climate volatility, falling water tables, deteriorating soil health and fragile supply chains intensify [37-44]. He explained that AI can deliver hyper-local weather forecasts, early pest warnings, precision irrigation and fertiliser guidance, credit scoring and transparent supply-chain information, but only if it rests on trusted data, ethical governance and public accountability – a point he linked to the Prime Minister’s earlier remarks [53-56][55-57]. The Maha Agri AI policy has moved beyond pilots to a statewide interoperable data-exchange platform (Maha AgEx) and a traceability digital public infrastructure (DPI) that is open, non-proprietary and intended as a replicable public-good for India and the Global South [64-70][75-78]. He called on venture capital, impact investors, multilateral development banks and philanthropic foundations to help scale these solutions, positioning Maharashtra as a laboratory for responsible AI-enabled agriculture [84-86][75-78].


In outlining the strategic direction for 2026, the Chief Minister articulated four pillars for AI for Agri 2026: (i) responsible governance – transparent, auditable and explainable AI; (ii) open and interoperable digital infrastructure; (iii) investment and scaling of technology (recognising that without capital scaling remains only a theory); and (iv) inclusion and gender equity, with 2026 declared the International Year of the Woman Farmer [84-86][75-78].


After the Chief Minister’s address, Rastogi introduced the panel, noting that it brought together national policy leaders, global development partners, scientific experts, architects of India’s AI infrastructure and innovators in digital public infrastructure, with the aim of moving from vision to implementation and preparing for the AI4Agree 2026 conference [94-106][107-108].


Dr Devesh Chaturvedi detailed the creation of a unique, consent-driven farmer-ID for each of the roughly 9 crore Indian farmers, which links to crop-sown data, land-holding information and soil-health cards, thereby providing the backbone of the Agri-Stack and enabling personalised AI advice [135-138]. He described the integrated Bharatvistar/Mahavistar platform that consolidates weather, crop, pest, market and scheme information into a single mobile service, currently available in English and Hindi and slated to support all Indian languages, including voice-based interaction for illiterate users, within the next three to six months [122-134]. He also highlighted a predictive model built on a century of IMD climate data that now provides one-month and one-week monsoon forecasts, which have already guided the sowing and irrigation decisions of roughly 38 million farmers [136-138].
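The farmer-ID described here is, in essence, a consent-gated record linking several datasets so that an advisory service sees only what the farmer has agreed to share. A minimal Python sketch of that idea follows; all field names and the `advisory_view` helper are illustrative assumptions for this report, not the actual Agri-Stack schema or API.

```python
from dataclasses import dataclass, field

# Hypothetical, minimal consent-gated farmer record. Field names
# (crop_sown, land_holding_ha, soil_card, consents) are assumptions
# for illustration only, not the real Agri-Stack data model.
@dataclass
class FarmerRecord:
    farmer_id: str
    crop_sown: dict          # e.g. {"kharif": "cotton"}
    land_holding_ha: float
    soil_card: dict          # e.g. {"pH": 7.1}
    consents: set = field(default_factory=set)  # fields the farmer has agreed to share

def advisory_view(record: FarmerRecord, requested: list) -> dict:
    """Return only the requested fields that the farmer has consented to share."""
    return {name: getattr(record, name)
            for name in requested
            if name in record.consents}

r = FarmerRecord("F-001", {"kharif": "cotton"}, 1.2, {"pH": 7.1},
                 consents={"crop_sown", "soil_card"})

# land_holding_ha is withheld because consent was not granted for it
print(advisory_view(r, ["crop_sown", "land_holding_ha", "soil_card"]))
```

The point of the sketch is the gating step: personalised advice draws on linked data, but each dataset is released only under explicit consent, mirroring the "consent-driven" design the session emphasised.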


Johannes Jett of the World Bank stressed that the government’s responsibilities include establishing AI governance, ensuring interoperability, and delivering digital-literacy programmes so that even low-skill farmers can use AI tools [154-166]. He argued that the private sector’s creativity can generate diverse farmer-focused applications – citing a Moroccan tomato-watering app that estimates water needs from a simple photo [170-176] – and that the World Bank can fund pilots and operate sandbox environments to truth-test AI-generated advisories before large-scale rollout [178-180].


Re-emphasising the governance theme, the Chief Minister reiterated the four pillars and stressed that the Maha AgEx architecture is open, federated and consent-driven, enabling diverse datasets to be combined for a “big picture” view of agriculture [64-66][84-86].


Dr Soumya Swaminathan highlighted that women farmers are often excluded because land-ownership records rarely list them, which would leave them out of data-driven AI services unless their data is deliberately incorporated [219-221]. She called for AI to reduce women’s drudgery – for example by easing millet cultivation in tribal areas – and advocated for iterative evaluation, bias audits and a “human-in-the-loop” approach akin to clinical-trial testing, stressing that technology must augment, not replace, traditional knowledge [236-238][247-255][219-229]. She cited the award-winning Fisher Friendly Mobile App and its Women Connect extension as models for gender-responsive AI solutions [236-238] and urged that women farmers be included in the governance committees that design, evaluate and iterate AI services [219-221].


Shankar Maruwada placed the current AI push in a historical context, comparing it to the Haber-Bosch breakthrough that “pulled bread out of the air” and arguing that today we are “pulling intelligence from the earth” for farmers [278-284]. He reiterated that open, interoperable standards – likened to India’s railway network – are essential for scaling AI across states and sectors, proposing “shared rails” built on open protocols such as Beckn to allow any state or private player to plug in services [300-307][311-322]. He advocated a minimum-viable-AI deployment that can be iteratively improved as data quality and usage increase, rather than waiting for a perfect solution [316-319]. His vision extends to 2030, when hundreds of diffusion pathways worldwide could deliver AI-enabled agriculture at scale, each inspired by collaborative, open-source ecosystems [328-331][332-336].


Across the discussion, participants converged on several key agreements. All three – the Chief Minister, Dr Chaturvedi and Mr Maruwada – endorsed a unified, open platform that aggregates weather, pest, market and scheme data, thereby eliminating the “digital red-tapism” of fragmented apps [53-58][122-132][300-307]. They also agreed that trustworthy, consent-driven data (farmer IDs, Agri-Stack) is the prerequisite for personalised AI advice and that ethical, transparent governance is essential for scaling [55-57][135-138][298-301]. Gender equity was unanimously framed as a core pillar, with calls for incorporating women’s land-ownership data, reducing manual labour, ensuring multilingual, low-tech access and involving women in advisory committees [76-78][219-229][146-166][267-269]. Finally, a broad coalition of public and private financing – venture capital, impact investors, multilateral banks and philanthropic foundations – was deemed necessary to move from pilots to platforms [75-78][178-180][286-287].


Nevertheless, the panel revealed moderate disagreements. While the Chief Minister insisted that strong ethical safeguards must precede large-scale rollout [55-57], Mr Maruwada argued for an iterative “minimum viable AI” approach that can be refined over time [316-319]. Dr Chaturvedi focused on consolidating services to combat digital red-tapism but gave limited detail on governance mechanisms, contrasting with the Chief Minister’s emphasis on pre-emptive governance [122-132][55-57]. On gender inclusion, the Chief Minister offered a high-level mantra of designing AI “with women” but did not specify mechanisms, whereas Dr Swaminathan demanded concrete steps to capture women’s land-ownership data, monitor workload reduction and embed women in governance committees [76-78][219-229]. Finally, a tension emerged between the World Bank’s preference for app-based delivery with future voice integration [130-134] and the call for immediate voice-first, feature-phone solutions for illiterate farmers [154-166][286-287].


The key take-aways distilled from the dialogue are: (i) AI can transform Indian agriculture by providing hyper-local advisories that augment traditional extension services; (ii) Maharashtra’s Maha Agri AI policy and the Mahavistar/Bharatvistar platforms already demonstrate population-scale impact, with multilingual and soon voice-enabled services for over 2.5 million farmers; (iii) open Digital Public Infrastructure – farmer IDs, the Agri-Stack and the Maha AgEx data-exchange – supplies the trusted, consent-driven data backbone required for personalised AI and interoperable sharing across states; (iv) responsible AI governance – transparency, auditability, explainability and a human-in-the-loop – is essential to build trust and mitigate bias; (v) gender equity must be embedded from the outset, ensuring women’s land-ownership data, reducing drudgery and providing low-tech, multilingual access; (vi) scaling will depend on coordinated investment from venture capital, impact investors, multilateral development banks and philanthropic actors, together with a clear regulatory framework; and (vii) global South-South knowledge exchange, facilitated by the AI Impact Summit and the upcoming AI4Agree 2026 conference in Mumbai, will be critical for sharing use-cases, standards and financing models [84-86][333-334].


In conclusion, the participants reaffirmed a shared vision of moving “from pilots to platforms, from fragmented data to interoperable systems, from experimentation to execution” (the Chief Minister’s declaration) and invited all stakeholders to collaborate with the governments of Maharashtra and India, with global institutions, investors, researchers and farmer organisations to ensure that AI becomes a force for food security, climate resilience and inclusive prosperity [83-86][333-334]. The session closed with an invitation to the AI4Agree 2026 conference (22-23 February 2026, Jio World Convention Centre, Mumbai) as the next step in operationalising these commitments [333-334].


Session transcript: Complete transcript of the session
Vikas Chandra Rastogi

May I invite Dr. Devish Chaturvedi, Secretary, Ministry of Agriculture and Farmers’ Welfare. Sir, please come onto the stage. Sir, please come onto the stage. Johannes Jett, Regional Vice President, World Bank. stage please. Honourable Chief Minister of Maharashtra, Shri Devendra Farnavis Ji. Honourable Minister, Shri Ashish Shailar Ji, Shri Nitesh Rane Ji. Our distinguished guests from India and around the world, very good morning. On behalf of the Government of Maharashtra, I welcome you to the session on Using AI for Food and Climate Resilience. Agriculture is at a turning point. Climate change is making farming riskier, resources are limited and markets are changing quickly. However, there is an opportunity. Digital tools and AI are advancing fast. Our goal is not just to use AI tools.

We must build intelligence into our public systems to help everyone. For India, the change is essential. It is the key to food and nutrition security, higher farmer incomes, and a stable economy. India has shown that digital systems work when they are open and well-governed. Our next step is to bring AI into this framework in a responsible way. Under the leadership of the Honourable Chief Minister of Maharashtra, the state has launched the Maha Agri AI Policy 2025-2029. This policy uses AI for farmer advisory services, market information, data exchange, product traceability, innovation and research, and creating capacities of stakeholders. Thank you. We are moving beyond pilots to project… at full scale. Mahavistar is the country’s first AI-powered information and advisory services network.

Today, Mahavistar is being used by more than 2.5 million farmers to get advisories in Marathi language and recently, the first tribal language in the country, Bili, has also been integrated into Mahavistar. Agristrack is helping farmers to get seamless access to various schemes and services. The Maha AgEx, which is an open, federated and consent-driven architecture for data exchange, is helping us to bring diverse data sets together to get us a big picture. Agriculture is now a key part of India’s AI mission. We are proud to work with the Government of India to lead this change. I want to thank the Ministry of Electronics and Information Technology, the Ministry of Agriculture, the Extra Foundation, the Department of Agriculture, the World Bank, the MS Swaminathan Research Foundation, the Gates Foundation, and all our partners for their support.

It is now my duty to invite our Honorable Chief Minister to the stage. He will share his vision for using AI to strengthen our food systems and protect our climate. After the address of Honorable Chief Minister, we have a panel discussion with our distinguished panelists. Welcome.

Devendra Fadnavis

A very good morning to all of you. Shri Devesh Chaturvedi, Rajesh Agarwal, Vikas Rastogi, Mr. Jonas Jett, Shubhati Swaminathan, Shushankar Maruwada, my colleagues, Shashi Shailarji, Nitesh Raneji, all the dignitaries present here. Namaskar and good morning to everyone. It is my privilege to address this distinguished gathering at the India AI Impact Summit and this important session on AI in Agriculture. We meet at a very defining moment across the world. Food systems are under strain. Climate volatility is intensifying. Water tables are falling. Soil health is deteriorating. Supply chains are fragile and global markets are unpredictable. For countries from the global south, agriculture is not merely an economic sector. It is livelihood, social stability, and national security.

India understands this very deeply. And under the visionary leadership of our Honorable Prime Minister Narendra Modi, India has placed digital public infrastructure and responsible AI at the center stage of national development. The India AI mission is about using technology to deliver inclusion, transparency, and scale. Today, agriculture must sit at the heart of this mission. Over half a billion Indians depend directly or indirectly on agriculture. Yet, smallholders face fragmented information, rising input costs, climate uncertainty, and limited access to credit and market. Traditional extension systems, however committed, cannot match the scale and the speed required. Artificial intelligence changes this equation. AI can provide hyperlocal weather predictions, early pest outbreak warnings, precision irrigation and fertilizer guidance, credit scoring based on crop intelligence, transparent traceable supply chains, and real-time market advisories.

But let me emphasize, AI is not magic. As the Honorable PM said in his inaugural session, AI must be built on trusted data, ethical governance and public accountability. Without trust, scale will not happen. Last year, Maharashtra made a very clear and decisive strategic decision: AI in agriculture must not remain confined to demonstrations or pilots. It must reach millions. Under our Maha Agri AI policy 2025-29, we adopted a policy-led, ecosystem-driven model built on openness and interoperability. Allow me to share what this has meant in practice. As rightly told by our Secretary, Maha Vistar, our AI-powered mobile platform, delivers multilingual personalized advisories, market intelligence, pest alerts and access to government services. More than 2.5 million downloads, acting as a digital friend to all these farmers.

This demonstrates one thing very clearly: farmers are ready for AI when AI is designed for them. AI-based pest surveillance, CROPSAP integration, is our mantra. By integrating geospatial analytics with pest surveillance, we have delivered early warnings to cotton-growing farmers, reducing crop vulnerability and finance risk. This is predictive governance in action. Agriculture data exchange is also one thing which is defining this step. We are building a statewide interoperable agriculture data exchange based on open standards and strong data governance. Data must empower farmers, not exploit them. Traceability digital public infrastructure: in today’s global markets, transparency is a mantra. We are unveiling a blueprint for a traceability DPI that will ensure end-to-end visibility across value chains, enhancing food safety, export competitiveness, and consumer trust.

And this is not proprietary. It is being designed as a replicable public infrastructure model for India and the entire global south. In partnership with the India AI Mission, the Government of Maharashtra, the World Bank, and Wadhwani AI, we launched a global call for AI use cases in agriculture. The resulting compendium of real-world AI applications in agriculture was released in Delhi on 17 February 2026. This compendium documents successful AI deployments from Africa, Asia, Latin America, and beyond. India is convening global knowledge for the benefit of the global south. As we move towards AI for Agri 2026 in Mumbai, our vision rests on four pillars. Responsible governance. AI must be transparent, auditable, and explainable. Open and interoperable digital infrastructure.

Innovation cannot scale in silos. Investment and scaling: technology without capital remains just a theory. And inclusion and gender equity is also a mantra; 2026 is the International Year of Women in Agriculture, and AI solutions must be designed with women farmers, not merely for them. Maharashtra today presents one of the most compelling agri-innovation ecosystems globally: 150 lakh hectares of cultivated land, diverse agro-climatic conditions, leading agriculture universities and AI research centers, a vibrant startup ecosystem, a clear regulatory framework and single-window facilities, a vision for investors and a vision for the future. We invite venture capital funds, impact investors, multilateral development banks, corporate innovation arms, and philanthropic foundations to partner with us. And in this partnership, we envisage scaling AI advisory platforms, co-developing traceability DPI modules, investing in agri-tech startups, supporting digital literacy, especially among women farmers, and building capacity in rural AI ecosystems.

When you invest in Maharashtra, you invest in scalable solutions for emerging economies worldwide. Food security, climate resilience, and AI governance are deeply connected. Countries that master AI-enabled agriculture will secure farmer incomes and strategic stability. India has the scale, DPI, and democratic governance model to demonstrate how AI can be deployed responsibly at population scale. Maharashtra is proud to be a laboratory of that ambition. Friends, this satellite session is a declaration. We will move from pilots to platforms, from fragmented data to interoperable systems, from experimentation to execution, from intention to investment. The government of Maharashtra stands ready to collaborate with the government of India, with states, with global institutions, investors, researchers, and farmer organizations. Let us ensure that AI becomes a force.

for food security, climate

Vikas Chandra Rastogi

Thank you, sir, for your visionary address. You always continue to inspire us to aim higher and achieve better. And under your leadership, I can assure you the Agriculture Department will rise to the challenge and serve the aspirations of more than 15 million farmers of the state of Maharashtra. Thank you so much, sir. We will now start the panel discussion in a few moments. For this session, we are fortunate to have with us a distinguished panel representing national policy leadership, global development, scientific expertise, national AI architecture, and digital public infrastructure innovation. Let me introduce the panelists once again. Dr. Devesh Chaturvedi, he is the Secretary, Ministry of Agriculture and Farmer Welfare.

Dr. Chaturvedi leads our national effort in agriculture and farmer’s welfare. Mr. Johannes Jett, he is the Regional Vice President, World Bank. Mr. Jett brings a vital global perspective on development and finance from the World Bank. Ms. Soumya Swaminathan, she is the Chairperson of Dr. M.S. Swaminathan Research Foundation. Dr. Swaminathan is a global leader in science, a champion for sustainable development, and a strong advocate for mainstreaming women farmers’ roles in agriculture. Mr. Shankar Maruwala is a co-founder and CEO of Ekstey Foundation. He is a pioneer in building digital public infrastructure that empowers people at scale and I am very proud to say that the government of Maharashtra and Ekstey Foundation together have brought out Mahavistar which more than 2.5 million farmers are using today to get the advisories and information that they need on a daily basis.

The objective of this panel discussion is to move from vision to implementation. Specifically, we will deliberate on how to institutionalize AI within agriculture systems at scale, how to ensure inclusion, especially of women farmers and smallholders, how to build interoperable, trustworthy and sustainable AI governance ecosystems, and how to strengthen collaboration between the center and the states, global institutions, industry, and academia. The session is also an important precursor to the AI4Agree 2026 Global Conference where we will continue these deliberations in greater operational depth with governments, investors, innovators, and development partners. The AI4Agree Conference is being held in Mumbai on 22nd and 23rd of February at Jio World Convention Center. With this context, let’s begin our discussion. My first question is to Dr.

Devesh Chaturvedi. Sir, under your leadership, the ministry has taken significant steps in advancing the digital agriculture mission and operationalizing the Agri-Stack framework. You are laying a strong digital foundation for the sector. As we now look at integrating AI more systematically into agriculture, how do you envision the center-state collaboration framework, specifically to ensure that AI deployments are aligned with national architecture while allowing states the flexibility to innovate based on local agroclimatic and socioeconomic context? And finally, how can we institutionalize this collaboration to achieve population-scale impact while maintaining interoperability and data trust?

Devesh Chaturvedi

Thank you. A lot of questions in one question. So what I'll do is first take you through the initiatives. First of all, we deeply appreciate the leadership taken by Maharashtra, obviously under the leadership of our Honourable Chief Minister, and with the agriculture department. They have done exceptional work in the digital agriculture mission by developing farmer IDs and digital crop surveys, and they also launched Mahavistar as a precursor of Bharatvistar. And recently, on the 17th, the Government of India also launched one of the first integrated AI-based systems for farmers, Bharatvistar, which presently provides services both through an Android-based app and through mobile telephony: weather advisories, ICAR-based crop advisories, pest advisories, market information regarding various agricultural produce traded in the mandis, and lastly, the Government of India's schemes.

Now, why is AI important in agriculture? We started with the digitalization of services: we had DBT, we had online systems for a common person to apply for various common services, and we started to have a lot of such systems. But what was felt was that while we had initiated this process to ensure that bureaucratic red-tapism was removed, we were moving towards a sort of digital red-tapism.

Because within our ministry, different schemes had different apps, and they had different ways of selection. And within the state also, horticulture had a different database of farmers, agriculture had a different database, animal health had a different database, crop insurance had a different database. So basically, for a farmer who has to avail so many services, we felt that he or she was getting lost in which app to use for what. And sometimes it becomes more difficult to avail the services through online systems, or to get advisories, than to go to a person and say, tell me how to do it. So the whole idea was that once we have this AI-based system, we have the same platform for different…

applications and different advisories, at the click of a button or maybe just by voice. So that is the whole idea of shifting towards AI-based solutions. In the first phase of the artificial intelligence system, Bharat Vistar (or Mahavistar in Maharashtra), the crop advisories, the weather advisories, schemes information, information about how to apply and the status of that application, and also the mandi rates have all been put on one platform. Presently it is working in English and Hindi, but in the next three to six months we'll be taking it to all the Bhashini-supported languages.

And the next step, as you mentioned, is that the states are working together with us on the digital public infrastructure. Close to 9 crore farmer IDs have been developed. So what is a farmer ID? You must have read the statement of the Honourable Finance Minister that DPI is the new UPI. The basic idea of the Agri Stack, which is the agriculture part of DPI, is that each farmer has a unique farmer ID with, at the back end, all the crops the person has sown, the land available to that person with the share of the land, and the soil health card details if a soil health card has been issued. With these basic details available on the system, the ID empowers the farmer to avail services, because it is already approved by the relevant authorities in the government; the authorities providing the services are not required to cross-verify the credentials of the farmer against the record of rights, or whatever it was in the different states. Every state (Maharashtra is one of the leading states here) is working with us towards saturation of farmer IDs and the crop survey. Once this is there, the AI will further transform into a very, very tailored advisor: a person calls and gives the farmer ID or Aadhaar, and at the back end, based on consent, we will access the details of where the farmer is from, what crop is being grown and what the soil health conditions are, and very targeted advice will be given. This will be made operational in the next three to six months, so instead of pushing data which may not be of interest to the farmer, very specific, tailored data for that farmer will be available, based on the integration of the digital public infrastructure with Bharat Vistar. And the third aspect will come when we do the predictive models. We tried that, and you may remember that in the inaugural session the Google CEO mentioned the predictive model which we ran for about 3.8 crore farmers: we used 100 years of IMD data and a model to predict the monsoon for the next one month and for the next week, and that prediction was fairly accurate. We got the feedback that farmers did take decisions to sow and to irrigate based on the predictions which were sent.

And now we will expand the predictive models to provide more advisories on the market situation and the weather situation, which will help improve the decision-making of the farmers so that they can increase their productivity and reduce their costs. That is the whole idea of AI in agriculture. And we hope that more and more farmers will adopt it. It will be not exactly a replacement but a sort of additionality to the human extension services, which, we find, are not able to reach all the farmers because of the resource constraints of each state. For the extension machinery, the KVKs or our state extension machineries, it's very difficult to reach each and every farmer, because we can't have a person sitting in each village.

But AI, along with digital public infrastructure and along with mobile and internet penetration in the various rural areas, will ensure that that gap is removed and we get more and more access to the farmers on…

Vikas Chandra Rastogi

model that provides just-in-time support to central and state governments, enabling them to experiment, iterate, and scale AI solutions responsibly.

Johannes Zutt

Thanks very much for those questions, and thank you also for the invitation to be here today. We're on the cusp of a major revolution in how support to farmers and agriculture happens. I actually grew up on a farm; I worked on a farm from the ages of 10 to 21. I think every hour I wasn't in school and was at home, I was working on the farm. In some ways it feels paleolithic, because we didn't have computers. We had telephones that were connected to wires, and our ability to get information about what was happening around us was extremely limited. We spent a lot of time trying to find out the things that today you can find out very, very quickly using AI for agriculture.

And that's truly empowering for farmers. But, you know, to make that work for farmers, there are a lot of things that need to go right. And I think it's worth reflecting a little on the different roles that different actors in the ecosystem have, starting obviously with government. My colleague mentioned a number of these things earlier. The government's responsibility is principally the foundations and communications: things like the governance of AI, interoperability, and obviously ensuring that educational programs include appropriate types of skilling in the use of digital services. This is a big challenge in countries like India, where frankly there are still people who don't have sufficient literacy to read what comes over a basic smartphone. And then there is ensuring that the research and extension…

that is provided through these AI platforms is credible, is trustworthy, is backed by science. I think that's also extremely important. Of course, farmers will find out if it isn't, but at high expense, right? So we want to make sure that they're not being advised to do things that are negative for them. And then also looking at the costs of service and the connectivity: what does the farmer actually need to be able to link into these different types of platforms that give information? Because, of course, we're often talking about farmers who have very, very few assets and who may be essentially unable to stay permanently connected.

They may not even be easily connected to the Internet; they're going to have very basic smartphones, et cetera. So the government has a lot of work to do in all of those areas. Then you can look at what the private sector can do. One thing the government needs to do is crowd in private sector capacity and capital. But once we turn to the private sector, what is the private sector's principal advantage? I think there's a lot of creativity in the private sector. The actual applications being developed are built by individuals in the private sector with a passion for the specific sorts of issues that are constraining farmer success.

And that creativity will result in a number of different applications that will be aimed, in most cases, to help farmers overcome certain hurdles that they face. And we can kind of let a thousand flowers bloom there. And see what actually takes root. And it’s amazing what you start to see. Just yesterday I was learning about an application in Morocco developed by a tomato farmer who was able to give advice about how much water tomato plants need simply by taking a picture of the current tomato plant. Take a picture and it tells you how much water you actually need to give this plant, which obviously in a water -stressed environment is vital, vital information. And then there are roles for institutions like my own, the World Bank Group, which can help to provide some of the financing that helps develop these applications and also the foundational backbone for artificial intelligence.

And we can also play a role at the advisory end, where we help to truth-test, if you like, the information coming through the different applications that are coming out of the AI sandbox in different contexts, to make sure that it's actually providing information that's useful to the end beneficiary and enhancing productivity at the farm level.

Vikas Chandra Rastogi

Thanks. I think you have rightly pointed out the role of innovation and research. What we see is that we require high-quality, robust data to build upon. And as the Honourable Chief Minister mentioned, Maha AgEx is one step in that direction, wherein we bring diverse data sets and make them accessible to researchers, academic institutions, departments, and also startups. Many of these startups will be showcasing their innovations at the AI for Agri conference in Mumbai, so we request all of you to please come and see for yourselves what kind of excitement they have and what kind of solutions they offer. I have one supplementary question for you. How do you see a platform such as…

the AI Impact Summit, as well as the AI for Agree Global Conference, contributing to deeper global collaboration and South-South knowledge exchange in this domain?

Johannes Zutt

Thank you for that additional question. I mean, obviously, India is in a great position to lead the development of AI, particularly for developing countries where there are still significant challenges helping poor people to escape poverty permanently. India has demonstrated digital innovation for a long period of time already. It’s got an enormous population with a huge variety. The challenges of bringing farmer -appropriate data to the farmers’ fingertips in India are… I was going to say India is a microcosm of the rest of the world. It’s hardly a microcosm. It’s so huge. But because you have so many languages, so many different regions, so many different types, so many different cultures, and the starting conditions at the farm level are so incredibly varied, figuring out how to make AI at the farm level work in India will automatically have a large number of spillover learnings for other countries around the world.

And because India, after China and the United States, is the country in the world that is best positioned actually to push all of this work forward, and because it is itself a developing country, it’s very, very clear that it will have a central role to play in South -South learning for those reasons.

Vikas Chandra Rastogi

Thank you so much. I move on to Dr. Swaminathan. Dr. Swaminathan, your father, Professor M.S. Swaminathan, played a historic role in shaping India's agriculture transformation during the Green Revolution, ensuring food security at a critical juncture in our history. Today, as we speak of a new phase of transformation driven by AI, we are again at an inflection point. You have consistently championed science-based policy, sustainability, and the empowerment of women farmers. With 2026 being recognized internationally as the year of women farmers, how can we ensure that AI-led agriculture transformation strengthens women's agency, knowledge access, and climate resilience? And what institutional safeguards and design principles must we embed today so that this new technological revolution becomes equitable, farmer-centric, and grounded in scientific integrity?

Dr. Soumya Swaminathan

Thank you very much for that question, Vikasji. Not only is this year the International Year of the Woman Farmer, but we know that agriculture itself is increasingly being feminized, with many men actually leaving farming to the women and migrating out to the cities for other opportunities. So it is really essential to put women at the center of all that we are discussing. And I think the Chief Minister today gave us a wonderful vision of what the future can be, provided, of course, like you said, that there are the guardrails, the institutions, the safeguards and the design principles that we think about from the very beginning. So my father, Professor M.S.

Swaminathan, used to say that the Green Revolution was not only about the seeds. Of course, the seeds played a very big role, you know, the high-yielding varieties. But it was about the entire ecosystem and the institutions that were developed at that time, which included the outreach (later on, of course, the Krishi Vigyan Kendras were developed), but also the access to credit, the water, the fertilizers, the education and empowerment. And it ultimately became a success because farmers realized its potential and took it on. And what he used to say is that no technology is pro-poor or pro-rich, or pro-woman or against women; it's how we use that technology.

So it's really, like you said, the inflection point today is how we use this very powerful technology that has come to us. I think there are a few points here to make sure particularly that women farmers are not left behind. The first important fact is that only a minority of women in India have their name on the land document; mostly it is in the man's name. Deveshji was telling me today that this is improving, and that the latest census shows that perhaps at least a quarter of the properties are also in the name of women, either jointly or solely. But that still means that, you know, three-fourths of them don't have land in their name.

And a system that operates basically on publicly available data will then leave out those whose data sets are not available. So I think it would be really important at the early stages itself to think about how women's data can be incorporated, because the algorithms are fed by the data we have. And so all of these advisories may be very suitable for a man who's operating a tractor on a farm, but not at all relevant for a woman who's still working with outdated instruments and trying to, you know, till her land. And particularly when we look at more remote areas, tribal areas, women do a lot of the agriculture, like millets, for example.

Mostly it is women who grow millets. And a lot of mechanization is still completely absent; it is all still very much done using traditional methods and tools, and it involves a lot of drudgery. So I would say that one of the benchmarks I would look at is: is it reducing the drudgery and the workload on women farmers? Is AI helping to do that? We also need to look at certain indicators for success. And you mentioned science. I'm a medical researcher, and the way that we evaluate products is by doing clinical trials, by examining the data and the evidence and then recommending them for wider use.

So again, a note of caution would be to, as we roll it out, we need innovation certainly. We also need to do the evaluation, looking at inherent biases, looking at who’s being excluded, looking at are there unanticipated risks or side effects that we didn’t know about. But most of all, it’s this inclusion. I think we don’t want those who are already left behind to be further left out. So I think the ongoing research and data collection and feedback loops and most importantly, having the voices of those for whom we are developing all these. I think in the room, I don’t think we have any farmers or women farmers. So we are all discussing from what we know.

But if you're the farmer, like you were saying, working there, you know the constraints and the conditions in which you're working. So I think the women farmers, and farmers in general, must have a role. They must be part of the committees that evaluate, make recommendations or make suggestions for improvement. It has to be an iterative process. Any technology is only as good as the application for which it's developed. I'll give you one example of an app that the M.S. Swaminathan Research Foundation developed for fisherwomen. We had a very successful app for fishermen called the Fisher Friend Mobile App, which won the UN Tech for Nature Award last year. But fisherwomen were, as usual, left out.

And so the Women Connect app actually gives them, on a tablet, the information they need to sell, because once the fishermen have come back from sea, the women have to do all of the post-harvest work; and the same is true for crops or fruits or vegetables as well. So there's the connection to the market, of course the information about pests and pathogens and when to buy what and what inputs to use, but also being able to organize themselves. There are many FPOs now, and FPCs and SHGs, made up of women farmers; empowering them means giving them the knowledge and tools. And the last thing I would say is we still need humans in the loop.

I don't think we should assume that making everything run by machines is going to solve our problems; I think that's risky. And in a country like India, we also need employment. I don't know how many of you have seen the film called Humans in the Loop, about a tribal woman from Jharkhand who actually raises questions about the algorithm; it's a very thought-provoking film. So I think humans in the loop are going to be important. We have our Krishi Sakhis and so on; we need to empower them with these tools. So AI and all these digital tools, if they're used in addition to the traditional knowledge and wisdom that people have, and augment it, and give people the knowledge they need at the right time and the right place, I think we can go a very long way.

Thank you.

Vikas Chandra Rastogi

Thank you, madam. You have rightly pointed out the need to be more sensitive while developing systems, to ensure inclusivity, and to ensure that those for whom they are being developed are in the loop and are being consulted. In fact, the feedback mechanism that we have developed in Mahavistar takes care of those requirements. I am also very happy to share that the Government of Maharashtra and Dr. M.S. Swaminathan's foundation are working together on some of these issues: how to bring women's rights in farming to center stage, how to create bio-happiness using our universities and educational systems, and what kind of nutritional security we must look for, because we have food security, but it is nutritional security that we must aspire to.

We are happy to have support and assistance from MSSRF in that direction. My final question is to Mr. Shankar Maruwada. Mr. Shankar, EkStep has played a foundational role in shaping India's DPI landscape through open-source platforms such as Sunbird, which has powered large-scale systems like Diksha and Mahavistar, and open network initiatives built on the Beckn protocol. These efforts have demonstrated how open standards and interoperable architecture can enable population-scale transformation, which we are already seeing today. As we now enter the era of AI-driven public systems, how should we think about standardizing AI-based ecosystems in a similar spirit? How can we bring DPI into AI? And what architecture and governance principles are required to ensure interoperability, trust and sustainability in AI deployments across sectors such as agriculture?

Shankar Maruwada

Again, a whole lot of questions, but let me make my best attempt to answer them. More than 100 years ago, the world faced what was known as the Malthusian crisis: Malthus, the economist, predicted that if we continued to grow in the same way, we would run out of land and run out of soil. We were a billion and a half people then; we are eight billion now. Most of us may not even have heard of the Malthusian crisis. What happened? Someone called Haber and someone called Bosch created a miracle. Haber synthesized ammonia using high pressure and temperature, and Bosch put it into an industrial process. That phenomenon is now historically known as pulling bread out of air. It took a lot of effort and, as Soumya said, the creation of a massive ecosystem. Germany, which pioneered this, lost that race to the US, because the US did a better job of diffusing the technology safely to the farmers.

They created the discipline of agricultural engineering. They created institutions like the Fertilizer Development Center. They held technology demonstrations for farmers to show them how synthetic ammonia could be used. By the way, 50% of the nitrogen in our body comes from synthetic ammonia; that's a fact. We owe a lot to Haber and Bosch. China then took it on in the 80s, buying 10 big plants from Kellogg and training 300 million farmers, showing them how to use synthetic fertilizers. They went on to become global leaders in agriculture. India is at a pivotal point where, if we learn the lessons from such past experiences, our Green Revolution and our DPI experience, the equivalent of pulling bread out of thin air is pulling intelligence from the earth and providing it to the farmer. This is again not science fiction. Mahavistar, the pioneer, along with Bharatvistar, has taken the first steps towards this. When Mahavistar was designed, to build off what Soumya has said, it was designed with inclusion in mind. Inclusion and diversity were not an afterthought, because to solve not just Maharashtra's problems but for India's scale and diversity, we need to think of the last person, the most discriminated, in the remotest part of India, and design systems that work for them. We call that DPI. Now let me give you a specific example of this. In Bharatvistar, right from the beginning, the design spec was: we need an illiterate farmer (to build off John's point about digital literacy), with a feature phone, not a smartphone, to be able to talk in his or her native language and native dialect (Marathi itself has many dialects), to talk on the phone the way she is comfortable talking to another person, ask a question, have a conversation, and get answers. That process took us the better part of nine months. Why? Because it's not just AI; it's data, it's processes, it's training the farm extension workers, it's having trust in whether this will work. What about the costing: will I blow up my entire state budget on a model? Do I have autonomy? Can I switch models in and out? These are very, very difficult questions. It took us a lot, in partnership with a whole lot of people. The Government of Maharashtra led the effort, but the IndiaAI Mission, Bhashini, IIT Madras, IIIT Hyderabad, the World Bank, Google and many other providers each chipped in a part of the solution.

Now, here's the best part. Because we all collaboratively invested in figuring out a solution there, that solution could be deployed in Bharat Vistar more confidently and easily. Again, there are the same challenges that Secretary Chaturvedi talked about: do we have the data? He used a very nice phrase, digital red-tapism; our data is in different formats. What matters is the intent of the government. The Government of India triggered the process, which allowed Bharat Vistar to be launched the day before. It's a start. Data will get better, the systems will get better, usage will improve, that will generate more data, and then, over the years, the ecosystem will be built. This we know from our experience.

What makes this happen? What is the secret sauce, the design principles? It is the same as DPI; we are taking the same principles that worked for DPI. One: open, interoperable systems. Think networks, not just portals and platforms and siloed, fragmented systems. What's the best example of this? The railways in India. We have such a vast landscape, but the rails are common. Every state can decide what it wants to move: private, public, defense, farming. The Indian Railways just provides a backbone. That allows everyone to do this. There was a time when we had different rail gauges. Now that sounds so silly, but there was a time like that. India is showing that we don't have to repeat those early mistakes in digital also.

By creating interoperable networks based on open protocols like Beckn, and by collaborating with each other (one of us brings in data, somebody brings in technology, somebody brings in policy, somebody brings in research), these collaborative open networks, with the launch of Bharat Vistar, put India in a very unique and responsible position. Unique because we have these open rails and the experience of DPI. Responsible because it is a start. Unlike the technologies of the past, where you perfect the technology and then deploy it, with AI you deploy something minimal to start and then it evolves: models get better, data gets better, usage gets better, and it gets better and better over time. That is the unique juncture we are at in India.

What will that mean? When ICAR plugs into this network with its weather and pricing data, the network makes it available to any state that wishes to turn on the supply from ICAR. When the private sector comes out with a very innovative app, let's say the tomato example that John talked about, any state can say: I like that, I will have it made available to my farmers. The farmers anyway trust the state; they can go to the same app and now see this there also. If the tomato app developer wants, they can also go directly to each farmer, but that is very expensive. So shared rails allow us to spread innovation and diffuse it very quickly through society, keeping in mind both inclusion and rewarding innovation, because innovation has to be rewarded.

And I want to end with a very simple analogy. When Edmund Hillary climbed Mount Everest, he made a lot of people believe it was possible. When Mahavistar was launched, it made the country believe that it is possible to make AI serve the farmer. And to that extent, the responsibility that Mahavistar, the Maharashtra government and the Government of India have is to create these pathways for the rest of the country, for the other states. At the EkStep Foundation, we made a declaration two days ago: we would like to see a world by 2030 where there are hundreds of such diffusion pathways, each created by a different set of people, in different sectors, in different countries and continents, but each inspiring different AI pathways to safe impact at scale. It's a very exciting and very collaborative vision. If we all get together, we can create miracles in our own lifetime. Thank you.

Vikas Chandra Rastogi

With that profound thought, we'll conclude today's panel discussion. I thank all the panelists; they have really opened up a new vision in front of all of us. We invite all of you to the AI for Agree conference in Mumbai on the 22nd. Thank you so much. We don't actually have time for questions, as the next session is about to start; we can discuss then. Thank you.

Factual Notes: Claims verified against the Diplo knowledge base (6)
Confirmed (high confidence)

“Maharashtra’s “Maha Agri AI Policy 2025‑2029” was announced by Chief Minister Devendra Fadnavis.”

The knowledge base records that Chief Minister Devendra Fadnavis presented Maharashtra’s Maha Agri AI Policy 2025‑2029, confirming its announcement.

Additional Context (medium confidence)

“Mahavistar, an AI‑powered advisory network, serves farmers in Marathi and the tribal language Bili.”

The source notes that AI systems are provided to farmers in languages they understand through multiple channels, illustrating the language‑focused approach of Mahavistar, though it does not name the platform directly.

Additional Context (medium confidence)

“AI can deliver hyper‑local weather forecasts for farmers.”

A knowledge‑base entry describes hyper‑local weather forecasting models, confirming that such fine‑grained forecasts are technically feasible.

Additional Context (medium confidence)

“AI can provide early pest warnings, precision irrigation, fertiliser guidance, credit scoring and transparent supply‑chain information for farmers.”

The source on AI’s role for farmers highlights that AI eases key farming decisions (what to plant, when to plant, which inputs to use, when to sell), which aligns with the broader set of advisory functions mentioned.

Additional Context (medium confidence)

“AI solutions must rest on trusted data, ethical governance and public accountability, as linked to the Prime Minister’s remarks.”

Other documents discuss the need for embedded AI governance frameworks, trusted data and responsible AI principles, providing additional context to the claim.

Confirmed (medium confidence)

“The Maha Agri AI policy has moved beyond pilots to a statewide interoperable data‑exchange platform and a traceability digital public infrastructure.”

The knowledge base states that the policy is shifting from demonstration projects to full‑scale implementation, confirming the transition beyond pilots, though it does not detail the specific platforms.

S15
AI Meets Agriculture Building Food Security and Climate Resilien — May I invite Dr. Devish Chaturvedi, Secretary, Ministry of Agriculture and Farmers’ Welfare. Sir, please come onto the s…
S16
AI for agriculture Scaling Intelegence for food and climate resiliance — Mr. Ramesh Chaturvedi, Secretary of Ministry of Agriculture and Farmers Welfare. Sir, please come onto the stage. Our Ho…
S17
Keynote-Mukesh Dhirubhai Ambani — Distinguished guests, my fellow Indians, namaste. The Global AI Impact Summit is a defining moment in India’s tech histo…
S18
AI That Empowers Safety Growth and Social Inclusion in Action — Thank you for that. I think it’s partly easy and partly tough because there was a lot to understand, but I think we can …
S19
Leveraging AI to Support Gender Inclusivity | IGF 2023 WS #235 — By engaging users and technical communities, policymakers can gain valuable insights and perspectives, ultimately leadin…
S20
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Vivek Raghavan Sarvam AI — I come here to say that India can. And I think that’s the message I want to say. India can. And India can train state -o…
S21
https://app.faicon.ai/ai-impact-summit-2026/ai-meets-agriculture-building-food-security-and-climate-resilien — Dr. Chaturvedi leads our national effort in agriculture and farmer’s welfare. Mr. Johannes Jett, he is the Regional Vice…
S22
Increasing routing security globally through cooperation | IGF 2023 WS #339 — In conclusion, the Netherlands Standardization Forum plays a vital role in promoting interoperability and advising the g…
S23
OPENING STATEMENTS FROM STAKEHOLDERS — The need for an open, accessible, and inclusive internet that empowers all individuals is emphasised, despite significan…
S24
Building Population-Scale Digital Public Infrastructure for AI — Excellent point. Excellent point, Trevor. And I think you brought out the inherent stress in the phrase diffusion pathwa…
S25
Collaborative AI Network – Strengthening Skills Research and Innovation — Okay. And thanks for bringing DPI into the picture because my next question is on that. Nandan announced yesterday 100 P…
S26
Why science metters in global AI governance — The evidence is very rapid. The field is moving so rapidly. In COVID, we had to review a couple of hundred publications …
S27
Democratizing AI Building Trustworthy Systems for Everyone — Hall clarifies that while the open data movement has been important, not all data can or should be completely open. She …
S28
Transforming Agriculture_ AI for Resilient and Inclusive Food Systems — “Data should be as the infrastructure.”[74]. “Often, the farmers don’t own, and then, of course, the… the model and th…
S29
Beyond universality: the meaningful connectivity imperative | IGF 2023 — Furthermore, an equality index is being worked on, indicating a focus on promoting gender equality and the inclusion of …
S30
WS #214 AI Readiness in Africa in a Shifting Geopolitical Landscape — Examples of missing stakeholders include women’s rights organizations, trade unions, journalists, researchers who should…
S31
Press Conference: Closing the AI Access Gap — Finally, there is strong agreement among the speakers for trust-based, multi-stakeholder partnerships in AI. They argue …
S32
AI Governance Dialogue: Steering the future of AI — Infrastructure | Development | Legal and regulatory Martin identifies two critical areas requiring immediate collaborat…
S33
Artificial Intelligence &amp; Emerging Tech — In conclusion, the meeting underscored the importance of AI in societal development and how it can address various chall…
S34
Comprehensive Report: China’s AI Plus Economy Initiative – A Strategic Discussion on Artificial Intelligence Development and Implementation — There is unexpected consensus among speakers from different backgrounds (academia, industry startup, and large corporati…
S35
AI Meets Cybersecurity Trust Governance &amp; Global Security — These key comments fundamentally shaped the discussion by challenging conventional assumptions about AI security and gov…
S36
Building Climate-Resilient Systems with AI — Explanation:Academic speakers unexpectedly emphasize moving beyond research and pilots to immediate deployment, showing …
S37
Driving Indias AI Future Growth Innovation and Impact — Summary:The main areas of disagreement center around regulatory approach (light-touch vs. balanced frameworks), implemen…
S38
Education, Inclusion, Literacy: Musts for Positive AI Future | IGF 2023 Launch / Award Event #27 — Finally, the analysis highlights the need for academics to propose alternatives to address biases in the digital medium….
S39
Artificial intelligence (AI) – UN Security Council — Algorithmic transparency is a critical topic discussed in various sessions, notably in the9821st meetingof the AI Securi…
S40
Safe and Responsible AI at Scale Practical Pathways — Guardrails, Human‑in‑the‑Loop, and Risk‑Assessment Mechanisms Are Essential for Reliable Deployment
S41
AI for agriculture Scaling Intelegence for food and climate resiliance — Artificial intelligence | Human rights and the ethical dimensions of the information society The minister emphasizes th…
S42
AI Meets Agriculture Building Food Security and Climate Resilien — Evidence:As Honorable PM said in his inaugural session, AI must be built on trusted data, ethical governance and public …
S43
Transforming Agriculture_ AI for Resilient and Inclusive Food Systems — “Data should be as the infrastructure.”[74]. “Often, the farmers don’t own, and then, of course, the… the model and th…
S44
AI Meets Agriculture Building Food Security and Climate Resilien — “Our next step is to bring AI into this framework in a responsible way”[15]. “We will deliberate on how to ensure inclus…
S45
AI for agriculture Scaling Intelegence for food and climate resiliance — Government’s foundational responsibilities include governance of AI, ensuring interoperability and accessibility, provid…
S46
The Future of Digital Agriculture: Process for Progress — Assessments from the grassroots level indicate a continued preference for these mediums for disseminating information an…
S47
ICT POLICY FOR AFGHANISTAN — –  MCIT, through the Ministry of Agriculture, Irrigation and Live Stock (MAIL) will adopt ICT in the planning, manageme…
S48
NATIONAL DIGITAL STRATEGY 2023 – 2030 — technology to better understand the efficacy of farming plans, their effects on crop health, and the environmental impac…
S49
NATIONAL INFORMATION AND COMMUNICATION TECHNOLOGY POLICY — – (a) Inadequate channels for information delivery among framers, businesses and policy – (a) Limited and ina…
S50
Equi-Tech-ity: Close the gap with digital health literacy | IGF 2023 — Policy solutions were proposed as a means to bridge the digital divide and ensure that digital health truly advances hea…
S51
Digital Public Infrastructure, Policy Harmonization, and Digital Cooperation — – The need for capacity building and awareness, especially for youth and rural areas
S52
Turbocharging Digital Transformation in Emerging Markets: Unleashing the Power of AI in Agritech (ITC) — To address this, business models that can reach smallholder and low-income farmers are essential. For example, the “free…
S53
Survival Tech Harnessing AI to Manage Global Climate Extremes — Funding and Public-Private Partnerships: Significant attention was given to funding mechanisms through ANRF (Anusandhan …
S54
Democratizing AI: Open foundations and shared resources for global impact — Bernard Maissen: Yes, thank you. Hello, everybody, dear panelists. Nina, thank you for giving me the floor. In the globa…
S55
Multi-stakeholder Discussion on issues about Generative AI — It is crucial for individuals to understand how to utilize AI and other technological advancements effectively and respo…
S56
How AI Drives Innovation and Economic Growth — High level of consensus across diverse perspectives (World Bank, academia, legal scholarship, development practice) sugg…
S57
AI for agriculture Scaling Intelegence for food and climate resiliance — Maharashtra’s strategic approach represents a shift from pilot projects to population-scale implementation. The state’s …
S58
AI for agriculture Scaling Intelegence for food and climate resiliance — A lot of questions in the same question. So what I’ll do is I’ll just first take you through the initiatives. First of a…
S59
AI Meets Agriculture Building Food Security and Climate Resilien — Chief Minister Devendra Fadnavis presented Maharashtra’s Maha Agri AI Policy 2025-2029, emphasizing the shift from demon…
S60
AI Meets Agriculture Building Food Security and Climate Resilien — Maharashtra has implemented a comprehensive AI policy for agriculture that includes the Mahavistar platform, which provi…
S61
Democratizing AI Building Trustworthy Systems for Everyone — Hall clarifies that while the open data movement has been important, not all data can or should be completely open. She …
S62
Transforming Agriculture_ AI for Resilient and Inclusive Food Systems — “Data should be as the infrastructure.”[74]. “Often, the farmers don’t own, and then, of course, the… the model and th…
S63
The Foundation of AI Democratizing Compute Data Infrastructure — Garg emphasizes that effective DPI should provide both access to technology and agency for people to participate as co-c…
S64
Leveraging AI to Support Gender Inclusivity | IGF 2023 WS #235 — By engaging users and technical communities, policymakers can gain valuable insights and perspectives, ultimately leadin…
S65
HIGH LEVEL LEADERS SESSION I — Sufficient representation of women and marginalised groups in collected data is highlighted as important. Intersectional…
S66
Leveraging the FOC at International Organizations | IGF 2023 Open Forum #109 — They have made commendable efforts to include and amplify the voices of marginalized communities who are often underrepr…
S67
Artificial Intelligence &amp; Emerging Tech — Jörn Erbguth:Thank you very much. So I’m EuroDIG subject matter expert for human rights and privacy and also affiliated …
S68
WS #462 Bridging the Compute Divide a Global Alliance for AI — Ivy Lau-Schindewolf: Thank you for framing it that way, because I was sitting here thinking, you know, I really am not q…
S69
WS #279 AI: Guardian for Critical Infrastructure in Developing World — 5. Increase collaboration and knowledge sharing through international forums and regional partnerships. Gyan Prakash Tr…
S70
Open Forum #26 High-level review of AI governance from Inter-governmental P — Audrey Plonk: Thanks, Riti. I just want to say that governance is a lot more than regulation. Regulation is really imp…
S71
AI Governance Dialogue: Steering the future of AI — Infrastructure | Development | Legal and regulatory Martin identifies two critical areas requiring immediate collaborat…
S72
Keynote-Ankur Vora — Evidence:He explains the stakes: ‘For a farmer, every cropping season comes down to a handful of decisions. What to plan…
S73
Secure Finance Risk-Based AI Policy for the Banking Sector — Ajay Kumar Chaudhary opened by highlighting India’s opportunity to lead in AI development while managing associated risk…
S74
Secure Finance Risk-Based AI Policy for the Banking Sector — This discussion focused on the governance of artificial intelligence in India’s financial services sector, emphasizing t…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Devendra Fadnavis
4 arguments · 92 words per minute · 957 words · 621 seconds
Argument 1
AI can deliver hyper‑local weather, pest alerts, precision advice and transform extension services (Devendra Fadnavis)
EXPLANATION
Fadnavis argues that artificial intelligence can overcome the limitations of traditional agricultural extension by providing timely, location‑specific information such as weather forecasts, pest warnings, and precise irrigation and fertilizer recommendations. This capability can dramatically improve farmer decision‑making and productivity.
EVIDENCE
He notes that conventional extension systems cannot match the required scale and speed, and then lists AI-enabled services including hyper-local weather predictions, early pest outbreak alerts, precision irrigation and fertilizer guidance, credit scoring based on crop intelligence, transparent traceable supply chains, and real-time market advisories [51-53].
MAJOR DISCUSSION POINT
AI can deliver hyper‑local weather, pest alerts, precision advice and transform extension services
AGREED WITH
Devesh Chaturvedi, Shankar Maruwada
Argument 2
AI must be built on trusted data, ethical governance and public accountability to achieve scale (Devendra Fadnavis)
EXPLANATION
He stresses that the success of AI in agriculture depends on trustworthy data, transparent ethical governance, and mechanisms for public accountability. Without these foundations, large‑scale adoption will be hindered.
EVIDENCE
He cites the Prime Minister’s statement that AI must be built on trusted data, ethical governance, and public accountability, warning that without trust, scaling will not happen [55-57].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
External sources emphasize the need for reliable data, ethical standards and public accountability for AI in agriculture, as highlighted in [S1] and the discussion on trusted AI at scale in [S20].
MAJOR DISCUSSION POINT
AI must be built on trusted data, ethical governance and public accountability to achieve scale
AGREED WITH
Shankar Maruwada, Devesh Chaturvedi, Johannes Zutt
DISAGREED WITH
Shankar Maruwada, Devesh Chaturvedi
Argument 3
Inclusion and gender equity are core pillars of the AI agenda, ensuring women benefit from advisory services (Devendra Fadnavis)
EXPLANATION
Fadnavis highlights gender equity as a fundamental pillar of Maharashtra’s AI strategy, insisting that AI tools must be designed to serve women farmers and address their specific needs. This ensures that the benefits of AI reach all segments of the farming community.
EVIDENCE
During his address he declares inclusion and gender equity a mantra and a core pillar of the 2026 AI agenda, urging that AI solutions be designed with women farmers, not merely for them [76-78].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Gender inclusivity and equity in AI for agriculture are discussed in the IGF 2023 session on leveraging AI to support gender inclusivity [S19].
MAJOR DISCUSSION POINT
Inclusion and gender equity are core pillars of the AI agenda, ensuring women benefit from advisory services
AGREED WITH
Dr. Soumya Swaminathan, Johannes Zutt, Vikas Chandra Rastogi
DISAGREED WITH
Dr. Soumya Swaminathan
Argument 4
Mobilise venture capital, impact investors, multilateral development banks and philanthropic foundations to scale AI platforms (Devendra Fadnavis)
EXPLANATION
He calls for a broad coalition of financial actors—including venture capital, impact investors, multilateral development banks, and philanthropic foundations—to fund and scale AI‑driven agricultural platforms. Such investment is presented as essential for moving from pilots to large‑scale implementation.
EVIDENCE
He explicitly invites venture capital funds, impact investors, multilateral development banks, corporate innovation arms, and philanthropic foundations to partner with Maharashtra for scaling AI solutions [76-78].
MAJOR DISCUSSION POINT
Mobilise venture capital, impact investors, multilateral development banks and philanthropic foundations to scale AI platforms
AGREED WITH
Johannes Zutt, Shankar Maruwada, Vikas Chandra Rastogi
Devesh Chaturvedi
1 argument · 174 words per minute · 1183 words · 406 seconds
Argument 1
Integrated AI platform (BharatVistar/Mahavistar) consolidates advisories, schemes and market data, eliminating digital “red‑tapism” (Devesh Chaturvedi)
EXPLANATION
Chaturvedi explains that the new AI‑based system unifies multiple agricultural services—weather, crop, pest advisories, market rates, and government schemes—into a single platform, removing the need for farmers to navigate numerous separate apps. This integration tackles the problem of digital “red‑tapism”.
EVIDENCE
He describes how different schemes previously required separate apps, causing farmers to get lost, and how the AI platform now brings advisories, weather, scheme information, and market rates together in one place, eliminating the fragmented digital experience [122-129] and [130-132].
MAJOR DISCUSSION POINT
Integrated AI platform (BharatVistar/Mahavistar) consolidates advisories, schemes and market data, eliminating digital “red‑tapism”
Shankar Maruwada
3 arguments · 134 words per minute · 1271 words · 567 seconds
Argument 1
Open Digital Public Infrastructure (farmer IDs, Agri‑Stack) provides a trusted data foundation for AI personalization (Shankar Maruwada)
EXPLANATION
Maruwada argues that assigning unique farmer IDs and building an Agri‑Stack creates a reliable, consent‑driven data backbone that enables AI to deliver highly personalized advice. This infrastructure is essential for scaling AI services responsibly.
EVIDENCE
He details the creation of nearly 9 crore farmer IDs, explaining that each ID links to crop, land, and soil health data, allowing AI to generate tailored recommendations based on consented information, with full functionality expected within three to six months [135-138].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The importance of open digital public infrastructure such as farmer IDs and Agri-Stack for trusted AI personalization is noted in the agreement on open interoperable systems [S4] and the DPI for AI overview [S24].
MAJOR DISCUSSION POINT
Open Digital Public Infrastructure (farmer IDs, Agri‑Stack) provides a trusted data foundation for AI personalization
AGREED WITH
Devesh Chaturvedi, Vikas Chandra Rastogi, Devendra Fadnavis
DISAGREED WITH
Devendra Fadnavis, Devesh Chaturvedi
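The consent-driven personalization Maruwada describes can be sketched as a minimal flow: a farmer ID resolves to linked crop and soil records, and an advisory is generated only when consent is on file. All names and fields below (FarmerRecord, generate_advisory, the pH rule) are illustrative assumptions, not the actual Agri‑Stack schema or API.

```python
from dataclasses import dataclass

@dataclass
class FarmerRecord:
    # Illustrative fields only; the real Agri-Stack schema is not given in the session.
    farmer_id: str
    crop: str
    soil_ph: float
    consent_advisory: bool  # consent flag gating any AI personalization

def generate_advisory(record: FarmerRecord) -> str:
    """Return a personalized tip only when the farmer has consented."""
    if not record.consent_advisory:
        return "no consent on file: send generic advisory only"
    # Toy threshold rule standing in for an AI recommendation model.
    if record.soil_ph < 6.0:
        return f"{record.crop}: acidic soil, consider liming before sowing"
    return f"{record.crop}: soil pH adequate, follow standard fertilizer schedule"
```

The design point is that consent is checked before any linked data is used, which is what makes the personalization "consent-driven" rather than merely data-driven.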
Argument 2
Open standards, interoperable network architecture (shared “rails”) are essential for seamless data exchange (Shankar Maruwada)
EXPLANATION
He emphasizes that open, interoperable standards—likened to shared railway tracks—are crucial for creating a national data network that can be accessed by any state or private actor. This approach prevents siloed systems and promotes rapid diffusion of innovations.
EVIDENCE
He outlines the principle of open interoperable systems, comparing them to India’s railways that provide a common backbone, and stresses that networks built on open protocols like Beckn enable collaborative data sharing across states and sectors [300-307].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Open standards and interoperable network architectures are advocated for secure data exchange in [S22] and the need for an open, inclusive internet in [S23].
MAJOR DISCUSSION POINT
Open standards, interoperable network architecture (shared “rails”) are essential for seamless data exchange
AGREED WITH
Devendra Fadnavis, Devesh Chaturvedi
Argument 3
Vision of hundreds of diffusion pathways worldwide by 2030, leveraging open, interoperable AI ecosystems (Shankar Maruwada)
EXPLANATION
Maruwada presents a forward‑looking vision where, by 2030, hundreds of distinct diffusion pathways—each driven by different stakeholders and regions—will spread AI solutions globally, mirroring the open‑rail model. This aims to accelerate responsible AI impact at scale.
EVIDENCE
He states that the EkStep Foundation declared a goal for 2030 to see “hundreds, hundreds such diffusion pathways each created by a different set of people in different sectors in different countries and continents” inspiring AI impact at scale [331-332].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The vision of multiple diffusion pathways by 2030 is outlined in the population-scale DPI for AI discussion [S24] and the 100 pathways to 2030 initiative [S25].
MAJOR DISCUSSION POINT
Vision of hundreds of diffusion pathways worldwide by 2030, leveraging open, interoperable AI ecosystems
Johannes Zutt
4 arguments · 146 words per minute · 934 words · 381 seconds
Argument 1
Government must ensure AI advice is scientifically credible, address literacy and connectivity gaps for farmers (Johannes Zutt)
EXPLANATION
Zutt asserts that governments need to guarantee that AI‑driven advisories are backed by sound science and that barriers such as low digital literacy and limited connectivity are tackled. This ensures farmers receive reliable, usable information.
EVIDENCE
He outlines the government’s role in governing AI, ensuring interoperability, providing education and skilling, and guaranteeing that AI advice is scientifically credible, while also highlighting challenges of farmer literacy, limited smartphone capability, and connectivity issues [154-166].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Government responsibilities to ensure scientifically credible AI advice, address digital literacy and connectivity gaps are detailed in [S7], with emphasis on science-based governance in [S26].
MAJOR DISCUSSION POINT
Government must ensure AI advice is scientifically credible, address literacy and connectivity gaps for farmers
AGREED WITH
Shankar Maruwada
Argument 2
Multilingual, low‑tech access (voice, feature‑phone) is needed for illiterate and remote farmers, many of whom are women (Johannes Zutt)
EXPLANATION
He stresses that AI solutions must work on basic feature phones and support voice interaction in multiple local languages to reach illiterate, remote, and predominantly female farmers. Such design ensures inclusivity across linguistic and technological divides.
EVIDENCE
He recounts his own farm background, then notes the need for low-tech solutions such as voice-based interfaces on feature phones, supporting multiple languages, to serve farmers who lack smartphones or stable internet connections [146-166].
MAJOR DISCUSSION POINT
Multilingual, low‑tech access (voice, feature‑phone) is needed for illiterate and remote farmers, many of whom are women
AGREED WITH
Devendra Fadnavis, Dr. Soumya Swaminathan, Vikas Chandra Rastogi
DISAGREED WITH
Devesh Chaturvedi
Argument 3
The World Bank can provide financing, sandbox testing and validation of AI applications, fostering trustworthy deployment (Johannes Zutt)
EXPLANATION
Zutt highlights the World Bank’s capacity to fund AI projects, run sandbox environments for testing, and validate applications, thereby supporting trustworthy and scalable AI deployment in agriculture.
EVIDENCE
He mentions that the World Bank Group can help provide financing, sandbox testing, and validation of AI applications, and also assist in truth-testing information from various apps to ensure usefulness and productivity gains [178-180].
MAJOR DISCUSSION POINT
The World Bank can provide financing, sandbox testing and validation of AI applications, fostering trustworthy deployment
AGREED WITH
Devendra Fadnavis, Shankar Maruwada, Vikas Chandra Rastogi
Argument 4
AI Impact Summit and AI for Agri conference serve as hubs for South‑South knowledge exchange and deeper global collaboration (Johannes Zutt)
EXPLANATION
He points out that the AI Impact Summit and the upcoming AI for Agree conference are platforms for sharing experiences, fostering South‑South learning, and deepening international collaboration on AI for agriculture.
EVIDENCE
He states that India’s position enables it to lead AI development for the global south, and that events like the AI Impact Summit and the AI for Agri conference will facilitate South-South knowledge exchange and broader collaboration [189-197].
MAJOR DISCUSSION POINT
AI Impact Summit and AI for Agri conference serve as hubs for South‑South knowledge exchange and deeper global collaboration
Dr. Soumya Swaminathan
2 arguments · 176 words per minute · 1140 words · 387 seconds
Argument 1
Systematic evaluation, bias mitigation and a “human‑in‑the‑loop” approach safeguard equity and reliability (Dr. Soumya Swaminathan)
EXPLANATION
Swaminathan calls for rigorous evaluation of AI tools, including bias checks and continuous monitoring, while keeping humans in the decision loop to ensure equity, reliability, and alignment with scientific standards.
EVIDENCE
She emphasizes the need for systematic evaluation, bias mitigation, and a “human-in-the-loop” approach, citing the importance of clinical-trial-like testing, feedback loops, and the film “Humans in the Loop” that highlights the role of farmers in questioning algorithms [235-262].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Systematic evaluation, bias mitigation and human-in-the-loop approaches are underscored in the science-focused AI governance analysis [S26].
MAJOR DISCUSSION POINT
Systematic evaluation, bias mitigation and a “human‑in‑the‑loop” approach safeguard equity and reliability
AGREED WITH
Johannes Zutt, Shankar Maruwada, Devendra Fadnavis
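Swaminathan’s “human‑in‑the‑loop” safeguard can be illustrated as a simple confidence gate: advisories the model is unsure about are routed to an expert reviewer instead of being sent automatically. This is a minimal sketch of the general pattern, not any system described in the session; the class and threshold are invented for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class HumanInTheLoopGate:
    """Route low-confidence AI advisories to an expert instead of auto-sending."""
    threshold: float = 0.85  # hypothetical cutoff; real systems would calibrate this
    review_queue: list = field(default_factory=list)

    def route(self, advisory: str, confidence: float) -> str:
        if confidence >= self.threshold:
            return "auto-send"
        # Held back for human validation; reviewer corrections can later
        # feed the kind of bias audits and feedback loops she calls for.
        self.review_queue.append(advisory)
        return "human-review"
```

The queue of held-back advisories doubles as an evaluation dataset: comparing reviewer decisions against model outputs is one concrete way to run the clinical-trial-like testing she advocates.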
Argument 2
AI solutions must centre women farmers, incorporate their land‑ownership data, and reduce drudgery (Dr. Soumya Swaminathan)
EXPLANATION
She argues that AI must be designed to address the specific circumstances of women farmers, including integrating their land‑ownership information and focusing on reducing manual labor burdens.
EVIDENCE
She notes that most women lack land titles, which can exclude them from data-driven services, and stresses that AI should reduce drudgery for women, especially in tribal and remote areas where women handle labor-intensive crops like millets [219-229].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Supporting women farmers and addressing gender gaps in AI solutions is highlighted in the IGF gender inclusivity session [S19].
MAJOR DISCUSSION POINT
AI solutions must centre women farmers, incorporate their land‑ownership data, and reduce drudgery
AGREED WITH
Devendra Fadnavis, Johannes Zutt, Vikas Chandra Rastogi
DISAGREED WITH
Devendra Fadnavis
Vikas Chandra Rastogi
2 arguments · 102 words per minute · 1602 words · 934 seconds
Argument 1
Mahavistar’s feedback loop enables real‑time, farmer‑centred AI improvements (Vikas Chandra Rastogi)
EXPLANATION
Rastogi highlights that Mahavistar incorporates a feedback mechanism allowing farmers to report experiences, which in turn refines AI models and services in near real‑time, ensuring the system remains farmer‑centric.
EVIDENCE
He states that the feedback mechanism built into Mahavistar addresses inclusivity and ensures continuous improvement based on farmer input, and mentions ongoing collaboration with the Swaminathan Foundation on women’s rights and nutritional security [267-269].
MAJOR DISCUSSION POINT
Mahavistar’s feedback loop enables real‑time, farmer‑centred AI improvements
Argument 2
Collaboration with MSSRF aims to embed women’s rights and nutritional security into AI‑driven agriculture (Vikas Chandra Rastogi)
EXPLANATION
Rastogi notes that the Maharashtra government is partnering with the M. S. Swaminathan Research Foundation to integrate women’s rights and nutritional security considerations into AI‑enabled agricultural initiatives.
EVIDENCE
He mentions joint work with Dr. M. S. Swaminathan and his foundation on bringing women’s rights to the centre of farming, creating bio-happiness through universities, and focusing on nutritional security, supported by MSSRF [269-271].
MAJOR DISCUSSION POINT
Collaboration with MSSRF aims to embed women’s rights and nutritional security into AI‑driven agriculture
AGREED WITH
Devendra Fadnavis, Dr. Soumya Swaminathan, Johannes Zutt
Agreements
Agreement Points
Unified, interoperable AI platform that consolidates weather, pest, market, and scheme information, eliminating fragmented digital services
Speakers: Devendra Fadnavis, Devesh Chaturvedi, Shankar Maruwada
AI can deliver hyper‑local weather, pest alerts, precision advice and transform extension services (Devendra Fadnavis)
Integrated AI platform (BharatVistar/Mahavistar) consolidates advisories, schemes and market data, eliminating digital “red‑tapism” (Devesh Chaturvedi)
Open standards, interoperable network architecture (shared “rails”) are essential for seamless data exchange (Shankar Maruwada)
All three speakers stress that AI services should be delivered through a single, open and interoperable platform that brings together hyper-local weather, pest alerts, market rates and scheme information, thereby avoiding the need for farmers to navigate multiple apps [53-58][122-132][300-307].
POLICY CONTEXT (KNOWLEDGE BASE)
National digital agriculture strategies stress interoperable public infrastructure such as Agri-Stack and farmer IDs to enable AI personalization, and call for open data standards to avoid fragmented services [S45][S51][S43].
Trusted data foundations, ethical governance and public accountability are essential for scaling AI in agriculture
Speakers: Devendra Fadnavis, Shankar Maruwada, Devesh Chaturvedi, Johannes Zutt
AI must be built on trusted data, ethical governance and public accountability to achieve scale (Devendra Fadnavis)
Open Digital Public Infrastructure (farmer IDs, Agri‑Stack) provides a trusted data foundation for AI personalization (Shankar Maruwada)
Integrated AI platform … eliminates digital “red‑tapism” … (Devesh Chaturvedi) (implies trusted data via farmer IDs)
Government must ensure AI advice is scientifically credible, address literacy and connectivity gaps for farmers (Johannes Zutt)
The speakers converge on the need for reliable, consent-driven data and transparent, ethical governance structures to build trust and enable AI at scale, highlighting farmer IDs, open standards and scientific credibility [55-57][298-301][135-138][154-166].
POLICY CONTEXT (KNOWLEDGE BASE)
AI-in-agriculture guidelines highlight trusted data, ethical governance and public accountability as core principles, reinforced by algorithmic transparency and rigorous testing mandates [S42][S39][S40][S35].
Inclusion and gender equity must be central to AI‑driven agriculture, ensuring women farmers benefit equally
Speakers: Devendra Fadnavis, Dr. Soumya Swaminathan, Johannes Zutt, Vikas Chandra Rastogi
Inclusion and gender equity are core pillars of the AI agenda, ensuring women benefit from advisory services (Devendra Fadnavis) AI solutions must centre women farmers, incorporate their land‑ownership data, and reduce drudgery (Dr. Soumya Swaminathan) Multilingual, low‑tech access (voice, feature‑phone) is needed for illiterate and remote farmers, many of whom are women (Johannes Zutt) Collaboration with MSSRF aims to embed women’s rights and nutritional security into AI‑driven agriculture (Vikas Chandra Rastogi)
All four speakers highlight that AI initiatives must be designed to reach women farmers, address land-ownership gaps, reduce manual labor, and be accessible via low-tech, multilingual interfaces, reflecting a shared commitment to gender-inclusive development [76-78][219-229][146-166][267-269].
POLICY CONTEXT (KNOWLEDGE BASE)
Policy briefs from the Maharashtra AI for Agriculture initiative and IGF 2023 discussions explicitly commit to gender-responsive AI solutions and equitable benefits for women farmers [S44][S38][S50].
Open, interoperable digital public infrastructure (farmer IDs, Agri‑Stack, Maha AgEx) is the backbone for AI personalization and data exchange
Speakers: Shankar Maruwada, Devesh Chaturvedi, Vikas Chandra Rastogi, Devendra Fadnavis
Open Digital Public Infrastructure (farmer IDs, Agri‑Stack) provides a trusted data foundation for AI personalization (Shankar Maruwada) Integrated AI platform … (Devesh Chaturvedi) (mentions farmer IDs and Agri‑Stack) Collaboration … Maha AgEx … bring diverse data sets … (Vikas Chandra Rastogi) The Maha AgEx, which is an open, federated and consent‑driven architecture for data exchange … (Devendra Fadnavis)
Speakers agree that a nationwide, open, consent-driven data architecture, embodied in farmer IDs, the Agri-Stack and the Maha AgEx, provides the essential foundation for scalable, personalized AI services [135-138][183-184][25-26].
POLICY CONTEXT (KNOWLEDGE BASE)
Government documents identify open, interoperable digital public infrastructure, including farmer IDs, Agri-Stack and Maha AgEx, as foundational for AI-driven personalization and data exchange [S45][S51][S43].
Mobilising diverse financing and partnership models (venture capital, impact investors, multilateral development banks, World Bank) is crucial for scaling AI solutions
Speakers: Devendra Fadnavis, Johannes Zutt, Shankar Maruwada, Vikas Chandra Rastogi
Mobilise venture capital, impact investors, multilateral development banks and philanthropic foundations to scale AI platforms (Devendra Fadnavis) The World Bank can provide financing, sandbox testing and validation of AI applications, fostering trustworthy deployment (Johannes Zutt) Collaboration … with many partners … (Shankar Maruwada) We invite venture capital funds, impact investors, multilateral development banks, corporate innovation arms, and philanthropic foundations … (Vikas Chandra Rastogi)
All four participants call for a broad coalition of public and private financing, including venture capital, impact investors, multilateral banks and development agencies, to move AI projects from pilots to large-scale deployment [76-78][178-180][286-287][78-80].
POLICY CONTEXT (KNOWLEDGE BASE)
Multiple policy analyses advocate blended financing (venture capital, impact investors and multilateral development banks) to scale AI in agriculture, citing World Bank-led partnership frameworks and public-private funding mechanisms [S53][S55][S56][S54].
Capacity building, digital literacy and low‑tech solutions are needed to reach illiterate, remote and resource‑constrained farmers
Speakers: Johannes Zutt, Shankar Maruwada
Government must ensure AI advice is scientifically credible, address literacy and connectivity gaps for farmers (Johannes Zutt) Design … for an illiterate farmer … voice on feature phone … (Shankar Maruwada)
Both speakers stress that AI services must be accessible via simple, voice-based interfaces on feature phones and that literacy and connectivity gaps must be addressed through capacity-building initiatives [154-166][286-287].
POLICY CONTEXT (KNOWLEDGE BASE)
ICT policy assessments and inclusion studies stress the need for capacity-building, digital-literacy programmes and low-tech channels (radio, SMS) to serve illiterate and remote farmers [S38][S46][S49][S51][S48].
Rigorous evaluation, bias mitigation and human‑in‑the‑loop mechanisms are needed to ensure equitable and reliable AI outcomes
Speakers: Dr. Soumya Swaminathan, Johannes Zutt, Shankar Maruwada, Devendra Fadnavis
Systematic evaluation, bias mitigation and a “human‑in‑the‑loop” approach safeguard equity and reliability (Dr. Soumya Swaminathan) World Bank … truth test … ensure usefulness … (Johannes Zutt) Iterative process, feedback loops … improve over time … (Shankar Maruwada) AI must be built on trusted data, ethical governance and public accountability … (Devendra Fadnavis)
The speakers converge on the necessity of continuous, scientific evaluation, bias checks, and keeping humans in decision loops to maintain trust, equity and effectiveness of AI tools [235-262][178-180][295-298][55-57].
POLICY CONTEXT (KNOWLEDGE BASE)
Safe and Responsible AI at Scale frameworks and algorithmic transparency recommendations call for rigorous evaluation, bias mitigation and human-in-the-loop safeguards for reliable deployment [S40][S39][S38][S41].
Similar Viewpoints
Both emphasize that AI should provide comprehensive, hyper‑local advisory services through a single integrated platform, replacing fragmented extension systems [53-58][122-132].
Speakers: Devendra Fadnavis, Devesh Chaturvedi
AI can deliver hyper‑local weather, pest alerts, precision advice and transform extension services (Devendra Fadnavis) Integrated AI platform (BharatVistar/Mahavistar) consolidates advisories, schemes and market data, eliminating digital ‘red‑tapism’ (Devesh Chaturvedi)
Both argue that trustworthy, consent‑driven data infrastructure is a prerequisite for scaling AI responsibly in agriculture [55-57][298-301].
Speakers: Devendra Fadnavis, Shankar Maruwada
AI must be built on trusted data, ethical governance and public accountability to achieve scale (Devendra Fadnavis) Open Digital Public Infrastructure (farmer IDs, Agri‑Stack) provides a trusted data foundation for AI personalization (Shankar Maruwada)
Both stress the government’s role in ensuring scientific credibility, ethical governance and accountability for AI deployments [55-57][154-166].
Speakers: Devendra Fadnavis, Johannes Zutt
AI must be built on trusted data, ethical governance and public accountability to achieve scale (Devendra Fadnavis) Government must ensure AI advice is scientifically credible, address literacy and connectivity gaps for farmers (Johannes Zutt)
Both highlight gender equity as essential, calling for AI designs that specifically address women farmers’ needs and rights [76-78][219-229].
Speakers: Devendra Fadnavis, Dr. Soumya Swaminathan
Inclusion and gender equity are core pillars of the AI agenda, ensuring women benefit from advisory services (Devendra Fadnavis) AI solutions must centre women farmers, incorporate their land‑ownership data, and reduce drudgery (Dr. Soumya Swaminathan)
Both underline the importance of farmer IDs and the Agri‑Stack as the data backbone for AI personalization and service delivery [135-138].
Speakers: Shankar Maruwada, Devesh Chaturvedi
Open Digital Public Infrastructure (farmer IDs, Agri‑Stack) provides a trusted data foundation for AI personalization (Shankar Maruwada) Integrated AI platform … farmer IDs … (Devesh Chaturvedi)
Both advocate for low‑tech, voice‑based, multilingual interfaces to reach illiterate and remote farmers [146-166][286-287].
Speakers: Johannes Zutt, Shankar Maruwada
Multilingual, low‑tech access (voice, feature‑phone) is needed for illiterate and remote farmers, many of whom are women (Johannes Zutt) Design … for an illiterate farmer … voice on feature phone … (Shankar Maruwada)
Both call for rigorous testing, validation and human oversight to ensure AI tools are reliable and equitable [235-262][178-180].
Speakers: Dr. Soumya Swaminathan, Johannes Zutt
Systematic evaluation, bias mitigation and a “human‑in‑the‑loop” approach safeguard equity and reliability (Dr. Soumya Swaminathan) World Bank … truth test … ensure usefulness … (Johannes Zutt)
Unexpected Consensus
Gender equity and women’s empowerment as a central pillar of AI policy
Speakers: Devendra Fadnavis, Dr. Soumya Swaminathan
Inclusion and gender equity are core pillars of the AI agenda, ensuring women benefit from advisory services (Devendra Fadnavis) AI solutions must centre women farmers, incorporate their land‑ownership data, and reduce drudgery (Dr. Soumya Swaminathan)
Despite coming from a political leadership perspective (Devendra Fadnavis) and a medical research background (Dr. Soumya Swaminathan), both converge on making gender equity a foundational element of AI-driven agriculture, which was not an obvious alignment given their different domains [76-78][219-229].
POLICY CONTEXT (KNOWLEDGE BASE)
Gender-equity pillars are highlighted in recent AI-for-agriculture policy statements, including the Maharashtra AI programme and global inclusion agendas at IGF 2023 [S44][S38][S50].
Private‑sector foundation advocating for open, public‑good style data standards alongside government calls for the same
Speakers: Shankar Maruwada, Devendra Fadnavis
Open standards, interoperable network architecture (shared “rails”) are essential for seamless data exchange (Shankar Maruwada) The Maha AgEx, which is an open, federated and consent‑driven architecture for data exchange … (Devendra Fadnavis)
It is unexpected that a private-sector founder (Shankar Maruwada) and a state minister (Devendra Fadnavis) both champion open, interoperable data infrastructures modeled on public-good principles, indicating cross-sector convergence on governance models [300-307][25-26].
POLICY CONTEXT (KNOWLEDGE BASE)
International discussions on open AI foundations and shared resources note growing alignment between private-sector initiatives and government calls for public-good data standards [S54][S55][S56].
Overall Assessment

The panel displayed strong consensus across multiple dimensions: the need for a unified, open AI platform; the centrality of trusted data and ethical governance; gender‑inclusive design; open digital public infrastructure; diversified financing; capacity building for low‑tech access; and rigorous evaluation with human oversight.

High consensus – the convergence of viewpoints among government officials, international experts, researchers and private‑sector innovators suggests a solid foundation for coordinated policy action and implementation, enhancing the likelihood of scalable, equitable AI deployment in agriculture.

Differences
Different Viewpoints
How to ensure trustworthy data and governance for AI in agriculture
Speakers: Devendra Fadnavis, Shankar Maruwada, Devesh Chaturvedi
AI must be built on trusted data, ethical governance and public accountability to achieve scale (Devendra Fadnavis) Open Digital Public Infrastructure (farmer IDs, Agri‑Stack) provides a trusted data foundation for AI personalization (Shankar Maruwada) Integrated AI platform (BharatVistar/Mahavistar) consolidates advisories, schemes and market data, eliminating digital ‘red‑tapism’ (Devesh Chaturvedi)
Fadnavis stresses that AI can only scale if there is upfront trusted data, transparent ethical governance and public accountability mechanisms [55-57]. Maruwada focuses on building an open, consent-driven infrastructure (farmer IDs, Agri-Stack) that will gradually become trustworthy as data improves, emphasizing openness and iterative improvement rather than strict governance controls [135-138][300-307]. Chaturvedi highlights the need to eliminate fragmented apps by integrating services into a single AI platform, pointing to the problem of “digital red-tapism” and the practical benefits of consolidation, but does not elaborate on governance frameworks [122-132]. The three speakers therefore differ on whether the priority should be strong governance safeguards before scaling, or building open infrastructure first and refining governance over time.
POLICY CONTEXT (KNOWLEDGE BASE)
Ensuring trustworthy data and governance is a recurring policy challenge, addressed in AI security governance debates, algorithmic transparency mandates and national AI-in-agriculture guidelines [S35][S39][S40][S42][S45].
Concrete mechanisms to include women farmers in AI‑driven agriculture
Speakers: Devendra Fadnavis, Dr. Soumya Swaminathan
Inclusion and gender equity are core pillars of the AI agenda, ensuring women benefit from advisory services (Devendra Fadnavis) AI solutions must centre women farmers, incorporate their land‑ownership data, and reduce drudgery (Dr. Soumya Swaminathan)
Fadnavis declares gender equity a mantra and calls for AI solutions to be designed for women farmers, but provides no specific operational steps [76-78]. Swaminathan stresses that most women lack land titles, which can exclude them from data-driven services, and argues that AI must explicitly integrate women’s land-ownership information and aim to reduce manual labour burdens, especially for tribal women growing millets [219-229]. The disagreement lies in the level of detail: a high-level commitment versus a demand for concrete data-inclusion mechanisms.
POLICY CONTEXT (KNOWLEDGE BASE)
Concrete inclusion mechanisms, such as gender-responsive data collection, targeted extension services and women-focused AI pilots, are outlined in the Maharashtra AI for Agriculture programme and gender-equity recommendations at IGF 2023 [S44][S38][S50].
Preferred technology channel for reaching illiterate and remote farmers
Speakers: Johannes Zutt, Devesh Chaturvedi
Multilingual, low-tech access (voice, feature-phone) is needed for illiterate and remote farmers, many of whom are women (Johannes Zutt) Integrated AI platform (BharatVistar/Mahavistar) delivers advisories via app and voice in multiple languages, but the primary focus is on app-based delivery [130-134]
Zutt argues that AI solutions must work on basic feature phones with voice interaction and support many local languages to reach illiterate, remote, and women farmers [154-166]. Chaturvedi describes the AI platform as an app-based service that currently works in English and Hindi, with plans to add more languages and voice capability in the next three to six months, indicating a later, secondary priority for low-tech access [130-134]. The tension is between prioritising low-tech, voice-first solutions versus an app-centric rollout.
POLICY CONTEXT (KNOWLEDGE BASE)
Studies on ICT adoption in agriculture identify radio, SMS and community-based platforms as effective low-tech channels for illiterate and remote farmers, highlighting infrastructure gaps and channel preferences [S46][S49][S48][S47].
Unexpected Differences
Emphasis on immediate versus iterative deployment of AI services
Speakers: Devendra Fadnavis, Shankar Maruwada
AI must be built on trusted data, ethical governance and public accountability to achieve scale (Devendra Fadnavis) Open standards and interoperable networks allow rapid diffusion of innovations, with a minimum viable AI deployed first and improved over time (Shankar Maruwada)
Fadnavis calls for strong governance and trust before large-scale rollout, whereas Maruwada advocates launching a minimal AI solution quickly and iteratively enhancing it as data and systems improve, which is a surprising contrast in deployment philosophy [55-57][316-319].
POLICY CONTEXT (KNOWLEDGE BASE)
The tension between rapid, immediate deployment and cautious, iterative rollout is reflected in consensus on pragmatic AI adoption, calls to move beyond pilots, and regulatory debates on implementation speed [S34][S36][S37][S42].
Overall Assessment

The panel shows broad consensus on the promise of AI for agriculture, the need for open interoperable infrastructure, and the importance of inclusive design. However, key disagreements emerge around the sequencing of governance versus rapid deployment, the concrete mechanisms for women’s inclusion, and the priority of low‑tech voice solutions versus app‑centric platforms.

Moderate – while all participants share the same overarching goal, the differing views on governance rigor, implementation pathways, and inclusion specifics could affect policy design and rollout timelines, requiring careful coordination to reconcile these perspectives.

Partial Agreements
All speakers agree that AI has the potential to transform agricultural extension by providing timely, localized information, but they differ on the primary means to achieve this—whether through open interoperable infrastructure, integrated platforms, or government‑led credibility and connectivity measures [51-53][300-307][122-132][154-166].
Speakers: Devendra Fadnavis, Shankar Maruwada, Devesh Chaturvedi, Johannes Zutt
AI can deliver hyper‑local weather, pest alerts, precision advice and transform extension services (Devendra Fadnavis) Open standards, interoperable network architecture (shared “rails”) are essential for seamless data exchange (Shankar Maruwada) Integrated AI platform consolidates advisories, schemes and market data, eliminating digital ‘red‑tapism’ (Devesh Chaturvedi) Government must ensure AI advice is scientifically credible, address literacy and connectivity gaps (Johannes Zutt)
All agree that financing and partnership with private investors, multilateral institutions and foundations are essential for scaling AI, though Fadnavis emphasizes a broad coalition, Maruwada stresses the role of open infrastructure as a foundation, and Zutt highlights the World Bank’s specific financing and sandbox role [76-78][135-138][178-180].
Speakers: Devendra Fadnavis, Shankar Maruwada, Johannes Zutt
Mobilise venture capital, impact investors, multilateral development banks and philanthropic foundations to scale AI platforms (Devendra Fadnavis) Open Digital Public Infrastructure provides a trusted data foundation for AI personalization (Shankar Maruwada) The World Bank can provide financing, sandbox testing and validation of AI applications (Johannes Zutt)
Takeaways
Key takeaways
AI can transform Indian agriculture by delivering hyper‑local weather forecasts, pest alerts, precision irrigation/fertiliser advice and credit scoring, thereby augmenting traditional extension services.
Maharashtra’s Maha Agri‑AI Policy (2025‑2029) and the AI‑powered platforms Mahavistar and BharatVistar consolidate advisories, market data and scheme information into a single, multilingual, voice‑enabled service for over 2.5 million farmers.
Open Digital Public Infrastructure – farmer IDs, Agri‑Stack and the Maha AgEx data exchange – provides a trusted, consent‑driven data foundation that enables personalized AI advice and interoperable data sharing across states and sectors.
Responsible AI governance is essential: data must be trustworthy, and AI systems transparent, auditable, explainable and designed with a “human‑in‑the‑loop” to ensure scientific credibility and mitigate bias.
Inclusion and gender equity are core pillars; AI solutions must incorporate women’s land‑ownership data, reduce drudgery, and be accessible via low‑tech channels (voice, feature phones) in multiple local languages.
Scaling requires coordinated investment from venture capital, impact investors, multilateral development banks and philanthropic foundations, together with a clear regulatory framework and single‑window facilitation.
Global collaboration and South‑South knowledge exchange are critical; the AI Impact Summit and the upcoming AI for Agree conference will serve as platforms for sharing use cases, best practices and financing mechanisms.
Resolutions and action items
Launch and operationalise the Maha Agri‑AI Policy 2025‑2029, expanding Mahavistar/BharatVistar to cover all Indian languages within the next 3‑6 months.
Scale the Maha AgEx interoperable data exchange to integrate diverse agricultural datasets (crop surveys, soil health cards, scheme eligibility) and make them available to researchers, startups and state platforms.
Invite venture capital funds, impact investors, multilateral development banks, corporate innovation arms and philanthropic foundations to partner in scaling AI advisory platforms, traceability DPI modules and agri‑tech startups.
Conduct systematic pilot‑to‑platform transitions, with continuous feedback loops (e.g., Mahavistar’s farmer feedback) to iteratively improve AI models and ensure they remain farmer‑centred.
Embed women’s land‑ownership and livelihood data into the Agri‑Stack early on, and design AI services that specifically address women’s drudgery and market‑linkage needs.
Establish a “human‑in‑the‑loop” governance mechanism involving extension workers, farmer groups (FPOs, SHGs) and scientific bodies to validate AI recommendations and monitor bias.
Organise and promote participation in the AI for Agree Global Conference (22‑23 Feb 2026, Mumbai) and related AI Impact Summit sessions to showcase Indian innovations and foster South‑South exchange.
Unresolved issues
How to effectively bridge digital literacy and connectivity gaps for illiterate, low‑asset farmers who rely on basic feature phones.
Concrete mechanisms for ensuring scientific validation and credibility of AI‑generated advisories across diverse agro‑ecological zones.
A detailed framework for data privacy, consent management and protection of farmer data, especially for women who may not be listed on land records.
Long‑term financing models and sustainability plans for maintaining and updating AI platforms beyond initial pilot funding.
Specific standards and protocols for AI model interoperability across states and private‑sector applications (beyond the high‑level “open rails” concept).
Metrics and evaluation criteria to monitor gender‑equity outcomes, bias mitigation and overall impact on farmer incomes and climate resilience.
Suggested compromises
Adopt an open, interoperable “rail” architecture that allows states to retain flexibility for local innovation while adhering to national AI standards and data‑governance rules.
Deploy a minimum viable AI solution first (e.g., voice‑based advisory on feature phones) and iteratively enhance functionality as data quality and user adoption improve.
Balance private‑sector creativity (“let a thousand flowers bloom”) with public‑sector oversight and financing to ensure equitable access and prevent market capture.
Combine AI‑driven advice with existing human extension services, using AI as an augmentation tool rather than a replacement, to preserve employment and local trust.
Thought Provoking Comments
AI is not magic. As the Prime Minister said, AI must be built on trusted data, ethical governance and public accountability. Without trust, scale will not happen.
Highlights that technology alone cannot deliver impact; institutional trust and ethical frameworks are prerequisites for large‑scale adoption, reframing AI from a purely technical solution to a governance challenge.
Set the tone for the rest of the panel, prompting other speakers to discuss data governance, transparency, and the need for trustworthy AI systems. It led directly to Devesh Chaturvedi’s explanation of ‘digital red‑tapism’ and Johannes Zutt’s focus on the credibility of AI advice.
Speaker: Devendra Fadnavis (Chief Minister, Maharashtra)
We are unveiling a blueprint for a traceability DPI that will ensure end‑to‑end visibility across value chains, enhancing food safety, export competitiveness and consumer trust – and it is not proprietary, but a replicable public‑infrastructure model for India and the global south.
Introduces the concept of an open, public‑good traceability system, positioning India as a model for other developing nations and shifting the conversation from isolated pilots to systemic, scalable infrastructure.
Prompted discussion on interoperability and open standards, influencing Shankar Maruwada’s later analogy of railways as shared rails and reinforcing the panel’s emphasis on open, federated architectures.
Speaker: Devendra Fadnavis
AI solutions must be designed with women farmers, not merely for them – 2026 is the International Year of Women in Agriculture, and gender equity must be a core mantra.
Explicitly brings gender inclusion into the AI agenda, moving the dialogue beyond technical deployment to social equity and prompting deeper examination of data gaps and design biases.
Triggered Dr. Soumya Swaminathan’s detailed remarks on women’s land‑ownership data, the need for early incorporation of women’s data, and the concept of ‘humans in the loop’, thereby deepening the gender‑focused segment of the discussion.
Speaker: Devendra Fadnavis
We realized that having many separate apps for different schemes created a new form of ‘digital red‑tapism’. A single AI‑powered platform that aggregates weather, pest alerts, market rates and scheme information can cut through this fragmentation.
Diagnoses a systemic problem—fragmented digital services—and proposes a concrete, user‑centric solution, shifting the conversation from abstract policy to practical system design.
Validated the Chief Minister’s earlier trust‑building point and led to further elaboration on farmer IDs and consent‑driven data exchange, influencing Johannes Zutt’s remarks on the need for unified, trustworthy platforms.
Speaker: Devesh Chaturvedi (Secretary, Ministry of Agriculture & Farmers’ Welfare)
The government’s role includes ensuring that educational programs provide the digital skilling needed for AI tools, and that the research and extension services behind those tools are scientifically credible and trustworthy.
Broadens the discussion to capacity building and scientific validation, emphasizing that AI efficacy depends on both user literacy and the quality of underlying data and models.
Spurred the panel to consider the human capacity gap, leading to references to digital literacy, the need for ‘truth‑testing’ AI outputs, and reinforcing Dr. Swaminathan’s call for rigorous evaluation and feedback loops.
Speaker: Johannes Zutt (World Bank Regional Vice President)
We can act as a ‘sandbox’ to truth‑test the information coming from diverse AI applications, ensuring they actually improve farm‑level productivity before scaling.
Introduces a concrete mechanism—an AI sandbox—for validation and risk mitigation, adding a layer of accountability to the deployment pipeline.
Influenced subsequent dialogue on evaluation frameworks, echoed in Dr. Swaminathan’s emphasis on clinical‑trial‑like assessments and Shankar Maruwada’s point about deploying a minimum viable AI system and iterating.
Speaker: Johannes Zutt
Women often lack land titles, so publicly available data will miss them. We must think early about how to incorporate women’s data, otherwise AI advisories will be irrelevant for a large share of farmers.
Spotlights a concrete data bias that could undermine AI’s inclusivity, linking gender equity to technical data collection and model training practices.
Deepened the gender equity thread, prompting the panel to discuss data collection reforms, feedback mechanisms, and the necessity of women’s representation in advisory committees.
Speaker: Dr. Soumya Swaminathan (Chair, MSSRF)
Open, interoperable systems are like India’s railways – a common backbone that lets any state or private player plug in services. We should avoid siloed portals and instead build shared rails using open protocols like Beckn.
Provides a vivid analogy that reframes digital architecture as a national commons, emphasizing openness, scalability, and collaborative innovation.
Served as a turning point that unified earlier points about openness, interoperability, and governance, leading to consensus on building AI on the same DPI foundations and encouraging cross‑sector collaboration.
Speaker: Shankar Maruwada (Co‑founder & CEO, EkStep Foundation)
Deploy AI minimally first, then let data, usage, and models improve over time – a ‘minimum viable AI’ approach rather than perfecting technology before launch.
Challenges the common tendency to over‑engineer solutions before field testing, advocating an iterative, learning‑by‑doing methodology suitable for large, diverse populations.
Shifted the conversation from perfectionism to pragmatic scaling, reinforcing earlier calls for pilots to become platforms and influencing the concluding vision of rapid, responsible diffusion.
Speaker: Shankar Maruwada
Overall Assessment

The discussion was driven forward by a series of pivotal remarks that moved the conversation from high‑level enthusiasm to concrete, inclusive, and governance‑focused strategies. Early statements about trust, open traceability, and gender‑centric design set the agenda, prompting participants to unpack systemic challenges such as fragmented digital services, data bias, and capacity gaps. The introduction of validation mechanisms (AI sandbox) and the railway analogy for interoperable infrastructure crystallized a shared vision of an open, iterative, and accountable AI ecosystem. Collectively, these thought‑provoking comments shaped a narrative that emphasized responsible scaling, public‑good infrastructure, and equitable outcomes, steering the panel toward actionable commitments rather than abstract promises.

Follow-up Questions
How can high‑quality, robust data be collected, integrated and shared to support AI models for agriculture?
Both emphasized that AI requires reliable data and highlighted the need for better data pipelines and standards.
Speaker: Vikas Chandra Rastogi, Johannes Zutt
How can we ensure that women farmers’ data and land‑ownership information are incorporated into AI‑driven advisory systems?
She pointed out that most women lack land titles, risking exclusion from data‑based services, and called for mechanisms to capture women’s data.
Speaker: Dr. Soumya Swaminathan
What evaluation frameworks and indicators should be established to monitor bias, exclusion, and unintended risks in AI agricultural applications?
She advocated for clinical‑trial‑style assessments, bias audits, and continuous feedback loops to guarantee equitable outcomes.
Speaker: Dr. Soumya Swaminathan
How can we maintain a human‑in‑the‑loop approach and address employment impacts while scaling AI solutions for farmers?
He warned that full automation could displace jobs and stressed the need for human oversight and livelihood protection.
Speaker: Johannes Zutt
What strategies are needed to improve connectivity and device accessibility for smallholder farmers with limited assets or basic phones?
He highlighted that many farmers lack smartphones or reliable internet, which could limit AI service reach.
Speaker: Johannes Zutt
How should AI ecosystems be standardized—what architecture, open standards, and governance principles are required to ensure interoperability, trust and sustainability across sectors?
He called for open, interoperable networks (like DPI) and shared protocols to enable scalable AI deployment.
Speaker: Shankar Maruwada
How can AI platforms be designed for illiterate or low‑literacy farmers, using voice and local dialects to ensure inclusive access?
He described the need for voice‑based, multilingual interfaces that work on feature phones for the most marginalized users.
Speaker: Shankar Maruwada
What mechanisms and investment models can accelerate the transition from AI pilots to population‑scale platforms in agriculture?
He mentioned the need for venture capital, impact investors, and public‑private partnerships to move from demonstration to execution.
Speaker: Devendra Fadnavis
How can the AI Impact Summit and AI4Agree Global Conference be leveraged to deepen South‑South knowledge exchange and collaborative AI‑for‑agri initiatives?
He noted India’s role as a testbed and the importance of structured platforms for sharing lessons across developing nations.
Speaker: Johannes Zutt

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

AI for Social Good Using Technology to Create Real-World Impact

Session at a glance – Summary, keypoints, and speakers overview

Summary

The summit opened with a focus on using artificial intelligence to achieve population-scale impact across sectors such as education, health and agriculture, emphasizing that such impact requires built-in coordination mechanisms [1-3]. James Manyika highlighted Google’s belief that universal access to AI is essential and cited the AlphaFold breakthrough, noting that its open protein-structure database is now used by over three million researchers in 190 countries, with India ranking fourth among adopters [10-16].


He argued that expanding access hinges on digital public infrastructure and open networks, which provide the coordination layer that turns human intent into real-world action, and pointed to India’s UPI and Bhashini systems as leading examples [17-21]. Google’s partnership on Project Vani, delivering free speech data for more than 100 Indic languages through the government’s Bhashini mission, was presented as a concrete step toward linguistic inclusion [23-25]. The company also described a Gemini-powered open network piloted in Uttar Pradesh that gives smallholder farmers multilingual AI agents for credit and crop prediction, illustrating how open, decentralized architectures can scale agricultural support [30-33][35-40].


Nandan Nilekani reinforced that AI, as a general-purpose technology, spreads most rapidly when built on open networks that lower transaction complexity for users, citing UPI’s open payment architecture and multilingual agents that enable inclusive services [73-82][84-95]. Sangbu Kim explained that the World Bank’s AgriConnect uses an open-stack, user-centric model and is being prepared for expansion into health and education, aiming to become a universal network for multiple sectors [102-108]. Kiran Mazumdar-Shaw outlined a “health stack” that aggregates phenotypic, genomic and demographic data, arguing that AI can risk-profile populations and integrate insurance, while also envisioning a convergence of biological and artificial intelligence to enable predictive, preventive medicine and even cellular reprogramming [115-129][133-149].


Sunil Wadhwani emphasized that India’s digital public infrastructure supplies both data pipelines and distribution channels, enabling AI-driven solutions such as cough-sound TB screening that raised case detection by 25% and a low-cost reading-assessment tool reaching millions of students [170-207][208-220]. He warned that affordable AI inference is critical for scale, illustrating how plugging Google’s improved weather model into an open AgriConnect network could instantly benefit ten million farmers, and noting the need to reduce inference costs to enable mass adoption [246-255][264-268].


In a rapid “lightning round,” Nilekani called for massive diffusion of AI applications to farmers worldwide, Mazumdar-Shaw urged the establishment of sustainable, universal health-care standards, and Wadhwani reported growing demand from governments in the Global South for India-originated AI platforms [301-308][311-313][316-322]. The moderator concluded that the discussion demonstrated AI’s benefits can only be realized through open networks that serve everyone, and invited participants to apply for Google.org impact challenges to further drive population-scale change [340-345].


Keypoints


Major discussion points


Open digital public infrastructure (DPI) and open networks are the essential coordination layer that lets AI turn human intent into real-world action at population scale.


James Manyika stresses that “digital public infrastructure and open networks … provide the coordination layer” [19-21]; Nandan Nilekani cites UPI as an “open network for payments” that enabled massive growth [77-78]; Sunil Wadhwani explains that DPI supplies both data pipelines and distribution channels needed for AI models to reach citizens [170-174].


AI-driven multilingual agents and language-localisation are seen as the primary vehicle for inclusive diffusion, especially for farmers and informal workers.


Manyika notes the need to “ensure that the digital divide does not become an AI divide” and highlights multilingual AI agents for agriculture [30-33]; Nilekani expands on this, describing how agents that “remove complexity for the user” and operate in local languages can achieve massive inclusion [80-95].


Sector-specific pilots demonstrate AI’s multiplier effect in agriculture, health, and education, and are presented as models for global replication.


– Agriculture: Gemini-powered open network in Uttar Pradesh provides credit and crop-prediction services [30-33]; the same network can ingest Google’s improved weather model to serve 10 million farmers [252-254].


– Health: AI-based cough-sound TB screening and same-day lab automation increased TB detection by 25% nationally [196-200]; AI-enabled ASHA workers are mentioned as a “powerful” deployment [130-132].


– Education: A 20-second speech-based reading diagnostic reaches millions of children at a cost of five paise per student, now being mandated across several Indian states [208-214][218-220].


Cost-effective, decentralized AI inference is identified as a critical bottleneck that must be solved before true population-scale impact can be realized.


Nilekani argues that “the cost of AI inference has to drop dramatically” for mass adoption [246-249]; he illustrates how an open network lets new models (e.g., weather forecasts) be plugged in instantly for millions of users [252-255]; Manyika reinforces that cheap inference combined with agent-based interfaces is the key to diffusion [243-245][261-263].


India’s open-network blueprint is positioned as a replicable global template, with a push to export standards and solutions to other developing regions.


Manyika describes the “blueprint born right here in India” being scaled to Brazil, Nigeria, Ethiopia, Kenya [28-34]; Sangbu Kim talks about extending AgriConnect to health and education and adapting the model to three African countries, Brazil and the Philippines [108-115][229-236]; Sunil Wadhwani notes rising interest from governments across the Global South seeking to adopt India’s AI platforms [321-323].


Overall purpose / goal of the discussion


The summit convenes global leaders to argue that open, interoperable digital public infrastructure combined with AI can create a universal coordination rail, enabling rapid, equitable delivery of AI-powered services in education, healthcare, agriculture and beyond. By showcasing India’s successes and outlining concrete pilots, the participants aim to catalyze worldwide adoption of these open-network models and invite further collaboration (e.g., Google.org Impact Challenge) [1-4][337-345].


Overall tone and its evolution


The conversation maintains a highly optimistic, collaborative, and forward-looking tone throughout. It begins with a visionary framing by the moderator and James Manyika, moves into enthusiastic sharing of concrete successes by each panelist, and repeatedly emphasizes partnership (“we’re building together”, “we’re proud to be partners”) [8-12][28-34][164-170]. Even when technical challenges (cost of inference, data-sharing reluctance) are raised, the tone stays constructive, focusing on solutions rather than criticism. The closing lightning-round and applause reinforce a celebratory, hopeful mood, ending on a note of collective commitment rather than doubt.


Speakers

James Manyika – Senior Vice President, Google (Alphabet); Co-Chair of the UN Secretary-General’s High-Level Advisory Body on AI. [S1][S3]


Moderator (Ashwani) – Conference moderator.


Kiran Mazumdar-Shaw – Chairperson, Biocon Group; pioneering biotech entrepreneur, healthcare visionary, and philanthropist. [S7]


Sangbu Kim – Vice President for Digital and AI, World Bank; leads digital economy growth, infrastructure, cybersecurity, data-privacy, and modernization of government services. [S10]


Nandan Nilekani – Co-founder and Chairman, Infosys; global leader in digital public infrastructure and co-founder of Networks for Humanity. [S14]


Sunil Wadhwani – Founder and Director, Wadhwani Institute for Artificial Intelligence; entrepreneur and philanthropist focused on AI-driven solutions for health, education, and agriculture.


Additional speakers:


None (no other participants spoke beyond those listed above).


Full session report – Comprehensive analysis and detailed insights

Opening remarks & vision


The moderator framed artificial intelligence (AI) as a catalyst for population-scale transformation in education, health and agriculture, provided coordination is built into the system, and introduced James Manyika, Google’s senior vice-president for research, labs and technology in society, to set the stage [1-7]. Manyika asserted that universal AI access is essential for expanding global innovation capacity [10-11], highlighted the “breathtaking” speed of progress and cited AlphaFold’s breakthrough, whose freely available database now supports more than three million researchers in 190 countries, with India ranking fourth among adopters [14-16]. He warned that the digital divide must not become an AI divide and argued that real-world impact requires expanding access from the outset [17-19]. According to Manyika, digital public infrastructure (DPI) and open networks constitute the coordination layer that translates human intent into concrete action [20-21]; India’s Unified Payments Interface (UPI) and the Bhashini language infrastructure exemplify this [21-22]. Google’s partnership with the Indian Institute of Science on Project Vani, which has released speech data for over 100 Indic languages, including 20 previously undocumented digitally, demonstrates a concrete step toward linguistic inclusion [23-25]. He also noted a $10 million Google.org grant to the Networks for Humanity Foundation, which builds universal tools (the Finternet for asset tokenisation, Beckn open networks) and establishes innovation labs from Singapore to Switzerland [36-41].
Sector-specific pilots were described: a Gemini-powered open network in Uttar Pradesh that offers multilingual AI agents for credit and crop-prediction to smallholder farmers [30-33]; AI-enabled agents for 1.4 million frontline health workers providing early malnutrition warnings [43]; AI integration into the national pest-surveillance system for crops [44-45]; and an education platform that has already reached ten million learners and aims to serve 75 million students and two million educators by 2027 [46-48]. He concluded by urging bold, responsible AI development that bridges the AI divide [49-50].


Panel discussion


Nandan Nilekani framed AI as a general-purpose technology whose fastest diffusion depends on open networks that lower transaction complexity [73-78]. He credited UPI’s open architecture with the growth that made it the world’s largest payment system and argued that similar openness lets countless innovators build AI applications on the edge [79-80]. Multilingual AI agents hide complexity from users, whether a farmer or a small-scale electricity producer, by allowing transactions in the user’s own language, thereby achieving massive inclusion [81-82]. He stressed that language is a barrier in India, where code-mixing of English, Hindi and regional tongues is common, and that initiatives such as the government’s Bhashini mission, AI4Bharat and Google’s Project Vani are working to make every language digitally accessible [84-95]. Nilekani argued that inference cost must drop dramatically (a query costing a few hundred rupees is unsustainable) and urged a shift from focusing on model training to cheap inference [246-249].


Sangbu Kim described the World Bank’s AgriConnect as a farmer-oriented, open-stack network that delivers coherent, user-centric services [102-106]. He positioned the evolution from supplier-driven to customer-driven models as central to the AI era and noted that AgriConnect’s open standards make it affordable and efficient [105-107]. The platform is being prepared for expansion beyond agriculture into health and education, with the ambition to become a universal network for multiple sectors [108-109]. Using a sommelier analogy, Kim emphasized the need for standards tailored to each user’s context [236-240].


Kiran Mazumdar-Shaw outlined a consent-based, open “health stack” that aggregates phenotypic, genomic, demographic, radiological and treatment-outcome data, mirroring the model used for UPI [122-124]. Such a stack would enable rapid risk-profiling of populations, integration with insurance products and, ultimately, universal, preventive care [125-129]. She noted that AI could be layered onto the existing ASHA community-health-worker programme to extend reach and effectiveness [130-132]. Mazumdar-Shaw also highlighted biology’s low-energy, distributed computing (cells as tiny data centres) as a lesson for AI efficiency and envisioned a convergence of biological and artificial intelligence that could enable virtual-cell modelling, cellular re-programming and precision medicine [135-144].


Sunil Wadhwani (Wadhwani AI) emphasized that DPI supplies two practical benefits: robust data pipelines and distribution channels that allow AI models to reach citizens at scale [170-174]. He illustrated this with a TB-screening programme that uses smartphone-recorded cough sounds, increasing national case detection by 25% and providing same-day lab results through AI-driven automation [196-200]. Predictive algorithms also identify patients at risk of dropping out of medication, allowing 2,000 caseworkers to focus on the most vulnerable [205-206]. In education, a 20-second speech-based reading diagnostic costing only five paise per student has been mandated for millions of children across several Indian states, with a goal of reaching 75 million by the end of next year [208-210]; these interventions are underpinned by DPI platforms such as NIXI for health data and Rakshak for education [191-197][215-218]. Wadhwani reported a surge of interest from Global South governments eager to adopt the 25 AI platforms built on India’s DPI, reinforcing the blueprint’s export potential [321-324], and recalled the Prime Minister’s “India for the world” remark at Bharat Mandapam [316-318].


Key point of contention – Nilekani argued that inference cost must drop dramatically (a query costing a few hundred rupees is unsustainable) and urged a shift toward cheap inference [246-249]; Manyika countered that an open network allows new, more accurate models (e.g., Google’s improved weather forecast) to be plugged in instantly for ten million farmers, suggesting that integration capability itself is a scaling catalyst [250-255].


Agreements – All speakers stressed that DPI and open, interoperable networks constitute the coordination layer that enables AI to turn intent into real-world action; multilingual AI agents were identified as the vehicle for inclusive diffusion; affordable inference and low-cost deployment (e.g., five paise per student) were deemed essential for population-scale impact; and sector-specific pilots in agriculture, health and education were presented as proof-of-concepts for global replication. These shared positions align with policy literature that recognises DPI as a foundation for digital inclusion and AI-driven public services [S23-S25].


Disagreements centred on the relative importance of inference cost versus open-network integration, the primacy of open networks versus DPI as the backbone for AI deployment, and whether standardisation or cheap inference should be prioritised for cross-country scaling [246-249][250-254][229-236].


Closing & next steps


In the lightning-round, Nilekani called for massive diffusion of AI applications on open networks to reach millions of farmers worldwide [301-308]; Mazumdar-Shaw urged the establishment of sustainable, universal health-care standards that are diagnostic, preventive, predictive and precise [311-314]; Wadhwani highlighted the growing demand from Global South governments for India-originated AI platforms and reiterated the Prime Minister’s vision of “India for the world” in the age of AI [316-324]. Manyika thanked the panel, reaffirmed that AI’s benefits can only be realised through open networks that serve everyone, and invited participants to apply for the Google.org Impact Challenges (AI for Science and Government Innovation) and visit the exhibition booths [340-345]; the moderator concluded with a QR-code invitation for the Impact Challenges [340-345].


Overall, the summit conveyed a highly optimistic and collaborative tone, moving from a visionary framing of AI’s potential, through concrete examples of multilingual agents, DPI-enabled data pipelines, and sectoral pilots, to a shared commitment to scale these models globally while addressing challenges such as inference cost, standardisation and data-sharing governance. Open, decentralized digital infrastructure coupled with affordable AI is positioned as a public good capable of delivering inclusive, population-scale transformation [8-12][28-34][170-174][246-255][301-308][340-345].


Session transcript – Complete transcript of the session
Moderator

Because we believe that AI’s true potential lies in its ability to deliver population-scale impact, transforming education, healthcare, and agriculture for every citizen. However, that impact can only be possible when there’s coordination that’s built into the system. And therefore, today, we are here joined by global leaders to explore how open networks and digital public infrastructure can create a global, interoperable coordination rail, powered by AI to translate intent into action across borders. To set the stage, it’s my honor to introduce James Manyika. James is the Senior Vice President at Google, leading research, labs, and technology in society. He also served as the co-chair of the UN’s High-Level Advisory Board on AI. James, welcome. The floor is yours to set the stage.

James Manyika

Thank you, Ashwani. Good morning, everyone. It’s a real pleasure and privilege to be back in India and to join all of you here at the India AI Impact Summit. At Google, we believe that access to AI is essential for unlocking opportunities and expanding the innovation capacity for people everywhere. The rapid technological progress that we’re seeing in AI’s development is really quite breathtaking and represents an extraordinary opportunity to solve problems and empower people, power economies, advance science, and tackle some of society’s greatest challenges. Indeed, we’re beginning to see the impact of this, so it’s not just in the future, but we’re already starting to see some of these benefits and impacts materialize today. Take science, for example.

Five years ago, our AlphaFold system, which is our Nobel Prize-winning innovation, solved the 50-year grand challenge of protein structure prediction. And since then, the freely available AlphaFold protein database has been used by more than 3 million researchers in over 190 countries. And in fact, India is actually the fourth largest adopter and user of the protein database, where people are working on a variety of problems, everything from neglected diseases all the way to even breeding resistance in soya beans, and a whole range of things that are incredibly beneficial to people in India and beyond. But to take full advantage of this potential, we need to collectively expand access right from the beginning. As you may have heard our CEO Sundar Pichai say yesterday, we need to ensure that the digital divide does not become an AI divide.

Digital public infrastructure and open networks are an important part of making this possible. They provide the coordination layer that allows AI to translate human intent into real-world action. And India has been leading the way with systems like UPI and the Bhashini network and infrastructure, bringing the capabilities of AI into the daily lives of people across the country and at population scale. At Google, we’ve been a very committed partner in this journey by helping to build the foundations that help to scale it. For instance, our collaboration with the Indian Institute of Science, and in particular on Project Vani, has now completed its second phase, where we’ve been covering every Indian state, making speech data for over 100 Indic languages available for free.

And we’ve been able to do this through the government of India’s Bhashini mission. In fact, this includes 20 languages that had never been recorded digitally before. We’re now building onto these systems in ways that truly attempt to reflect India’s linguistic and cultural richness and diversity. And we continue to build on our commitment to drive scaled impact at the grassroots level. This commitment to scaled impact is reflected in our recent partnership with the World Bank, and I’m sure we’ll talk about this later today. Together we’re taking a blueprint born right here in India and scaling it by localizing it across the globe to countries from Brazil to Nigeria, Ethiopia, and Kenya.

And the heart of this blueprint began with our partnership with the government of Uttar Pradesh. There we piloted a Gemini-powered open network for agriculture that provides farmers with multilingual AI agents to facilitate everything from credit to crop prediction. By taking the lessons we learned in Uttar Pradesh, where digital tools drove real, measurable impact, we’re proving that a smallholder farmer can compete and execute on the value that they create rather than the platforms that they’re on. This isn’t just a regional success. It’s now a global architecture and a model that can be taken everywhere for global digital inclusion. The success of these networks depends on a single fundamental principle: it must remain decentralized and open.

This is the driving force behind our support for the Networks for Humanity Foundation. Again, one of the things we’ll talk about this morning. And through a $10 million Google.org grant that we announced last year, the Networks for Humanity Foundation is building the universal tools for tomorrow, from the Finternet, for asset tokenization, to Beckn and open networks. And by establishing innovation labs from Singapore to Switzerland, they’re ensuring that the infrastructure of opportunity is a global standard and not just a local exception. Having this type of infrastructure in place is what will allow all of us to collectively achieve population-scale change. That’s why we’re supporting change makers like Wadhwani AI through Google.org grants that try to embed intelligence directly into the digital rails for millions of Indians to be able to use.

In healthcare, for example, this means empowering something like 1.4 million frontline workers with multilingual AI assistance, providing early warnings to combat child malnutrition across the country. In agriculture, it means integrating AI into the national pest surveillance system to protect India’s most important crops at a national scale. And in education, it means delivering high-quality learning experiences through AI-led transformation of government-owned education and development platforms. And this is an initiative that’s already reached 10 million students and educators with the goal of empowering as many as 75 million students and nearly 2 million educators by the end of 2027.

Ultimately, to fully capture AI’s beneficial potential, we must be bold and responsible and be committed to building all of this together. We must pursue AI’s most ambitious possibilities while ensuring that we build the coordination layer necessary to bridge and close the AI divide. With that, it is now my great pleasure and honor to welcome an extraordinary group of incredible leaders and innovators to the stage. They’ve been doing this for an extraordinarily long time with incredible impact. First, I’d like to invite Nandan Nilekani. Nandan is the…

Nandan Nilekani

Thank you.

James Manyika

Nandan is the co-founder and chairman of Infosys. He’s a global leader in digital public infrastructure and the co-founder of Networks for Humanity, an initiative building open, interoperable digital infrastructure for the intelligence age. I should say I’ve known Nandan for a very long time. When he first told me what he was working on 15 years ago, I’m not quite sure I quite believed him, but here we are. Next, joining us is Sangbu Kim. Sangbu is the World Bank’s vice president for digital and AI, leading efforts to drive digital economy growth in developing countries by strengthening infrastructure, cybersecurity, data privacy, while modernizing government services and also touching many areas like health, education, and more.

Our third guest is Kiran Mazumdar-Shaw. As chairperson of Biocon Group, Kiran is a pioneering biotech entrepreneur, health care visionary, and a passionate philanthropist committed to expanding access to health care through affordable innovation. And finally, please welcome Sunil Wadhwani. Sunil is a visionary entrepreneur and philanthropist who co-founded the Wadhwani Institute for Artificial Intelligence to drive systematic and systemic social transformation through AI solutions and innovation in the public systems across health care, education, and agriculture. So we’re now going to have a conversation. I can’t wait to have a conversation with these extraordinary leaders. Thank you. Nandan, let me start with you. You’ve been championing decentralized digital ecosystems for a very long time, building open networks, taking things to extraordinary scale in India, and recently with Beckn and the Finternet.

And obviously you bring a lot of credibility to both users of these systems and to regulatory bodies. How do you see AI as a multiplier or a factor as you think about open networks and the kind of transformational change you’ve been pursuing?

Nandan Nilekani

No, I think AI is very fundamental, and I’ll explain how open networks and AI come together. I think what some of us have been thinking about is: if AI is a general purpose technology, what is the fastest way of diffusing the use of AI in a productive way for people? And, you know, ultimately all this only makes sense if it makes people’s lives improve. And I think we have a lot of experience with open networks. I mean, in some sense, UPI was an open network for payments, and the open architecture led to the massive growth and it became the world’s largest payment system. So a lot of those principles are embedded in Beckn, and we have other examples.

But I think open networks allow many actors, many innovators to build applications on the edge using AI. And I think we keep talking about agents, but I think the real power of agents is in removing complexity for the user. So if a user is there who is a farmer or somebody who is producing a little bit of electricity, if they can very easily transact with somebody else through an agent, which is in their own language, then suddenly this is inclusion at massive scale. So I really see AI agents on an open network as the fundamental construct for massive diffusion of technology.

James Manyika

And also the importance, as you mentioned, of doing that in languages, in local languages.

Nandan Nilekani

Oh, totally. I think you talked about what you’re doing at IISc. I think there are many initiatives in India which essentially are driving to make language completely accessible. Because language is not just pure language. I mean, the way Indians speak, they mix English, Hindi, and Tamil in one sentence. So how do you deal with that? How do you recognize that? So I think all that is getting addressed. There are many initiatives: voice AI, there’s Bhashini of the government, there’s AI4Bharat, there’s the Google project. So I think there’s lots of stuff. But fundamentally, I think language as a barrier will go away. So if you combine language, so a person talks to the agent in their own language, and then the agent does some transaction, hiding all the complexity behind it, then, you know, that’s the holy grail.

We can get everybody on the system, and that’s how AI will get diffused.

James Manyika

And then speaking of, you mentioned farmers and agriculture. Sangbu, let me come to you. I mean, the World Bank recently launched the AgriConnect initiative. First of all, I’d like you to describe that a little bit. I think it’s intended to make what smallholder farmers do much, much more efficient and scalable. But I’m curious, what has that work taught you so far about the type of global standards that are going to be needed to scale local solutions?

Sangbu Kim

So, if you just look at AgriConnect in Uttar Pradesh for now, it is a very farmer-oriented approach to provide very coherent and consistent services, at the same time with an open stack, open network. That means, if you think about the previous days, from computer innovation, mobile innovation, now we are seeing AI innovation. I would interpret this evolution as going from the supplier-oriented service environment to the customer, user-oriented environment. In that sense, an open standard and open network is a really crucial part to make sure of user-centric service. So it is a very efficient and affordable solution for an AI era to fully provide quality of service to the user. In that sense, this AgriConnect project is really important, but it is not only an agriculture project.

With that, we are really looking forward to expanding to other sectors like healthcare and education. So it can be a very universal network in the future.

James Manyika

That’s pretty powerful. In fact, speaking of health care, you mentioned you’re taking this to health care. Kiran, you’ve been an incredible advocate and innovator when it comes to thinking about medicine as a whole. And you’ve talked about this idea that we need to move beyond the industry of medicine. And tell me, say more about what you have in mind about what we need to move to, and in particular, how you can connect what’s going on maybe with AI and data sets with fundamentally transforming medicine.

Kiran Mazumdar-Shaw

So I think I have to answer this in two parts. The first part is how do we basically leverage what Nandan refers to as the digital stack into a health stack. That is the first big opportunity we have. And I think India is a country that can uniquely create a global reference model when it comes to the use of AI on the kind of health data that we are collecting. So, for instance, India is beginning to collect a lot of health data in its health stack, and it’s phenotypic, it’s genomic, it’s demographic, and radiological data, and, of course, treatment and treatment outcome data. Now, when you start collecting this data, I think the whole objective, again, which is a holy grail, which is universal healthcare delivery at scale in a sustainable way, and how do we reduce the disease burden and increase lifespan, all these are big challenges with a very complex set of solutions.

But I think this is a starting point where you get this huge digital stack of health data. And because India has this open-source and consent-based kind of secure data sharing already established in UPI, I think we should quickly apply this to healthcare. And when you do that, you will start risk profiling your population at a demographic level, which I think is very exciting, and at scale. And if you can integrate insurance into that, that will be even more powerful. That can only be done by AI. So AI has the opportunity to risk profile very fast, to try and find interesting insurance models, to see how we can marry the risk profile with the insurance instrument.

Not easy, but I think it’s a good challenge, because AI can be given a lot of exclusion-inclusion criteria, which it can adopt. So I personally am very excited about what AI can do for health, digital health, and the whole universal healthcare delivery model. Additionally, of course, India has this unique model of ASHA workers, and if ASHA workers can be empowered with AI, that is even more powerful. So I think deploying AI for the common people, the common man, is very important, to both Nandan’s and Sangbu’s point. Now, coming to your second question, about what I’m looking at beyond this: beyond this, I’m looking at advancing medicine using AI.

Now, biology on its own was limited because it didn’t have the power of technology to get deeper insights. AI, like what you’ve just done with AlphaFold and AlphaGenome, is going to give it immeasurable opportunities to understand biology, and to me biological intelligence is just amazing. If you combine it with artificial intelligence and bring about that convergence, I think we are in for huge things. When I look at biological intelligence, when I just look at cell biology: how cells signal, how cells create circuits, how cells regulate, how cells connect and disconnect. I mean, the human body, living systems, have distributed data centers. And these data centers are connecting and disconnecting with sips of energy, not gigawatts of energy.

And they’re actually translating that into instant information and decision-making. If we can learn that and apply it to AI, I think it’s going to be transformational. I am really looking forward to reprogramming cells, right? That’s the holy grail. How do you convert a cancer cell into a non-malignant cell? How do you look at regenerative science? How do you look at lifespan? I mean, that’s your biggest question today. Right. How do we shift from hospital-centric care to primary and community care? That can happen with AI, with predictive and preventive medicine. I think I’ve said enough. Yeah.

James Manyika

Well, it would actually, in effect, Kiran, it would probably take us from treating diseases to preventing diseases. And I like it. You and I were talking earlier. You and I and Demis were talking about this idea that someday we should be able to build virtual cells, models of virtual cells, and be able to do cell-based biology, basically.

Kiran Mazumdar-Shaw

Absolutely. That’s one of the exciting things. It’s very exciting. Yeah.

James Manyika

We’ll come back to that. But I want to come to you, Sunil, which is, you know, I’m curious, Sunil, as you think about what you’ve been doing, what role do DPI and open networks play in developing and scaling the kinds of solutions to some of society’s most pressing problems? I mean, you’ve been thinking about this for a very long time, from way back, when you set up the AI institutes, way before most people were thinking about these things. But I want to hear what your perspectives and experiences have been.

Sunil Wadhwani

Thanks, James. Good morning. Just so we’re all clear, there’s a lot of intellectual horsepower on the stage, and it’s all on this side of me. I’m basically here for my good looks, so just so we manage expectations. But when I set up Wadhwani AI back in 2018, and the Prime Minister was good enough to come and inaugurate it, we basically had a huge benefit, which is the following, to your point: over the last 20 years, the government of India has developed a set of DPI, Digital Public Infrastructure, that is broad and that is deep. And this DPI is basically a set of digital building blocks used to build systems that connect policy, program implementation, public service workers, and citizens in the country.

And I’ll give you a couple of examples. But this DPI provides two key, very practical, down-to-earth functions and benefits, as I see it. Number one, it provides data and data pipelines. And for AI, you couldn’t build AI for the social sector without the kind of data and the data pipelines that it provides. Secondly, this DPI provides distribution channels, so that once your inference models are ready, these platforms, again developed and managed by government, provide a distribution channel to get our AI models out at scale. Without these, trust me, using any model in the public sector, in the social sector, would cost incredibly more and wouldn’t scale anywhere near what we see.

So, two quick examples: one in healthcare, one in education. One of the national health priorities for the government of India for the last several years has been the elimination of tuberculosis, TB. It’s the largest infectious disease killer in the world; it kills close to 2 million people a year. It’s the largest infectious disease killer in India; it kills close to half a million people a year in India. And for each person that unfortunately dies, 20 other survivors live miserable lives, and it impacts their ability to earn a living. So the government asked us to come in and see what we could do. And we identified with the government the three or four key pain points in the patient’s journey.

The first one is diagnosis, and diagnosing TB in economically vulnerable communities isn’t easy. X-ray machines, sputum analysis, etc.: these are all challenging; they’re expensive, time-consuming, tedious. So that’s challenge number one. Secondly, if you do sputum analysis, the samples have to go to 64 government labs around the country. There’s throughput time, and by the time the patient gets the results back, you’ve lost time in initiating treatment for the people who have TB. And finally, there’s a huge problem that a subset of TB patients stop taking their medication, because it’s a regimen of medications with a very toxic effect on the body. The problem is, once you stop taking them, you develop drug-resistant TB, the mortality rate goes up dramatically to 50%, and then you infect a lot more people, and so on.

Fortunately, the government has a DPI called Ni-kshay. It’s a very large data platform, a patient management system that has data on all of the detected TB cases in the country. The government gave us access to that database, and we developed a range of models to address the challenges we saw in the patient care journey. For diagnosis, we’ve come up with a way of diagnosing TB from the sound of a cough into a smartphone. It’s instant. It’s quick. It shows you what the risk is of that patient having TB, so government workers can focus on those patients. In the year or so since this program started getting rolled out, using the sound of a cough, detection of TB patients has gone up by 25% nationally.

You may think that’s bad news because now there are more TB cases, but they were already there. Now we can make sure they get treated. On the labs and the turnaround time, we’ve come up with an AI way to automate a lot of this testing, so now you literally get the results the same day. The patient and the doctor find out instantaneously, and then they can start treatment. On the issue of patients who develop drug-resistant TB, we’ve come up with algorithms that predict which TB patients are likely to fall off their medication regimen. And so the 2,000 or so TB caseworkers in the country, which is a small number for the 4 or 5 million TB patients that we have, can now target their time and bandwidth on the subset of patients that really needs help.

But all of that was enabled by this DPI called Ni-kshay, this database, which enables all of this. One other quick example, in the education space. In the global south, including in India, there’s a very high dropout rate of students in early grades, grades 1 to 5. So we got a call from a large state government in India about a year back saying, can you help? We took a look at it, and it turns out that the single key reason for this very high dropout rate in early grades is the inability of these young children to read proficiently in that environment. If you can’t read properly, it will affect how you do in all your subjects: geography, history, science, everything. You start failing, you get frustrated, your parents say, come back home, work on the farm, work in the kitchen, etc.

And that affects the rest of their life. So we’ve come up with a system to diagnose, within 20 seconds for each child, by just speaking into a phone into our model: in 20 seconds we figure out exactly where they are struggling, what words, what phrases, what sentences, and what will help them get on the right track. And this is being done at a cost of 5 paise per student. I think cost is another very big part of scaling that doesn’t get discussed too much, but cost is very important. So, 5 paise per student while this was in pilot. With the suite of solutions we came up with, we’ve got a way, like I said, of assessing in 20 seconds where the student is struggling. We come up with a diagnostic and then a remediation plan and exercises for each student to practice at home to improve their reading.

The state was so impressed with the pilot, they made it mandatory for all 3 million kids of that age in school. Three or four other states, including the state of Rajasthan, just made it mandatory for all 8 million kids over there. And by the way, all of this, again, is enabled by DPI. So Rajasthan has a state-level DPI called Rakshak. Our models sit on top of that system. It reaches 400,000 schools and 8 million students, and it’s spreading. So now the government of India, by the end of next year, wants to make this standard across the country. All 75 million children of that age group will get their reading improved and strengthened through the systems that we have. Bottom line: all enabled by DPI.

James Manyika

No, I mean, those are very… [Applause] Thank you. Those are incredibly powerful examples. In fact, the case of TB is actually one that’s super important, because something like 40% of people in the world with TB go undiagnosed, and most of them are in the global south. But it also brings me back to a question of scale. In what you’re doing, you mentioned a few countries, but I think your goal is to get some of these education and health solutions to, like, 25 countries or more. How are you thinking about taking that to multi-country scale? And what are some of the ideas you have about how you do that?

Sangbu Kim

…achieve the same academic goal within six weeks, which usually takes more than a year-long process. So that’s one example. In Nigeria also, not only TB: a very small handheld ultrasound device can scan a pregnant woman and easily diagnose problems for the baby, and it has drastically reduced the infant death rate. So that is another example of how we can scale this up. Another good example, as you said: we are expanding the current India model to three other African countries and to Brazil, and we added one more in the Philippines. And one of the ways is to find a very standardized and scalable model, but this is not easy.

But from one concrete example, like the India case for AgriConnect, we are figuring out what the best and lightest model is that we can quickly replicate to other countries. This is our role. The World Bank is trying very hard to figure out what that means, how we can really replicate this model to other countries, and what the really critical components to be replicated would be. So we are working on what the best, simple model is. In that sense, and I’m not sure it is really the right analogy, I’m using the analogy of a sommelier. We are not the innovation creator. There are a bunch of really good wine producers, but the customer is not really aware of which wine really fits their taste. So, as a sommelier, the World Bank is trying very hard to understand the wines and then recommend better wines for our customers’ taste.

James Manyika

Yeah, I think on this question of scale, I mean, those are great ways to… Do you need any help on quality control with these wines? Yeah, but I think on this question of scale, I mean, Nandan, I’ve heard you say that, you know, we won’t get to true population scale unless we actually scale things like inference in AI and how we do that at massive scale. And I’m just curious for you to expand on that a bit more, but also what lessons and implications it might even have for people like us who are building frontier models. Say more about why the inference part of this really matters.

Nandan Nilekani

No, I think, broadly speaking, especially in the global south, the cost of AI inference has to drop dramatically, because if you’re serving a customer with one query and that costs, you know, 500 rupees or something, it’s not going to work. So we have to make inference really cheap, which I think you’ll do, because there’s a lot of focus today on the training side, on getting bigger and bigger models and launching them. But as that stabilizes, I think the focus will shift to the inference side, to make inference cheap. And I’ll give you a very tangible example of open networks, even AgriConnect.

Yesterday I was talking to Demis, and Demis was saying that Google is improving its weather models. They’re making them better, more efficient, more predictive, more granular, area by area and so on. Now, if you had an open network, a network for agriculture like AgriConnect, which, suppose, has millions of farmers on it, then all we need to do is plug the latest weather model from Google into that open network, and suddenly 10 million farmers have access to the latest weather data. That’s a good example of why this open network thing is important, because it allows you to plug in new models, new sources of capability, new ideas, and so on.

And I think that’s what we’re doing. To give you another example of how it reduces complexity, there’s a very interesting demo here of energy trading. Now, we never thought of energy as something that you traded, because you bought it from the utility. But today, with millions of people producing energy, somebody who has rooftop solar and some extra energy can sell it to somebody else. But how does a farmer in UP learn how to sell energy? It’s a whole new concept. It’s only possible through an agent, a classic commerce interface that is simple. So I think low-cost inference combined with agents that hide complexity is the key to massive diffusion.

James Manyika

Yeah, and in fact, I like the example you brought up on weather, because in some ways, thanks to the forward thinking on the part of the Indian government and the Ministry of Agriculture, they’ve set up that infrastructure. In fact, last year, they used one of our models, NeuralGCM, which predicts monsoons, and we were actually able to deliver to something like 38 million Indian farmers predictions about the monsoons. But that only worked because the Indian government actually set up that kind of infrastructure where you could plug in these models. Kiran, I want to come back to you, because in some ways you raise some more foundational questions here about the future of biology and health overall.

And I’ve heard you say, for example, that AI doesn’t replace biology, that biology is much, much more fundamental and foundational. Say more about that: what can AI learn from biology, and vice versa, and what do you imagine needs to happen to fully take advantage of it?

Kiran Mazumdar-Shaw

Yeah, so I think first and foremost, biology works through distributed data centers, okay? And when it wants to build intelligence, retrieve memory, and infer from data, it does so with sips of energy, not with the gigawatts of power that our data centers use. So we could learn something from biology. I think even more fundamental is that biology also has generational learning. If you think about how our DNA stores generational memory, I think that’s fascinating. How does the Arctic tern fly out of its nest for the first time, travel 70,000 kilometers to the Antarctic and then back? It has navigational intent embedded in its DNA.

How does that work? So I think we have to learn a lot from biology and use AI to learn that biology. Because without AI, you cannot have any insights into biology. So I just feel that the future is going to be about the convergence of biological intelligence and AI. And that is going to be a very powerful transformative process. Because biology has a lot to teach AI in terms of how to do it with less energy, how to do it rapidly, and how to multiplex multimodal data very rapidly. Now that is something which I think is very exciting. And I think to go back to what you’ve just been discussing with Nandan and others, I think what makes it very exciting right now is the volumes of data you can collect.

On open networks, I think we have to talk about, I mean, I know I work with a lot of organizations around the world in my field, and there’s a huge reluctance to share data. There’s a lot of wariness about IP being fragmented. And therefore, I think, except for India, there’s a lot of resistance to sharing data. Now, when you don’t share data, you’re going to silo it.

India has this unique opportunity, because of its open networks and public digital infrastructure, to share volumes of very important data, like Nandan just illustrated with the environmental data he was talking about, the climate data, and farmers taking huge advantage of that. That is what we have to really focus on, because India is uniquely positioned in terms of its open networks. And if we can actually keep generating data and then make…

James Manyika

I’m being told that we’re going to have to wrap this up. But before we do that, though, I want to just see if we can do a quick lightning round, so to speak. I mean, this summit has been extraordinary. The example that India is setting for the world, quite frankly, is extraordinary. If you could say, each of you say, one thing you’d like to see happen in the next 12 months, particularly with this idea of open networks and change at population scale, what would that be? I don’t know who wants to go first.

Nandan Nilekani

I think I’d like to see massive diffusion, where all these applications that are just rolling out on open networks reach millions of farmers and others around the world. I think that’s going to be a big deal, and it will actually show the world that AI is a force for good. I think we have an obligation to show that.

James Manyika

That’s good.

Kiran Mazumdar-Shaw

Yeah, I definitely want to see a sustainable standard of care, high-quality universal health care, coming out of this AI effort and the health stack.

James Manyika

That’s preventative, presumably.

Kiran Mazumdar-Shaw

Absolutely. Diagnostic, preventative, predictive, and precision, because you can’t do away with treatment. But how do you basically stage it up front?

James Manyika

Sunil or Sangbu?

Sunil Wadhwani

Yeah, so yesterday, when the Prime Minister spoke at Bharat Mandapam, you know how he has been saying for years, “Make in India” for the world. He said, in the age of AI, let’s develop in India, and let’s deliver to the world. In our case, Wadhwani AI is just one little example. I’ve given you two or three examples of what we’ve done, but we’ve developed over 25 AI platforms in India in education, healthcare, and agriculture, which are scaling up. What’s interesting is that over the last year, we’ve had an incredible amount of incoming interest from governments throughout the global south, in Africa, Asia, and so on, who are hungry for these solutions. And they’re looking to India to provide them.

In fact, when PM Modi launched our institute back in 2018, you know, he was saying the U.S. is so far ahead, China is so far ahead. I said, Mr. Prime Minister, we can set the example in India for how AI can be used for societal transformation. No one else is doing that. We are showing how it can be done.

James Manyika

Sangbu?

Sangbu Kim

So for the next 12 months, I really want to work more to disseminate the really good use cases to the world, to our countries and people. One of the reasons is that a big challenge for people in the developing world is that they do not clearly know what they can do with AI, even though it can provide a really affordable and easy way to expand their capability, productivity, and intelligence compared to the old days. So once they get to know that this is a real, important opportunity for them, I believe they will find really good ways to fully utilize it in a very affordable way.

James Manyika

No, no, thank you. What I’m taking away is that it’s not just the example that India is setting for India and the world, but also, quite frankly, the example that each of you is setting, because all of you, through your work, your organizations, your teams, your initiatives, and, quite frankly, your insight, have done a lot to show what leaders can do. So I appreciate the examples that you’re setting and the example that India is setting. Please join me in thanking my panelists here. Thank you, and I think with that we’ll draw to a close. Thank you.

Moderator

Just a request for you all to be seated 30 seconds more. First of all, could we please have another round of applause for our esteemed panelists? Very insightful. Thank you very much for coming here. The true benefits of AI, the discussion shows, can only be realized when we build for everyone using open networks. Very insightful conversation. Thank you, James, for moderating it. And to further help drive population-scale impact, we invite changemakers and researchers to apply for the two Google.org Impact Challenges: one in AI for Science, one for Government Innovation. There’s a QR code for you to learn more. And I encourage you all to visit us at Booths 3 and 4 in Hall 5 to see firsthand how Google AI is delivering real-world impact.

And finally, I just request all the panelists to please join center stage for a photograph. Thank you, everyone.

Related Resources: knowledge base sources related to the discussion topics (19)
Factual Notes: claims verified against the Diplo knowledge base (8)
Confirmed (high)

“The moderator framed artificial intelligence (AI) as a catalyst for population‑scale transformation in education, health and agriculture—provided coordination is built into the system.”

The knowledge base states that AI’s greatest value comes from its ability to create widespread transformation across key sectors when implemented through coordinated systems, confirming the moderator’s framing [S10].

Confirmed (high)

“He warned that the digital divide must not become an AI divide.”

Rajendra’s warning that without proper classification of digital public goods we risk an AI divide that could be more dangerous than the existing digital divide aligns with this warning [S82].

Confirmed (medium)

“Real‑world impact requires expanding access from the outset.”

The knowledge base notes that impact is only possible when coordination is built into the system from the beginning, supporting the claim [S10].

Confirmed (high)

“A Gemini‑powered open network in Uttar Pradesh offers multilingual AI agents for credit and crop‑prediction to smallholder farmers.”

The Uttar Pradesh partnership with Google Cloud launches an open digital network for agriculture powered by Gemini and the Beckn protocol, delivering services to millions of farmers, confirming the pilot description [S86].

Confirmed (medium)

“Language is a barrier in India, where code‑mixing of English, Hindi and regional tongues is common; initiatives such as Bhashini, AI for Bharat and Google’s Project Vani address this.”

The discussion highlighted India’s linguistic diversity and code-mixing as a critical challenge that requires sophisticated multilingual AI, matching the claim [S7].

Additional Context (medium)

“He cited AlphaFold’s breakthrough, whose freely available database now supports more than three million researchers in 190 countries.”

The knowledge base notes AlphaFold’s Nobel-prize-winning achievement and that it has modeled virtually every known protein, underscoring its global scientific impact, though it does not provide the specific usage statistics cited [S80].

Additional Context (low)

“Google’s partnership with the Indian Institute of Science on Project Vani, which has released speech data for over 100 Indic languages—including 20 previously undocumented digitally—demonstrates a concrete step toward linguistic inclusion.”

Google’s recent Bengaluru event showcased the Pathways Language Model and efforts to improve Indian-language data and combat AI bias, providing additional context to the Project Vani initiative [S85].

Additional Context (low)

“Digital public infrastructure (DPI) and open networks constitute the coordination layer that translates human intent into concrete action.”

A knowledge-base entry discusses the need for trusted, interoperable digital public infrastructure to support India’s AI ambitions, adding nuance to the description of DPI as a coordination layer [S33].

External Sources (89)
S1
A Digital Future for All (afternoon sessions) — – James Manyika – Senior VP, Google-Alphabet and Co-Chair of the Secretary-General’s High-level Advisory Body on Artific…
S2
https://dig.watch/event/india-ai-impact-summit-2026/ai-for-social-good-using-technology-to-create-real-world-impact — Because we believe that AI’s true potential lies in its ability to deliver population -scale impact, transforming educat…
S3
Smaller Footprint Bigger Impact Building Sustainable AI for the Future — -James Manyika: Senior Vice President, Google Alphabet
S4
Keynote-Olivier Blum — -Moderator: Role/Title: Conference Moderator; Area of Expertise: Not mentioned -Mr. Schneider: Role/Title: Not mentione…
S5
Day 0 Event #250 Building Trust and Combatting Fraud in the Internet Ecosystem — – **Frode Sørensen** – Role/Title: Online moderator, colleague of Johannes Vallesverd, Area of Expertise: Online session…
S6
Conversation: 02 — -Moderator: Role/Title: Event moderator; Area of expertise: Not specified
S7
AI for Social Good Using Technology to Create Real-World Impact — -Kiran Mazumdar-Shaw: Chairperson of Biocon Group; pioneering biotech entrepreneur, healthcare visionary, and philanthro…
S8
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Kiran Mazumdar-Shaw — -Speaker 1: Role/Title: Not mentioned, Area of expertise: Not mentioned (appears to be an event moderator or host introd…
S9
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Kiran Mazumdar-Shaw — Impact:This comment elevates the discussion from academic concepts to practical applications with profound implications …
S10
AI for Social Good Using Technology to Create Real-World Impact — The World Bank’s Sangbu Kim presented concrete examples of how locally successful solutions can achieve global scale. He…
S11
Panel 5 – Ensuring Digital Resilience: Linking Submarine Cables to Broader Resilience Goals — – Nomsa Muswai Mwayenga- Sangbu Kim – Yongbo Tang- Sangbu Kim
S12
S13
Keynote-Rishad Premji — -Mr. Nandan Nilekani: Role/Title: Not specified; Area of expertise: Artificial intelligence (described as pioneer and th…
S14
High Level Session 2: Digital Public Goods and Global Digital Cooperation — – **Nandan Nilekani** – Co-founder and chairman of Infosys Technologies Limited (participated online) Nandan Nilekani, …
S15
https://dig.watch/event/india-ai-impact-summit-2026/fireside-conversation-01 — Thank you so much, Mr. Sikka, for your profound and very interesting remarks. And of course, your work at VNI also exemp…
S16
Capacity Building in Digital Health — Well, so here is the, there’s a, that’s a spicy question, but let me, let me, let me handle it. Well, this is in the U ….
S17
AI for Social Good Using Technology to Create Real-World Impact — – James Manyika- Sunil Wadhwani – Sangbu Kim- Sunil Wadhwani
S18
https://app.faicon.ai/ai-impact-summit-2026/ai-for-social-good-using-technology-to-create-real-world-impact — And the heart of this blueprint began with our partnership with the government of Uttar Pradesh. There we piloted a Gemi…
S19
AI for agriculture Scaling Intelegence for food and climate resiliance — Maharashtra’s strategic approach represents a shift from pilot projects to population-scale implementation. The state’s …
S20
Keynote-Ankur Vora — Evidence:He explains: ‘Last month, Bill announced Horizon 1000 in partnership with OpenAI, the government of Rwanda, and…
S21
AI Meets Agriculture Building Food Security and Climate Resilien — India understands this very deeply. And under the visionary leadership of our Honorable Prime Minister Narendra Modi, In…
S22
Accelerating an Inclusive Energy Transition | IGF 2023 Open Forum #133 — Neil Yorke-Smith:Well, hello. Good afternoon, everybody. Or good morning from the Netherlands. It’s nice to be here and …
S23
Creating digital public infrastructure that empowers people | IGF 2023 Open Forum #168 — Countries around the world have made investments into digital public infrastructure (DPI) that supports vital society-wi…
S24
Collaborative AI Network – Strengthening Skills Research and Innovation — “We’re talking of AI being a possible DPI, a digital public infrastructure.”[1]. “I think those are aspects which a DPI …
S25
Digital Public Infrastructure, Policy Harmonization, and Digital Cooperation — Nasir Shinkafi defines Digital Public Infrastructure (DPI) as comprising connectivity elements, platforms, and public-re…
S26
Artificial Intelligence & Emerging Tech — In conclusion, the meeting underscored the importance of AI in societal development and how it can address various chall…
S27
What policy levers can bridge the AI divide? — **Zimbabwe’s National Strategy**: Minister Mavetera outlined Zimbabwe’s approach, mentioning what appears to be a framew…
S28
Global Perspectives on Openness and Trust in AI — So we did a market study on AI and competition, and the report has been released recently, October 25. It’s available on…
S29
How Small AI Solutions Are Creating Big Social Change — The discussion aimed to showcase how “small AI” – efficient, contextually-appropriate AI models – can create meaningful …
S30
Internet Governance Forum 2024 — The EU AI Act emerged as a potential North Star for global AI governance, with its risk-based approach and emphasis on fu…
S31
Skilling and Education in AI — The conversation began with a Professor’s detailed analysis of four critical sectors where AI can drive substantial impa…
S32
Global AI Policy Framework: International Cooperation and Historical Perspectives — So until we figure out how to share data in a way that’s useful, but still respects privacy, and there are techniques fo…
S33
Building Indias Digital and Industrial Future with AI — “India, surely for the vast amount of experience and scale and heterogeneity that it has, offers excellent evidence on w…
S34
Building the Next Wave of AI: Responsible Frameworks & Standards — What is interesting is India is uniquely positioned in this global AI discourse. Most global AI frameworks are designed …
S35
Building the Next Wave of AI: Responsible Frameworks & Standards — The Moderator argues that India operates in contexts that most of the developing world shares – multilingual populations…
S36
Effective Governance for Open Digital Ecosystems | IGF 2023 Open Forum #65 — Digital Public Infrastructure (DPI) is defined as society-wide digital capabilities that are essential for citizens, ent…
S37
Creating digital public infrastructure that empowers people | IGF 2023 Open Forum #168 — Countries around the world have made investments into digital public infrastructure (DPI) that supports vital society-wi…
S38
United Nations Office for Digital and Emerging Technologies — In his policy brief on A Global Digital Compact – an Open, Free and Secure Digital Future for All, the UN Secretary-Gener…
S39
The Power of the Commons: Digital Public Goods for a More Secure, Inclusive and Resilient World — Amandeep Gill emphasizes that digital public goods (DPGs) and digital public infrastructure (DPI) are vital for an equit…
S40
AI for Social Good Using Technology to Create Real-World Impact — “In that sense, some open standard and open network is a really crucial part to make sure user‑centric service.”[35]. “S…
S41
AI for Social Good Using Technology to Create Real-World Impact — I would interpret this evolution from the supplier-oriented service environment to the customer, user-oriented environme…
S42
Building the AI-Ready Future From Infrastructure to Skills — “And so we at AMD have a commitment to make both our hardware infrastructure and our software infrastructure to be based…
S43
Education meets AI — This aligns with the Sustainable Development Goals (SDGs) of Quality Education (SDG 4) and Reduced Inequalities (SDG 10)…
S44
How Small AI Solutions Are Creating Big Social Change — Household income, the inputs get better, and to see how they can access the markets and agriculture credit. Similar in h…
S45
AI for Democracy_ Reimagining Governance in the Age of Intelligence — Chunggong acknowledges the significant positive potential of AI for social good, including improvements in healthcare de…
S46
Digital Public Infrastructure, Policy Harmonisation, and Digital Cooperation – AI, Data Governance,and Innovation for Development — 1. Establishing effective multi-stakeholder coordination platforms 3. Contextualising Policies and Technologies: 5. Pr…
S47
Building Population-Scale Digital Public Infrastructure for AI — To address this challenge, the Gates Foundation is investing in “scaling hubs” in Rwanda, Nigeria, Senegal, and soon Ken…
S48
Innovation Factory Pitching competition: Women entrepreneurs shaping the future — This comment shifted the conversation from software capabilities to hardware economics, revealing practical constraints …
S49
Building Indias Digital and Industrial Future with AI — Evidence:Unlike commercial solutions that involve patents, copyrights, and scaling fees, India’s DPI is offered as open …
S50
Building Indias Digital and Industrial Future with AI — India’s DPI Model as a Scalable Blueprint for the Global South Mansi notes that the World Bank recognizes India’s scale…
S51
AI for agriculture Scaling Intelligence for food and climate resilience — It is being designed as a replicable public infrastructure model for India and the entire global south. In partnership w…
S52
The Innovation Beneath AI: The US-India Partnership powering the AI Era — -Energy Grid Transformation and Clean Power: Detailed exploration of how AI’s massive energy demands require “programmab…
S53
The Innovation Beneath AI: The US-India Partnership powering the AI Era — Energy Grid Transformation and Clean Power: Detailed exploration of how AI’s massive energy demands require “programmabl…
S54
Open Internet Inclusive AI Unlocking Innovation for All — Because the traditional formats of consumer consumption, which is called search, or now Gemini, ChatGPT, et cetera, will…
S55
AI Without the Cost Rethinking Intelligence for a Constrained World — First of all, thanks for the question, and very good evening to all who have joined. So we are in the space of unifying the…
S56
AWS scales AI with inference-focused systems — AI assistants deliver answers in seconds, but the process behind the scenes, called inference, is complex. Inference lets…
S57
Collaborative AI Network – Strengthening Skills Research and Innovation — “We’re talking of AI being a possible DPI, a digital public infrastructure.”[1]. “I think those are aspects which a DPI …
S58
Creating digital public infrastructure that empowers people | IGF 2023 Open Forum #168 — Countries around the world have made investments into digital public infrastructure (DPI) that supports vital society-wi…
S59
Building Population-Scale Digital Public Infrastructure for AI — All of these types of things I would love to see in the health space, a personal health assistant. In low – and middle -…
S60
Effective Governance for Open Digital Ecosystems | IGF 2023 Open Forum #65 — Amandeep Singh Gill: Yes, if I may jump in quickly, I think building on Eileen’s point, I think the foundations are essen…
S61
AI for Social Good Using Technology to Create Real-World Impact — And I think that’s what we’re doing. And to give you another example of how it reduces the complexity, there’s a very in…
S62
What policy levers can bridge the AI divide? — **Zimbabwe’s National Strategy**: Minister Mavetera outlined Zimbabwe’s approach, mentioning what appears to be a framew…
S63
AI for Social Good Using Technology to Create Real-World Impact — I think open networks allows many actors, many innovators to build applications on the edge using AI. And I think we kee…
S64
https://app.faicon.ai/ai-impact-summit-2026/ai-for-social-good-using-technology-to-create-real-world-impact — But I think open networks allows many actors, many innovators to build applications on the edge using AI. And I think we…
S65
Open Forum #67 Open-source AI as a Catalyst for Africa’s Digital Economy — Barbara Glover advocated for “problem-solving approaches through domain-specific education and applications addressing r…
S66
How Small AI Solutions Are Creating Big Social Change — Household income, the inputs get better, and to see how they can access the markets and agriculture credit. Similar in h…
S67
Democratizing AI: Open foundations and shared resources for global impact — **Climate and Agriculture**: Applications include weather prediction systems and plant disease detection tools for agric…
S68
Opening session: “International digital and AI governance: at a crossroads?” — Tomas Lamanauskas: Thank you very much. I have a few comments, in a way. First of all, I think that, as…
S69
Knowledge in the Age of AI: World Economic Forum Town Hall Discussion — Several critical issues remain unaddressed:
S70
Global AI Policy Framework: International Cooperation and Historical Perspectives — So until we figure out how to share data in a way that’s useful, but still respects privacy, and there are techniques fo…
S71
Fireside Conversation: 02 — The discussion addresses India’s positioning in AI development, with the moderator referencing Prime Minister Modi’s sta…
S72
Building Indias Digital and Industrial Future with AI — “India, surely for the vast amount of experience and scale and heterogeneity that it has, offers excellent evidence on w…
S73
AI for agriculture Scaling Intelligence for food and climate resilience — By creating interoperable networks based on open protocols like Beacon, by collaborating with each other, one of us is b…
S74
Building Indias Digital and Industrial Future with AI — Summary:All speakers acknowledge India’s leadership in DPI development and its potential for global replication, with em…
S75
Global telecommunication and AI standards development for all — India has been chosen to host the distinguished World Telecommunication Standardisation Assembly (WTSA 2024), set to tak…
S76
Keynote-Mukesh Dhirubhai Ambani — Moderator’s opening remarks
S77
Open Forum #33 Building an International AI Cooperation Ecosystem — Dai Wei: Distinguished guests, ladies and gentlemen, good day to you all. I’m delighted to join you in this United Natio…
S78
Smart Regulation Rightsizing Governance for the AI Revolution — The discussion began with a notably realistic and somewhat pessimistic assessment of global cooperation challenges, but …
S79
Artificial intelligence (AI) – UN Security Council — Another critical area highlighted was the need for creating inclusive platforms for global collaboration. This involves i…
S80
AI Governance Dialogue: Steering the future of AI — Development | Sociocultural: Last year, the Nobel Prize for Chemistry was awarded to the developers of AlphaFold, an AI …
S81
High-Level Track Inaugural Leaders TalkX: Forging partnerships for purpose: advancing the digital for development landscape — Rae warns that the current digital divide will continue to widen unless governments take concrete steps to ensure broade…
S82
Dynamic Coalition Collaborative Session — Development | Economic | Infrastructure: Rajendra warns that without proper classification of certain technologies as di…
S83
Keynote-Brad Smith — In his keynote address at the first AI summit in the Global South, Microsoft Vice Chair and President Brad Smith focused…
S84
AI advancements and digital divide discussed at Samsung event in Paris — Samsung unveiled its latest range of foldable devices, earbuds, and wearables at the Louvre in Paris, followed by a pane…
S85
Google’s efforts to enhance Indian language data and combat AI bias — On 28 June, Google held a developer event in Bengaluru to showcase its Pathways Language Model (PaLM) to Indian develope…
S86
Uttar Pradesh partners with Google Cloud to revolutionise agriculture with open digital network — The Government of Uttar Pradesh, a state in northern India, and Google Cloud have partnered to launch a pioneering open ne…
S87
[Parliamentary Session 3] Researching at the frontier: Insights from the private sector in developing large-scale AI systems — Fuad Siddiqui: Thank you. Good morning. Yeah, I’m delighted to be here. And it’s always great to be back in Saudi. I …
S88
WS #462 Bridging the Compute Divide a Global Alliance for AI — Key lessons from GAVI included the importance of inclusive governance models, corrective mechanisms for historical inequ…
S89
Responsible AI for Shared Prosperity — And I hope the idea is spreading and growing. Thank you. And then we need to do some further work on the models t…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
James Manyika
11 arguments · 153 words per minute · 2285 words · 891 seconds
Argument 1
Coordination layer that lets AI translate intent into real‑world action
EXPLANATION
Manyika argues that digital public infrastructure and open networks act as a coordination layer, enabling AI systems to convert human intentions into concrete actions on the ground. This layer is essential for delivering population‑scale impact across sectors.
EVIDENCE
He states that digital public infrastructure and open networks provide the coordination layer that allows AI to translate human intent into real-world action, citing India’s UPI and Bashini networks as examples of such infrastructure [20-21].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Digital public infrastructure is highlighted as a central element for coordination in AI deployments, as discussed in [S21]; the emphasis on open digital public goods appears in [S14].
MAJOR DISCUSSION POINT
Role of coordination layer
AGREED WITH
Moderator, Sunil Wadhwani
Argument 2
Gemini‑powered multilingual agents for farmers; AI in pest surveillance and education reaching millions
EXPLANATION
Manyika describes a Gemini‑powered open network piloted in Uttar Pradesh that offers multilingual AI agents to farmers for credit, crop prediction and other services. He extends the impact to pest surveillance in agriculture and AI‑driven learning platforms that have already reached ten million students, with a target of 75 million by 2027.
EVIDENCE
He notes the pilot of a Gemini-powered open network for agriculture providing multilingual AI agents, and mentions AI integration into national pest surveillance and education platforms that have reached 10 million learners and aim for 75 million by 2027 [30-34][44-47].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The Gemini-powered open network pilot in Uttar Pradesh providing multilingual AI agents for farmers is described in [S18] and [S10]; the Mahavistar platform reaching over 2.5 million farmers with pest alerts is reported in [S19]; education reach of 10 million learners and a target of 75 million by 2027 is noted in [S7].
MAJOR DISCUSSION POINT
AI‑enabled agriculture and education
AGREED WITH
Nandan Nilekani, Kiran Mazumdar‑Shaw, Sunil Wadhwani
Argument 3
Project Vani provides free speech data for 100+ Indic languages to power multilingual AI
EXPLANATION
Manyika highlights Google’s collaboration with the Indian Institute of Science on Project Vani, which has released speech datasets covering more than 100 Indic languages, including 20 languages previously undocumented. This effort supports the development of multilingual AI services across India.
EVIDENCE
He explains that Project Vani has made speech data for over 100 Indic languages freely available, including 20 languages never recorded before, through the government’s Bashini mission [23-25].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Project Vani’s release of speech data for over 100 Indic languages, including 20 previously undocumented languages, is documented in [S10].
MAJOR DISCUSSION POINT
Language data for AI
Argument 4
Google.org grants fund open tools that facilitate data sharing and interoperable infrastructure
EXPLANATION
Manyika points to a $10 million Google.org grant that supports the Networks for Humanity Foundation in building universal tools such as FinInternet, asset tokenization platforms, and open network standards. These tools aim to create a global infrastructure for AI‑driven opportunity.
EVIDENCE
He mentions the $10 million Google.org grant announced last year that funds the Networks for Humanity Foundation to build universal tools for tomorrow, including FinInternet and open networks [39-40].
MAJOR DISCUSSION POINT
Funding open infrastructure
Argument 5
Indian AI blueprint is being localized for Brazil, Nigeria, Ethiopia, Kenya, etc.
EXPLANATION
Manyika states that the open‑network blueprint developed in India is being adapted for multiple countries across the Global South, demonstrating a scalable model for digital inclusion. He cites specific nations where the model is being rolled out.
EVIDENCE
He says the blueprint born in India is being localized across the globe to countries such as Brazil, Nigeria, Ethiopia, and Kenya [28-29].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The scaling of the Indian AI blueprint to Brazil, Nigeria, Ethiopia, Kenya and other countries is mentioned in [S10] and reinforced by the discussion of global digital public goods in [S14].
MAJOR DISCUSSION POINT
Global replication of Indian model
AGREED WITH
Sunil Wadhwani, Sangbu Kim
Argument 6
AlphaFold demonstrates how AI can accelerate biological discovery and inspire future virtual‑cell models
EXPLANATION
Manyika references AlphaFold’s breakthrough in solving protein‑structure prediction, which has been widely adopted by researchers worldwide. He connects this success to future aspirations of creating virtual cell models with AI.
EVIDENCE
He notes that AlphaFold solved the 50-year protein-structure challenge and that its database is used by over 3 million researchers in 190 countries, illustrating AI’s power in biology [14-15]; later he mentions discussions about building virtual cells with AI [154-155].
MAJOR DISCUSSION POINT
AI in biology
Argument 7
AI‑driven multilingual assistance is empowering 1.4 million frontline health workers with early warnings on child malnutrition.
EXPLANATION
Manyika highlights a large‑scale deployment of AI agents that provide real‑time, language‑specific alerts to health workers, improving preventive care for vulnerable children.
EVIDENCE
He notes that in healthcare, AI is empowering 1.4 million frontline workers with multilingual assistance that provides early warnings to combat child malnutrition across the country [43].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Deployment of AI assistance for 1.4 million frontline health workers with early malnutrition warnings is reported in [S7] and further referenced in [S10].
MAJOR DISCUSSION POINT
AI for frontline health empowerment
Argument 8
Integrating AI into the national pest‑surveillance system protects India’s most important crops at a national scale.
EXPLANATION
Manyika describes the use of AI to monitor and manage agricultural pests, thereby safeguarding staple crops and enhancing food security nationwide.
EVIDENCE
He states that in agriculture, AI is being integrated into the national pest surveillance system to protect India’s most important crops at a national scale [44-45].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Integration of AI into the national pest-surveillance system to protect staple crops is described in [S19] and highlighted in the broader discussion of AI-enabled pest alerts in [S7].
MAJOR DISCUSSION POINT
AI for agricultural pest management
Argument 9
AI‑enabled education platforms have already reached 10 million learners and aim to serve 75 million by 2027, transforming government‑owned education services.
EXPLANATION
Manyika points to a large‑scale AI‑driven learning initiative that is rapidly scaling, delivering high‑quality educational experiences to millions of students and teachers.
EVIDENCE
He reports that the AI-led transformation of government-owned education platforms has already reached 10 million students and educators, with a goal of empowering 75 million by the end of 2027 [46-47].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
AI-led education platforms reaching 10 million learners and targeting 75 million by 2027 are cited in [S7].
MAJOR DISCUSSION POINT
AI for education scaling
Argument 10
Open networks enable new market mechanisms such as peer‑to‑peer energy trading by simplifying transactions through AI agents.
EXPLANATION
Manyika illustrates how AI‑driven agents can turn complex energy‑trading concepts into simple commerce interfaces, allowing small producers like rooftop‑solar owners to sell excess power to others, thereby creating inclusive micro‑markets.
EVIDENCE
He describes a scenario where a farmer with rooftop solar can sell surplus energy to another party through an AI-driven agent, turning a previously unseen commerce concept into a simple transaction, highlighting the role of open networks in enabling this model [258-263].
MAJOR DISCUSSION POINT
AI‑enabled new economic models
Argument 11
Open networks allow rapid integration of improved AI models (e.g., weather forecasts) to reach millions of users instantly.
EXPLANATION
Manyika points out that when an open network like AgriConnect exists, the latest AI models—such as a more accurate weather prediction system—can be plugged in and immediately delivered to a massive user base, demonstrating the scalability of open‑network architectures.
EVIDENCE
He explains that with an open network for agriculture, plugging Google’s latest weather model into the system would instantly give about 10 million farmers access to the new forecasts, showing how open networks facilitate swift model deployment [250-254].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Open networks enabling instant plugging of improved weather models for millions of farmers is highlighted in [S7].
MAJOR DISCUSSION POINT
Scalable AI model deployment via open networks
Nandan Nilekani
5 arguments · 181 words per minute · 881 words · 290 seconds
Argument 1
Open networks let many innovators build AI applications; agents hide complexity for users
EXPLANATION
Nilekani argues that open networks enable a multitude of innovators to develop AI‑driven applications, while AI agents simplify interactions for end‑users by handling language and transaction complexity. This model drives massive diffusion of technology.
EVIDENCE
He explains that open networks allow many actors to build applications on the edge using AI and that agents remove complexity for users, enabling farmers or small producers to transact in their own language at massive scale [79-82].
MAJOR DISCUSSION POINT
Open networks and AI agents
AGREED WITH
Sang‑Boo Kim, James Manyika
DISAGREED WITH
Sunil Wadhwani
Argument 2
AI agents empower smallholder farmers to compete and transact in their own language
EXPLANATION
Nilekani emphasizes that multilingual AI agents allow smallholder farmers to access credit, market information and other services in their native language, leveling the playing field with larger platforms. This inclusion is key to population‑scale impact.
EVIDENCE
He describes a scenario where a farmer can easily transact with another party through an agent speaking their own language, achieving massive inclusion [80-82].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Multilingual AI agents for farmers in Uttar Pradesh are described in [S18] and [S10], illustrating how agents level the playing field for smallholders.
MAJOR DISCUSSION POINT
Empowering farmers
Argument 3
Inference cost must fall dramatically for AI to reach population scale
EXPLANATION
Nilekani stresses that the cost of AI inference must become extremely low for AI services to be affordable at scale, especially in developing markets. He predicts a shift of focus from model training to cheap inference.
EVIDENCE
He notes that serving a single query at a high cost (e.g., 500 rupees) is unsustainable, and calls for a dramatic reduction in inference costs as training becomes stable [246-249].
MAJOR DISCUSSION POINT
Cost of inference
AGREED WITH
James Manyika, Sunil Wadhwani
DISAGREED WITH
Sangbu Kim
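The affordability point can be made concrete with back-of-envelope arithmetic. The sketch below is illustrative only: the 500-rupee-per-query figure comes from the session, while the query volume (one million queries per day) and the cheap-inference target (0.5 rupees per query) are hypothetical assumptions chosen for scale.

```python
# Back-of-envelope sketch of the inference-cost argument. Only the
# 500-rupee-per-query figure comes from the session; the query volume
# and the cheap-inference target are hypothetical illustrations.

def annual_cost_crore(cost_per_query_rs: float, queries_per_day: int) -> float:
    """Yearly inference bill in crore rupees (1 crore = 10**7 rupees)."""
    return cost_per_query_rs * queries_per_day * 365 / 1e7

# At 500 rupees/query, a modest 1 million queries/day costs 18,250 crore/year.
expensive = annual_cost_crore(500, 1_000_000)
# At a hypothetical 0.5 rupees/query, the same load costs about 18 crore/year.
cheap = annual_cost_crore(0.5, 1_000_000)
print(expensive, cheap)
```

The thousand-fold gap between the two figures is the substance of the argument: population-scale services only pencil out once per-query inference becomes orders of magnitude cheaper.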
Argument 4
Multilingual agents eliminate language barriers, making AI usable for all citizens
EXPLANATION
Nilekani reiterates that when users interact with AI agents in their native language, the system hides all underlying complexity, achieving the “holy grail” of universal inclusion. This removes language as a barrier to AI adoption.
EVIDENCE
He repeats that a person speaking to an agent in their own language, with the agent handling transactions, would bring everyone onto the system, describing it as the holy grail [94-95].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The importance of multilingual AI to remove language barriers and enable universal access is discussed in [S7] and reinforced in [S10].
MAJOR DISCUSSION POINT
Language as barrier removal
Argument 5
Open digital public infrastructure enables novel economic models such as peer‑to‑peer energy trading, allowing small producers to sell excess solar power via AI agents.
EXPLANATION
Nandan illustrates how AI agents can simplify complex transactions, enabling individuals with rooftop solar to trade energy with others, thereby creating new inclusive markets.
EVIDENCE
He gives an example of energy trading where a farmer with rooftop solar can sell excess energy to another party through an AI-driven agent, turning a previously unseen commerce concept into a simple transaction [258-263].
MAJOR DISCUSSION POINT
AI‑enabled new market mechanisms
Sunil Wadhwani
7 arguments · 166 words per minute · 1463 words · 528 seconds
Argument 1
DPI supplies data pipelines and distribution channels essential for scaling AI models
EXPLANATION
Wadhwani explains that Digital Public Infrastructure (DPI) provides the data streams and delivery mechanisms needed for AI models to be trained and deployed at scale in the public sector. Without DPI, scaling AI solutions would be prohibitively costly.
EVIDENCE
He states that DPI offers data and data pipelines for AI, and also provides distribution channels that allow inference models to be deployed at scale, making public-sector AI usage affordable [170-174].
MAJOR DISCUSSION POINT
DPI as AI backbone
AGREED WITH
James Manyika, Moderator
DISAGREED WITH
Nandan Nilekani
Argument 2
AI‑driven TB diagnosis from cough sounds and rapid reading assessment for schoolchildren at scale
EXPLANATION
Wadhwani describes two concrete AI applications: a cough‑sound model that detects tuberculosis, increasing case detection by 25% nationally, and a reading‑assessment tool that diagnoses literacy gaps in 20 seconds at a cost of five paise per child, now being rolled out to millions of students.
EVIDENCE
He details the TB solution that uses cough sounds to flag cases, raising detection by 25% nationally, and the education tool that assesses reading ability in 20 seconds for 5 paise per student, now mandated for millions of children across several Indian states [196-205][210-214].
MAJOR DISCUSSION POINT
AI for health and education
AGREED WITH
James Manyika, Kiran Mazumdar‑Shaw
Argument 3
Demonstrated low‑cost scaling: 5 paise per student for reading diagnostics
EXPLANATION
Wadhwani highlights the ultra‑low cost of the AI‑based reading diagnostic, emphasizing that affordability is a critical factor for large‑scale deployment in low‑resource settings.
EVIDENCE
He notes that the reading assessment system operates at a cost of five paise per student, enabling rapid scaling to millions of learners [210-212].
MAJOR DISCUSSION POINT
Affordable AI scaling
AGREED WITH
Nandan Nilekani, James Manyika
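The five-paise figure is easy to translate into aggregate cost. A minimal sketch, using the per-student cost from the session but a hypothetical student count of ten million:

```python
# Illustrative arithmetic for the 5-paise-per-student reading diagnostic.
# The per-student cost comes from the session; the student count is a
# hypothetical example. 1 rupee = 100 paise.

PAISE_PER_STUDENT = 5

def total_cost_rupees(students: int) -> float:
    """Total assessment cost in rupees for a given number of students."""
    return students * PAISE_PER_STUDENT / 100

# Assessing 10 million children would cost about 5 lakh rupees in total.
print(total_cost_rupees(10_000_000))
```

At this unit cost, even statewide rollouts stay within the budget of a routine administrative program, which is why the panel treats affordability as the enabler of scale.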
Argument 4
DPI provides secure data pipelines that power AI solutions while respecting privacy
EXPLANATION
Wadhwani reiterates that DPI not only supplies data but does so through secure, consent‑based pipelines, ensuring privacy while enabling AI applications across health and education.
EVIDENCE
He mentions that DPI provides secure data pipelines and distribution channels essential for AI, emphasizing privacy-preserving data sharing [170-174].
MAJOR DISCUSSION POINT
Secure data pipelines
Argument 5
Growing demand from Global South governments to adopt AI platforms built on India’s DPI
EXPLANATION
Wadhwani observes a surge of interest from governments in Africa and Asia seeking to replicate India’s AI solutions, indicating that DPI‑based platforms are becoming a model for the Global South.
EVIDENCE
He reports an “incredible amount of incoming interest from governments throughout the global south… looking to India to provide these solutions” [321-324].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Interest from Global South governments to adopt India’s DPI-based AI solutions is noted in [S10] and the central role of digital public infrastructure in India is emphasized in [S21].
MAJOR DISCUSSION POINT
Global South demand
Argument 6
Multilingual AI assistance empowers frontline workers in health and other services
EXPLANATION
Wadhwani notes that multilingual AI tools are being used to support 1.4 million frontline health workers, delivering early warnings and assistance in local languages, thereby extending AI benefits to the most vulnerable.
EVIDENCE
He references empowering 1.4 million frontline workers with multilingual AI assistance for early warnings against child malnutrition [170-173].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
AI assistance for 1.4 million frontline health workers, providing early warnings in local languages, is mentioned in [S7] and [S10].
MAJOR DISCUSSION POINT
Frontline worker empowerment
Argument 7
More than 25 AI platforms across health, education and agriculture have been built on India’s DPI, demonstrating scalable solutions for the Global South.
EXPLANATION
Sunil highlights the breadth of AI applications that leverage digital public infrastructure, showing that a portfolio of platforms can be rapidly replicated to meet diverse development needs.
EVIDENCE
He mentions that they have developed over 25 AI platforms in India spanning education, healthcare and agriculture, and that there is growing interest from governments throughout the Global South to adopt these solutions [320-322].
MAJOR DISCUSSION POINT
Scalable DPI‑based AI platforms
AGREED WITH
James Manyika, Sangbu Kim
Sangbu Kim
4 arguments · 126 words per minute · 598 words · 282 seconds
Argument 1
Open‑stack, user‑centric services are crucial for affordable AI solutions
EXPLANATION
Kim argues that open‑standard, user‑centric architectures are essential to deliver efficient and affordable AI services, especially as the ecosystem moves from supplier‑oriented to customer‑oriented models.
EVIDENCE
She states that open standards and open networks are crucial for user-centric, efficient, and affordable AI solutions, and that this approach is vital for the AI era [105-107].
MAJOR DISCUSSION POINT
User‑centric open stacks
AGREED WITH
James Manyika, Nandan Nilekani
Argument 2
AgriConnect improves farmer services and is being extended to health and education sectors
EXPLANATION
Kim describes AgriConnect as an open‑stack platform that provides coherent services to farmers and is being expanded to health and education, positioning it as a universal network for multiple sectors.
EVIDENCE
She explains that AgriConnect offers farmer-oriented services via an open network and that the project is being looked at for expansion into health and education, aiming to become a universal network [102-109].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
AgriConnect’s farmer-oriented services and its planned expansion to health and education are described in [S10]; the Mahavistar platform’s cross-sectoral potential is highlighted in [S19].
MAJOR DISCUSSION POINT
AgriConnect expansion
Argument 3
Developing simple, replicable standards to export models like AgriConnect to other countries
EXPLANATION
Kim discusses the need to create lightweight, standardized models that can be quickly replicated in other nations, emphasizing the World Bank’s role in identifying and adapting the best components for global rollout.
EVIDENCE
She mentions the World Bank’s effort to find a simple, scalable model based on the Indian AgriConnect experience to replicate in other countries, using analogies of wine selection to illustrate standardization [230-236].
MAJOR DISCUSSION POINT
Standardization for replication
DISAGREED WITH
Nandan Nilekani
Argument 4
The AgriConnect open‑stack is being deliberately extended to health and education sectors, illustrating the potential of a universal network architecture.
EXPLANATION
Kim explains that the same open‑network principles powering AgriConnect are being adapted for other public services, showing how a single interoperable infrastructure can serve multiple domains.
EVIDENCE
She notes that while AgriConnect was initially farmer-oriented, the project is being looked at for expansion into health and education, aiming to become a universal network in the future [108-109].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Cross-sectoral extension of the AgriConnect open-stack to health and education is highlighted in [S10].
MAJOR DISCUSSION POINT
Cross‑sectoral expansion of open networks
Kiran Mazumdar‑Shaw
7 arguments · 0 words per minute · 0 words · 1 second
Argument 1
A consent‑based, open health data stack can be built on India’s digital infrastructure
EXPLANATION
Mazumdar‑Shaw proposes leveraging India’s existing consent‑based digital infrastructure (e.g., UPI) to create an open health data stack, enabling rapid risk profiling and integration with insurance using AI.
EVIDENCE
She notes that India’s open, consent-based data sharing model, already established in UPI, should be applied to healthcare to enable demographic risk profiling and insurance integration, tasks that only AI can perform efficiently [122-124][124-129].
MAJOR DISCUSSION POINT
Open health data stack
AGREED WITH
James Manyika, Sunil Wadhwani
Argument 2
AI enables rapid risk‑profiling, preventive medicine, and integration with insurance for universal health care
EXPLANATION
She highlights AI’s capacity to quickly generate risk profiles at a population level, which can be combined with insurance products to advance universal, preventive healthcare.
EVIDENCE
She explains that AI can rapidly risk-profile populations, integrate with insurance, and apply exclusion-inclusion criteria, thereby supporting universal health care delivery [124-129].
MAJOR DISCUSSION POINT
AI for preventive health
Argument 3
Local‑language AI is essential for inclusive health delivery
EXPLANATION
Mazumdar‑Shaw stresses that delivering AI‑driven health services in local languages is critical for reaching the broader population, aligning with the points made by other panelists about language inclusion.
EVIDENCE
She affirms that deploying AI for the common person, especially in health, requires local-language capabilities, echoing the importance of language for inclusion [130-132].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The necessity of local-language AI for inclusive health services is emphasized in [S7] and reinforced in [S10].
MAJOR DISCUSSION POINT
Language in health AI
AGREED WITH
James Manyika, Nandan Nilekani, Sunil Wadhwani
Argument 4
Global reluctance to share data; India’s open networks enable secure, consent‑based data sharing
EXPLANATION
She observes worldwide hesitancy to share data due to IP concerns, but argues that India’s open, consent‑based digital infrastructure uniquely positions it to facilitate large‑scale data sharing for AI applications.
EVIDENCE
She notes widespread data-sharing reluctance, but points out India’s unique open network and public digital infrastructure that can securely share massive datasets, especially environmental and agricultural data [288-293].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Challenges of data sharing and India’s consent-based open network model for secure data exchange are discussed in [S14] and [S21].
MAJOR DISCUSSION POINT
Data sharing challenges
Argument 5
Biology’s low‑energy, distributed computing offers lessons for AI; convergence will transform medicine
EXPLANATION
Mazumdar‑Shaw argues that biological systems compute using minimal energy across distributed networks, providing a model for more efficient AI. She envisions a convergence of biological intelligence and AI to revolutionize medicine, including cell reprogramming and regenerative therapies.
EVIDENCE
She describes biology’s distributed data centers operating on sips of energy, generational learning in DNA, and suggests that AI can learn from these mechanisms, leading to transformative medical advances such as reprogramming cancer cells [271-280].
MAJOR DISCUSSION POINT
Bio‑AI convergence
Argument 6
AI dramatically accelerates biological discovery and drug development, as shown by tools like AlphaFold and AlphaGenome.
EXPLANATION
Kiran argues that AI‑enabled platforms such as AlphaFold provide unprecedented insight into protein structures and genomics, opening new avenues for rapid drug discovery and biomedical research.
EVIDENCE
She points to AlphaFold and AlphaGenome as examples of AI offering immeasurable opportunities to understand biology and accelerate discovery [135-136].
MAJOR DISCUSSION POINT
AI‑driven biological research
Argument 7
AI can enable regenerative medicine and cell reprogramming, allowing conversion of cancer cells to non‑malignant forms and extending human lifespan.
EXPLANATION
Kiran envisions a future where AI learns from cellular signaling and distributed biological computation to reprogram cells, offering transformative therapies such as turning malignant cells benign and advancing longevity.
EVIDENCE
She describes the goal of reprogramming cancer cells, exploring regenerative science, and learning from biology’s low-energy distributed computing to create virtual cells and extend lifespan [141-145].
MAJOR DISCUSSION POINT
AI for regenerative medicine
Moderator
5 arguments · 121 words per minute · 338 words · 167 seconds
Argument 1
AI’s greatest promise is to deliver population‑scale impact across education, healthcare and agriculture.
EXPLANATION
The moderator states that the true potential of artificial intelligence lies in its ability to create large‑scale benefits for every citizen, transforming key sectors such as education, health and farming.
EVIDENCE
He opens the summit by declaring that AI’s potential is to deliver population-scale impact that transforms education, healthcare and agriculture for every citizen [1].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Population-scale impact of AI in education (10 million learners), health (frontline worker assistance), and agriculture (Mahavistar reaching millions) is illustrated in [S7] and [S19].
MAJOR DISCUSSION POINT
AI for population‑scale development
Argument 2
Realising AI impact requires a coordination layer built into digital public infrastructure.
EXPLANATION
The moderator argues that without coordinated systems embedded in the digital infrastructure, AI cannot be turned into actionable outcomes at scale.
EVIDENCE
He notes that impact is only possible when there is coordination built into the system and that open networks and digital public infrastructure can create a global, interoperable coordination rail powered by AI [2-3].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The need for a coordination layer within digital public infrastructure is stressed in [S14] and the centrality of digital public infrastructure in India’s AI strategy is noted in [S21].
MAJOR DISCUSSION POINT
Need for coordination in AI deployment
AGREED WITH
James Manyika, Sunil Wadhwani
Argument 3
Open networks and digital public infrastructure enable a global interoperable coordination rail that translates intent into action across borders.
EXPLANATION
By linking open networks with AI, the moderator describes a mechanism that can turn human intent into concrete actions worldwide, fostering cross‑border collaboration.
EVIDENCE
He frames the summit’s purpose as exploring how open networks and digital public infrastructure can create a global, interoperable coordination rail powered by AI to translate intent into action across borders [3].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Open networks as a global coordination rail are described in [S14]; the role of digital public infrastructure in cross-border AI action is highlighted in [S21].
MAJOR DISCUSSION POINT
Open networks as coordination layer
Argument 4
Inclusive AI built on open networks is essential for everyone to benefit.
EXPLANATION
The moderator stresses that the benefits of AI can only be realised when the technology is built for all, leveraging open networks to ensure universal access.
EVIDENCE
At the close of the event he reiterates that the true benefits of AI can only be realized when we build for everyone using open networks [340].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The emphasis on inclusive AI built on open networks is reflected in the discussion of digital public goods in [S14] and the broader narrative on open infrastructure in [S21].
MAJOR DISCUSSION POINT
Inclusive AI
Argument 5
Launching Google.org Impact Challenges for AI for Science and Government Innovation will catalyse further population‑scale solutions.
EXPLANATION
The moderator invites researchers and changemakers to apply for two Google.org Impact Challenges, positioning them as mechanisms to accelerate AI‑driven solutions for science and public sector innovation.
EVIDENCE
He announces the two Google.org Impact Challenges, one for AI for Science and one for Government Innovation, and encourages participants to learn more via a QR code [342-345].
MAJOR DISCUSSION POINT
Stimulating AI innovation through challenges
Kiran Mazumdar-Shaw
2 arguments · 145 words per minute · 1175 words · 485 seconds
Argument 1
AI‑driven health stack should deliver a sustainable, high‑quality universal health‑care system.
EXPLANATION
Kiran calls for the AI health ecosystem to produce a durable, affordable standard of care that reaches every citizen, emphasizing the need for universal health services powered by AI. She stresses that such a system must be sustainable over time and maintain high quality.
EVIDENCE
During the lightning-round she states, “I definitely want to see a sustainable standard of care, high-quality universal health care coming out of this AI effort and the health stack,” highlighting her vision for AI-enabled universal health care [311-314].
MAJOR DISCUSSION POINT
Universal, sustainable health care via AI
Argument 2
AI health solutions must be diagnostic, preventive, predictive and precision‑oriented to achieve universal care.
EXPLANATION
She expands on the components of the AI‑driven health system, insisting that it should provide early diagnostics, preventive interventions, predictive analytics, and precision treatments, rather than focusing solely on treatment after disease onset.
EVIDENCE
In the same lightning-round segment she adds, “Diagnostic, preventative, predictive, and precision, because you can’t do away with treatment,” outlining the four pillars she expects AI to support in health care [313-314].
MAJOR DISCUSSION POINT
Comprehensive AI health services
Agreements
Agreement Points
Open digital public infrastructure and open networks act as a coordination layer that enables AI to translate human intent into real‑world action at population scale.
Speakers: James Manyika, Moderator, Sunil Wadhwani
Coordination layer that lets AI translate intent into real‑world action
Realising AI impact requires a coordination layer built into digital public infrastructure.
DPI supplies data pipelines and distribution channels essential for scaling AI models
All three speakers stress that a coordination layer provided by digital public infrastructure or DPI is essential for turning AI insights into concrete actions for citizens, enabling population-scale impact. Manyika explicitly calls it a coordination layer that translates intent into action [20-21]; the Moderator notes that impact is only possible when coordination is built into the system [2-3]; Wadhwani explains that DPI provides the data pipelines and distribution channels that make large-scale AI deployment feasible [170-174].
POLICY CONTEXT (KNOWLEDGE BASE)
This view aligns with the IGF definition of Digital Public Infrastructure as a coordination layer for society-wide digital capabilities [S36], the UN’s call for common DPI frameworks as “digital building blocks” for inclusive services [S38], and observations that open standards and networks are crucial for user-centric AI services [S40]. Scaling to a billion users also requires open-internet infrastructure to keep costs low [S54].
Multilingual AI agents remove language barriers and simplify transactions, empowering farmers, frontline health workers and other citizens.
Speakers: James Manyika, Nandan Nilekani, Kiran Mazumdar‑Shaw, Sunil Wadhwani
Gemini‑powered multilingual agents for farmers; AI in pest surveillance and education reaching millions
Open networks let many innovators build AI applications; agents hide complexity for users
Local‑language AI is essential for inclusive health delivery
Multilingual AI assistance empowers 1.4 million frontline workers with early warnings on child malnutrition
The panelists agree that AI agents operating in local languages are key to inclusion. Manyika describes Gemini-powered multilingual agents for farmers and AI-assisted health workers [30-31][43]; Nilekani emphasizes agents that hide complexity and work in users’ own language [80-82][94-95]; Kiran stresses that health AI must be delivered in local languages [130-132]; Wadhwani cites multilingual AI tools supporting 1.4 million health workers [43][170-173].
POLICY CONTEXT (KNOWLEDGE BASE)
Examples such as AgriConnect in Uttar Pradesh demonstrate how open standards and multilingual agents deliver farmer-oriented services and reduce language friction [S40][S41]. The AI-for-agriculture public-infrastructure model being rolled out across the Global South further supports this empowerment narrative [S51], while broader social-change case studies highlight similar impacts in health and education [S44].
The cost of AI inference must be dramatically reduced to achieve affordable, population‑scale deployment.
Speakers: Nandan Nilekani, James Manyika, Sunil Wadhwani
Inference cost must fall dramatically for AI to reach population scale
Open‑network example shows low‑cost inference is key to massive diffusion
Demonstrated low‑cost scaling: 5 paise per student for reading diagnostics
All three highlight affordability of inference as a prerequisite for scaling. Nilekani warns that high per-query costs are unsustainable and calls for cheap inference [246-249]; Manyika points to low-cost inference enabling new market mechanisms like energy trading [256-263]; Wadhwani demonstrates ultra-low cost AI services (5 paise per student) as proof of feasibility [210-212].
POLICY CONTEXT (KNOWLEDGE BASE)
Policy discussions on “AI without the cost” stress that inference dominates AI compute budgets and must be cut for constrained environments [S55]. AWS’s inference-focused infrastructure also underscores inference as the primary cost driver [S56]. Hardware-economics constraints and the need to serve billions of users reinforce the urgency of cost reduction [S48][S54].
India’s open‑network and DPI blueprint is being replicated internationally, demonstrating a scalable model for the Global South.
Speakers: James Manyika, Sunil Wadhwani, Sangbu Kim
Indian AI blueprint is being localized for Brazil, Nigeria, Ethiopia, Kenya, etc.
More than 25 AI platforms across health, education and agriculture have been built on India’s DPI, demonstrating scalable solutions for the Global South.
AgriConnect is being expanded to health and education sectors and considered for replication in other countries.
The speakers concur that the Indian model is expanding abroad. Manyika notes the blueprint’s localisation to Brazil, Nigeria, Ethiopia and Kenya [28-29]; Wadhwani reports growing interest from Global South governments to adopt India-based AI platforms [321-324]; Kim describes efforts to adapt the AgriConnect open-stack for other sectors and countries [102-109][229-233].
POLICY CONTEXT (KNOWLEDGE BASE)
The World Bank and Indian ministries cite India’s open-protocol DPI as a proven, adaptable blueprint for other developing nations [S49][S50]. Gates Foundation “scaling hubs” leverage population-scale DPI to coordinate AI pilots across Africa, echoing India’s model [S47]. Policy-harmonisation frameworks further promote such cross-border replication [S46].
AI is being deployed at scale in health and education to improve outcomes and reach millions.
Speakers: James Manyika, Kiran Mazumdar‑Shaw, Sunil Wadhwani
AI‑driven education platforms have already reached 10 million learners and aim to serve 75 million by 2027
A consent‑based, open health data stack can be built on India’s digital infrastructure
AI‑driven TB diagnosis from cough sounds and rapid reading assessment for schoolchildren at scale
All three emphasize large-scale AI applications in health and education. Manyika cites AI-led education reaching 10 million learners with a target of 75 million [46-47]; Kiran proposes an open health data stack for risk profiling and universal care [122-129]; Wadhwani details AI-based TB detection and a 5 paise reading diagnostic scaling to millions of students [196-205][210-214].
POLICY CONTEXT (KNOWLEDGE BASE)
AI-for-social-good initiatives link large-scale health and education deployments to Sustainable Development Goals, noting measurable improvements in outcomes [S43]. Field reports from Africa describe similar scaling of AI-enabled health and education services [S44], while governance analyses caution that equitable benefit sharing must be ensured [S45].
Open standards and open‑stack architectures are essential for user‑centric, affordable AI services across sectors.
Speakers: Sang‑Boo Kim, James Manyika, Nandan Nilekani
Open‑stack, user‑centric services are crucial for affordable AI solutions
The success of these networks depends on a single fundamental principle: they must remain decentralized and open.
Open networks let many innovators build AI applications; agents hide complexity for users
The panelists agree that openness and decentralisation are foundational. Kim stresses open-standard, user-centric designs for affordable AI [105-107][108-109]; Manyika repeats that networks must stay decentralized and open [35-37]; Nilekani notes that open networks enable many innovators and simplify user interaction [79-82].
POLICY CONTEXT (KNOWLEDGE BASE)
Multiple sources stress that open standards enable user-centric services and foster innovation, as seen in AgriConnect and other open-network projects [S40][S41]. AMD’s commitment to open-standard hardware and software further illustrates industry backing for open stacks [S42]. The UN’s digital compact also calls for open, free, and secure digital foundations, reinforcing this stance [S38].
Similar Viewpoints
Both emphasize that open digital infrastructure provides a coordination layer where AI agents can translate intent into action while simplifying user interaction, enabling massive diffusion of technology [20-21][79-82].
Speakers: James Manyika, Nandan Nilekani
Coordination layer that lets AI translate intent into real‑world action
Open networks let many innovators build AI applications; agents hide complexity for users
Both highlight that affordable, scalable AI solutions in education are already reaching millions and can be expanded further, demonstrating the feasibility of low‑cost AI at population scale [46-47][210-212].
Speakers: James Manyika, Sunil Wadhwani
AI‑driven education platforms have already reached 10 million learners and aim to serve 75 million by 2027
Demonstrated low‑cost scaling: 5 paise per student for reading diagnostics
Both see AI as a tool to enhance health outcomes at scale, from risk profiling and preventive care to supporting frontline workers with multilingual assistance [122-129][43].
Speakers: Kiran Mazumdar‑Shaw, James Manyika
AI enables rapid risk‑profiling, preventive medicine, and integration with insurance for universal health care
AI‑driven multilingual assistance empowers 1.4 million frontline workers with early warnings on child malnutrition
Unexpected Consensus
Convergence of biology and AI as a transformative pathway for medicine.
Speakers: James Manyika, Kiran Mazumdar‑Shaw
AlphaFold demonstrates how AI can accelerate biological discovery and inspire future virtual‑cell models
Biology’s low‑energy, distributed computing offers lessons for AI; convergence will transform medicine
While Manyika references AlphaFold and the idea of building virtual cells with AI [14-15][154-155], Kiran discusses learning from biology’s low-energy distributed computation and the potential to reprogram cells, indicating a shared belief that biology-AI convergence will revolutionise medicine, a point not explicitly linked earlier in the discussion [271-280].
Peer‑to‑peer energy trading enabled by AI agents on open networks.
Speakers: James Manyika, Nandan Nilekani
Open‑network example shows low‑cost inference combined with agents that hide complexity is the key to massive diffusion
Open digital networks enable new market mechanisms such as peer‑to‑peer energy trading via AI agents
Both speakers unexpectedly converge on the idea that AI agents on open networks can facilitate novel economic models like rooftop-solar energy trading, a specific use-case not previously highlighted in the broader discussion of agriculture or finance [258-263][256-263].
POLICY CONTEXT (KNOWLEDGE BASE)
US-India partnership documents describe programmable power grids and AI-driven peer-to-peer energy trading as key enablers for AI data-center demand, highlighting India’s Energy Stack as a model for open-network coordination [S52][S53].
Overall Assessment

The panel exhibits strong consensus that open digital public infrastructure, open‑network standards, and multilingual AI agents are foundational for delivering population‑scale impact across health, education, and agriculture. Affordability—especially low inference costs—and data pipelines are repeatedly cited as critical enablers. There is also broad agreement on replicating India’s model internationally and on AI’s role in improving health and education outcomes.

High consensus: most speakers align on the necessity of open, decentralized infrastructure, multilingual agents, and affordable scaling to achieve inclusive AI impact. This unified stance reinforces the strategic direction of leveraging DPI and open networks as global public goods, suggesting that policy and investment efforts should prioritize these elements to realize AI’s population‑scale promise.

Differences
Different Viewpoints
Primary driver for scaling AI solutions: inference cost vs open‑network model integration
Speakers: Nandan Nilekani, James Manyika
Inference cost must fall dramatically for AI to reach population scale
Open networks enable rapid integration of improved AI models (e.g., weather forecasts) to reach millions of users instantly
Nandan stresses that without a dramatic reduction in AI inference cost, services will remain unaffordable at scale, urging a shift of focus from model training to cheap inference [246-249]. James counters by highlighting that the existence of open networks allows new, better models (such as a weather forecast) to be plugged in and instantly reach millions of users, implying that model integration, not just cost, is the key scaling lever [250-254].
POLICY CONTEXT (KNOWLEDGE BASE)
The tension mirrors policy debates where cost-constrained AI emphasizes inference reduction as the main scaling barrier [S55][S56], while other frameworks argue that integration with open-network DPI models is equally critical for affordable, large-scale deployment [S36][S40].
Which digital foundation is most essential: open networks vs Digital Public Infrastructure (DPI)
Speakers: Nandan Nilekani, Sunil Wadhwani
Open networks let many innovators build AI applications; agents hide complexity for users
DPI supplies data pipelines and distribution channels essential for scaling AI models
Nandan argues that open, decentralized networks are the coordination layer that lets innovators create AI agents which simplify transactions for end-users [79-82]. Sunil emphasizes that DPI provides the underlying data streams and secure distribution channels that make AI deployment affordable and scalable in the public sector [170-174]. The two positions highlight a tension between viewing open networks as the primary catalyst versus seeing DPI as the essential backbone.
POLICY CONTEXT (KNOWLEDGE BASE)
IGF and UN documents define DPI as the foundational digital capability for societies [S36][S38], whereas AI-for-social-good literature highlights open networks as the coordination layer that makes user-centric AI possible [S40][S54]. The debate reflects differing policy emphases on the same ecosystem.
Approach to global replication: simple, lightweight standards vs cost‑driven inference reduction
Speakers: Sangbu Kim, Nandan Nilekani
Developing simple, replicable standards to export models like AgriConnect to other countries
Inference cost must fall dramatically for AI to reach population scale
Sangbu stresses the need to create lightweight, standardized models that can be quickly replicated across countries, focusing on standardization and simplicity as the bottleneck [230-236]. Nandan, by contrast, points to the high per-query cost of inference as the main barrier to diffusion, suggesting that without cheaper inference the standardized models cannot be widely adopted [246-249].
POLICY CONTEXT (KNOWLEDGE BASE)
Replication strategies discussed in AI-for-social-good emphasize lightweight, open standards for rapid adoption across borders [S40][S48], while cost-focused AI policy papers argue that inference efficiency is the decisive factor for scalable impact [S55][S56].
Unexpected Differences
Attitudes toward data sharing: global reluctance vs perceived demand for Indian DPI‑based solutions
Speakers: Kiran Mazumdar‑Shaw, Sunil Wadhwani
Global reluctance to share data; India’s open networks enable secure, consent‑based data sharing
Growing demand from Global South governments to adopt AI platforms built on India’s DPI
Kiran notes a widespread hesitancy to share data internationally due to IP concerns, positioning India’s consent-based open network as a rare solution [288-293]. Sunil, however, reports an “incredible amount of incoming interest” from Global South governments eager to adopt AI platforms built on India’s DPI, suggesting a readiness to engage with Indian data ecosystems [321-324]. The contrast between perceived global data-sharing resistance and actual demand for Indian DPI-based tools is unexpected.
Overall Assessment

The panel largely shares a common vision of using AI at population scale to transform health, education, and agriculture. Disagreements surface around which technical foundation is most critical—whether cheap inference costs, open‑network integration, or DPI‑driven data pipelines—and around the primary bottleneck for global replication (standardization versus inference affordability). A surprising tension appears in views on data‑sharing attitudes versus demand for Indian DPI solutions.

Moderate disagreement: while there is broad consensus on goals, the speakers diverge on the strategic levers needed to achieve them. These differences could shape policy and investment priorities, influencing whether future efforts focus on reducing inference costs, building open‑network standards, or strengthening DPI ecosystems.

Partial Agreements
All speakers concur that AI can deliver population‑scale impact in health, education, and agriculture, but they diverge on the mechanism: James emphasizes a coordination layer provided by digital public infrastructure and open networks [20-21]; Nandan highlights open networks enabling innovators and multilingual agents [79-82]; Sunil points to DPI’s data pipelines and distribution channels as the backbone [170-174]; Kiran proposes leveraging India’s consent‑based digital infrastructure for an open health data stack [122-124]; Sangbu stresses open‑stack, user‑centric standards for affordable AI services [105-107].
Speakers: James Manyika, Nandan Nilekani, Sunil Wadhwani, Kiran Mazumdar‑Shaw, Sangbu Kim
Coordination layer that lets AI translate intent into real‑world action
Open networks let many innovators build AI applications; agents hide complexity for users
DPI supplies data pipelines and distribution channels essential for scaling AI models
A consent‑based, open health data stack can be built on India’s digital infrastructure
Open‑stack, user‑centric services are crucial for affordable AI solutions
Takeaways
Key takeaways
Open Digital Public Infrastructure (DPI) and open, interoperable networks are essential coordination layers that enable AI to translate human intent into real‑world actions at population scale.
AI acts as a multiplier across agriculture, health, and education when embedded in multilingual, low‑cost agents that hide complexity for end‑users such as farmers, frontline health workers, and students.
Language localization is critical; initiatives like Project Vani and multilingual agents remove language barriers and broaden AI accessibility.
The cost of AI inference must drop dramatically for widespread adoption; open networks allow new, cheaper models (e.g., weather forecasts) to be plugged in for millions of users.
Secure, consent‑based data pipelines provided by DPI are the backbone for scaling AI solutions while respecting privacy.
India’s DPI‑based AI blueprint is being replicated in other countries (Brazil, Nigeria, Ethiopia, Kenya, etc.), demonstrating a path toward global standards and replication.
Convergence of biological intelligence and AI promises transformative advances in medicine, from predictive health to virtual‑cell modeling.
Public‑sector partnerships (Google‑World Bank, Wadhwani Institute, Networks for Humanity) and grant mechanisms (Google.org) are driving the development and scaling of open AI tools.
Resolutions and action items
Continue expanding multilingual AI agents on open networks for farmers, health workers, and educators.
Scale the AI‑driven TB diagnosis and reading‑assessment pilots nationally and to additional states/countries, leveraging DPI data platforms.
Pursue the development of a consent‑based, open health data stack built on India’s DPI to enable risk profiling and insurance integration.
Work with the World Bank to define simple, replicable standards for AgriConnect and other sector‑agnostic open‑stack solutions for export to the Global South.
Encourage researchers and changemakers to apply for the Google.org Impact Challenges (AI for Science and Government Innovation).
Maintain and fund the Networks for Humanity Foundation to develop universal open‑network tools (FinInternet, asset tokenization, etc.).
Unresolved issues
How to achieve a dramatic reduction in AI inference costs while maintaining model quality and privacy safeguards.
Establishing globally accepted standards for open‑network APIs and data formats that can be adopted across diverse regulatory environments.
Overcoming reluctance to share data internationally; mechanisms for secure, consent‑based data sharing beyond India remain to be defined.
Ensuring that AI agents can reliably handle code‑mixed language inputs (e.g., Hindi‑English‑Tamil blends) at scale.
Operationalizing AI‑enabled insurance products linked to health risk profiling in low‑resource settings.
Long‑term governance models for decentralized, open networks that balance openness with accountability and security.
Suggested compromises
Adopt an open‑network architecture that remains decentralized and interoperable while embedding consent‑based privacy controls to address data‑sharing concerns.
Focus on low‑cost inference solutions (e.g., model compression, edge computing) as a compromise between high‑performance models and affordability for mass deployment.
Leverage existing Indian DPI (e.g., UPI, NIXI) as a testbed for standards, then iteratively adapt them for other countries, balancing global standardization with local customization.
Thought Provoking Comments
AlphaFold … solved the 50‑year grand challenge of protein structure prediction. The freely available AlphaFold protein database has been used by more than 3 million researchers in over 190 countries, with India the fourth largest adopter.
Illustrates how open scientific resources can accelerate global research and create tangible impact at scale, setting a concrete example of AI’s transformative power.
Established a benchmark for open AI tools, prompting other panelists to reference open data (e.g., Nandan’s language initiatives) and framing the discussion around the importance of freely shared AI outputs.
Speaker: James Manyika
AI agents on an open network are the fundamental construct for massive diffusion of technology. They remove complexity for the user, allowing a farmer or small electricity producer to transact in their own language.
Connects the abstract concept of AI as a general‑purpose technology to a practical mechanism—agents—that can democratize access across diverse, multilingual populations.
Shifted the conversation toward user‑centric design and language inclusion, leading James to highlight language barriers and prompting further discussion on multilingual AI infrastructure.
Speaker: Nandan Nilekani
The cost of AI inference has to drop dramatically. If serving a single query costs hundreds of rupees, it won’t work at population scale. We need cheap, low‑cost inference combined with agents that hide complexity.
Identifies a critical bottleneck—affordable inference—that is often overlooked in hype around larger models, emphasizing economic feasibility for large‑scale deployment.
Prompted a deeper dive into scalability, influencing Sunil’s examples of low‑cost AI solutions (5 paise per student) and reinforcing the need for efficient infrastructure.
Speaker: Nandan Nilekani
We diagnosed TB from the sound of a cough using a smartphone, increasing detection by 25 % nationally, and built AI models that predict which patients will drop off medication, all enabled by the NIXA data platform.
Provides a concrete, high‑impact case where digital public infrastructure (DPI) and AI directly improve health outcomes, illustrating the practical value of the discussed concepts.
Served as a turning point from abstract discussion to real‑world results, reinforcing the argument for DPI and inspiring other participants to cite similar scalable interventions.
Speaker: Sunil Wadhwani
AI can risk‑profile populations at a demographic level, integrate with insurance, and empower ASHA health workers—turning India’s health stack into a universal, preventive care system.
Links AI capabilities to systemic health reforms, highlighting how data, risk modeling, and frontline worker empowerment can create a sustainable universal healthcare model.
Expanded the conversation from isolated AI tools to systemic health ecosystem design, prompting James to ask about biology‑AI convergence and reinforcing the theme of AI‑enabled public services.
Speaker: Kiran Mazumdar‑Shaw
Biology operates as distributed data centers using sips of energy, with generational learning encoded in DNA. AI should learn from this low‑energy, multimodal processing, and the convergence of biological and artificial intelligence will be transformational.
Introduces a visionary perspective that flips the usual AI‑dominant narrative, suggesting biology as a model for efficient, scalable intelligence.
Elevated the discussion to a speculative, interdisciplinary level, inspiring participants to consider cross‑domain learning and setting up the final “lightning‑round” aspirations.
Speaker: Kiran Mazumdar‑Shaw
AgriConnect is built on an open stack and network, making it a user‑centric service that can be replicated in other sectors like health and education, and across countries by finding a simple, standardized model.
Frames open networks as a universal platform for sectoral innovation, emphasizing replicability and standardization as keys to global scaling.
Guided the dialogue toward multi‑country scalability, leading James to ask about multi‑country rollout strategies and prompting panelists to articulate concrete expansion goals.
Speaker: Sang‑Boo Kim
In the next 12 months I’d like to see massive diffusion where AI applications on open networks reach millions of farmers worldwide, demonstrating AI as a force for good.
Summarizes the core mission of the summit—population‑scale impact—and sets a clear, measurable ambition that aligns all participants.
Concluded the discussion with a unifying call to action, reinforcing earlier points about scalability, affordability, and open infrastructure.
Speaker: Nandan Nilekani (lightning‑round)
Overall Assessment

The discussion was driven forward by a series of pivotal remarks that moved the conversation from high‑level vision to concrete, scalable solutions. James Manyika’s opening example of AlphaFold established the power of open AI resources, which Nandan Nilekani expanded into a user‑centric model of AI agents and highlighted the economic necessity of cheap inference. Sunil Wadhwani’s real‑world TB and education pilots demonstrated how Digital Public Infrastructure can operationalize these ideas, while Kiran Mazumdar‑Shaw linked AI to systemic health transformation and introduced a bold biological analogy that broadened the conceptual horizon. Sang‑Boo Kim reinforced the need for standardised, replicable open networks across sectors and geographies. Together, these comments created a logical progression: from the promise of open AI, through the mechanisms of agents and infrastructure, to tangible health and education impacts, and finally to a shared vision for rapid, population‑scale diffusion. This sequence shaped the summit’s narrative, aligning all participants around the central thesis that open, affordable, and interoperable AI infrastructure is essential for global, inclusive progress.

Follow-up Questions
What global standards are needed to scale local AI solutions for smallholder farmers and other sectors?
Understanding the necessary interoperable standards is crucial for replicating successful pilots like AgriConnect across different countries and contexts.
Speaker: James Manyika
How can AI and health data stacks be connected to fundamentally transform medicine and achieve universal, preventive, and precision healthcare?
Integrating diverse health data (phenotypic, genomic, demographic, radiological) with AI could enable risk profiling, insurance models, and large‑scale preventive care, but requires a clear roadmap.
Speaker: James Manyika
What role do Digital Public Infrastructure (DPI) and open networks play in developing and scaling AI‑driven solutions for societal challenges such as health, education, and agriculture?
Clarifying how DPI provides data pipelines and distribution channels is essential to replicate impact at population scale.
Speaker: James Manyika
How can successful AI solutions (e.g., TB diagnosis, reading‑assessment tools) be scaled to multiple countries while maintaining effectiveness and affordability?
Identifying replication strategies, partnership models, and localization requirements is needed to extend India‑originated pilots to the Global South.
Speaker: James Manyika
Why is cheap AI inference critical for population‑scale impact, and what lessons does this hold for developers of frontier models?
Inference cost determines whether AI services can be offered affordably to billions; insights are needed on model optimization, hardware, and open‑network integration.
Speaker: James Manyika
How can AI agents effectively handle multilingual and code‑switching language inputs common in Indian contexts?
Addressing mixed‑language utterances is vital for inclusive AI agents that serve farmers, energy producers, and other users in their native linguistic styles.
Speaker: Nandan Nilekani
What data‑sharing frameworks and governance models are needed to overcome IP reluctance and enable open networks for health, agriculture, and other sectors?
Creating trusted, consent‑based mechanisms for sharing large health and agricultural datasets is a prerequisite for scaling AI solutions while protecting privacy and IP.
Speaker: Kiran Mazumdar‑Shaw
How can AI learn from biological distributed intelligence to improve energy efficiency, multimodal processing, and generational learning?
Studying cellular information processing could inspire low‑energy AI architectures and new learning paradigms, a research direction at the intersection of biology and AI.
Speaker: Kiran Mazumdar‑Shaw
How can AI‑driven risk profiling be integrated with insurance instruments to create sustainable health financing models?
Linking demographic risk scores to insurance products could improve coverage and affordability, but requires policy, actuarial, and technical research.
Speaker: Kiran Mazumdar‑Shaw
What are the key components of a standardized, lightweight AI model (e.g., for AgriConnect) that can be quickly replicated across diverse countries?
Identifying minimal viable features, data requirements, and open‑stack interfaces will facilitate rapid deployment in varied regulatory and infrastructural environments.
Speaker: Sang‑Boo Kim
How can quality control and standardization be ensured when scaling AI solutions globally, analogous to selecting the right ‘wine’ for each market?
Developing metrics and validation frameworks is needed to match AI services to local needs and maintain performance across deployments.
Speaker: Sang‑Boo Kim
How can the cost of AI‑enabled services (e.g., 5 paise per student) be kept ultra‑low while scaling to millions of users?
Research into frugal AI models, edge deployment, and efficient data pipelines is required to sustain low per‑user costs at scale.
Speaker: Sunil Wadhwani
What mechanisms are needed to embed AI directly into digital rails (e.g., multilingual assistance for frontline workers) to reach millions of citizens?
Designing APIs, integration layers, and governance for AI agents within existing DPI will enable seamless, large‑scale delivery of AI services.
Speaker: Sunil Wadhwani

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

AI That Empowers Safety, Growth and Social Inclusion in Action


Session at a glance: Summary, keypoints, and speakers overview

Summary

The panel opened by highlighting the urgent, day-to-day challenges of AI and the need for global standards, public-private collaboration and rights-based approaches to achieve responsible AI with real-world impact [1][2]. Speakers stressed that effective AI governance requires careful deliberation, stakeholder engagement and the sharing of good practices by companies to avoid pitfalls [4-6]. They affirmed that companies must respect human rights through due diligence while governments should create a level playing field and incentivise responsible behaviour [9-13].


UNESCO emphasized that trust in AI is built through design choices, safeguards and accountability, and introduced its Readiness Assessment Methodology Reports (RAMS) that map regional AI landscapes in over 80 countries [32-34]. To translate the UNESCO ethics recommendation into practice, a massive open online course (MOOC) on AI ethics by design is being launched, teaching learners to embed fairness, transparency and inclusion early in the development cycle [36-42].


The newly-mandated UN Global Dialogue on AI Governance identified four priority areas: safe and trustworthy systems, closing capacity gaps, cross-border governance and anchoring AI in human-rights law [66-73]. Representing India’s tech sector, NASSCOM described its 2021-initiated mission to build open assets, develop capacity across government, startups and SMEs, and promote responsible AI adoption throughout the ecosystem [94-105]. Google outlined its corporate policy that commits to the UN Guiding Principles and UNESCO/OECD frameworks, embedding these values in AI principles and operational processes across product teams [130-138]. Microsoft recounted the evolution of its Office of Responsible AI since 2018, the Sensitive Use Case program and ITER ethics committee, and noted that its work is informed by OECD principles and UNESCO recommendations [169-184]. Externally, Microsoft cited voluntary commitments from AI summits, OECD hyper-reporting tools and recent Indian multilingual safety initiatives that reinforce inclusion and risk-based testing [188-200].


The World Benchmarking Alliance reported that while many firms publish AI principles, only a small fraction meet global governance standards or disclose human-rights impact assessments, underscoring the need for stronger incentives and board-level oversight [224-229]. It recommended that investors demand clear AI governance at the board level, concrete product-level implementation and robust human-rights impact assessments to close existing gaps [236-241]. Across the discussion, participants agreed that collaborative, multi-stakeholder engagement, spanning companies, civil society, academia and regulators, is essential to move from good intentions to actionable, inclusive AI systems [311-345].


Keypoints


Major discussion points


Global norms and multi-stakeholder governance are essential for responsible AI.


The opening remarks stress that “global standards, collaborative public-private solutions, and rights-based approaches can enable responsible AI” and that both companies and governments must create “clear rules… and alignment around the global norms” [2][9-11][13]. The UN-mandated Global Dialogue on AI Governance highlights four member-state priorities – trustworthy AI, capacity-building, cross-border governance, and anchoring AI in human rights – and frames standards as the bridge from principle to practice [67-74][78].


Capacity-building and education tools are needed to translate standards into everyday practice.


UNESCO’s RAMS assessments and the new massive open online course (MOOC) on AI ethics are presented as concrete ways to move “beyond theory and towards this responsible human-centred deployment of AI” [32-38][39-44]. LG’s contribution echoes this by stressing a “practitioner-focused” MOOC that bridges the gap between abstract standards and day-to-day work [209-213].


Companies are implementing layered internal governance to embed responsible AI.


Google describes a hierarchy of model-level requirements, application-level guardrails, executive review, and post-launch monitoring [149-162]. Microsoft outlines its Office of Responsible AI, the Sensitive Use Case program, and board-level oversight, all built on UN-based principles [168-184]. LG and Google also note programmatic stakeholder engagement, trusted-tester schemes, and open-source tools for language inclusion [300-307][308-310].


Investors and benchmarking bodies can drive accountability through market incentives.


The World Benchmarking Alliance reports that only ~10 % of the 2,000 assessed tech firms meet global governance expectations and none disclose human-rights impact assessments, underscoring the need for “board-level responsibility, aligned executive incentives, and robust AI-specific impact assessments” [226-241].


Inclusion, especially linguistic and cultural diversity, and civil-society partnership are critical gaps.


Participants point to the “language issue” and the need for multilingual safety tools, citing Microsoft’s community-led benchmarks in India and LG’s annual transparency report as examples of collaborative, culturally-aware practice [188-199][276-285][290-296].


Overall purpose / goal


The session is a convening of UN bodies, industry leaders, and civil-society representatives to share concrete practices, highlight gaps, and mobilise coordinated action so that AI development and deployment are governed by human-rights-based standards, are inclusive, and deliver real-world benefits across all economies [1][15][24][85-88].


Overall tone


Opening (0-5 min): Formal, optimistic, and forward-looking, emphasizing shared responsibility and the promise of standards [1-5][9-13].


Middle (5-30 min): Becomes more technical and candid as speakers detail specific tools, internal processes, and the challenges of scaling responsible practices [32-44][149-162][168-184][220-241]. The tone shifts to a problem-solving mode, acknowledging “obstacles” and “gaps” while showcasing concrete initiatives.


Closing (30-50 min): Reflective and motivational, urging continued dialogue, broader participation, and concrete action, ending on a call-to-action for all stakeholders [316-345][350-363].


Overall, the discussion moves from a high-level, collaborative framing to detailed, sometimes critical examinations of implementation, and finishes with a unifying, inspirational appeal to sustain momentum.


Speakers

Rein Tammsaar – Permanent Representative of Estonia; co-chair of the United Nations Global Dialogue on AI Governance. [S1][S2]


Namit Agarwal – Representative of the World Benchmarking Alliance, focusing on AI governance, investment incentives and accountability.


Peggy Hicks – Director, Office of the United Nations High Commissioner for Human Rights (OHCHR); moderator of the panel. [S5][S6]


Parvati Adani – Representative from Shardul Amarchand Mangaldas (law firm), delivered the concluding remarks.


Tim Curtis – Regional Director for UNESCO South Asia; co-sponsor of the session. [S11][S12]


Ankit Bose – Senior executive representing NASSCOM, India’s technology industry association.


Hector Duroir – Director of Responsible AI Public Policy, Microsoft.


Alex Walden – Global Head of Human Rights, Google. [S17]


Yuchil Kim – Vice President, AI Research, LG.


Additional speakers:


Praveen – Mentioned in the closing remarks; role not specified.


Dhani – Mentioned in the closing remarks; role not specified.


Allie – Addressed by Peggy Hicks near the end; role not specified.


Full session report: Comprehensive analysis and detailed insights

The session opened with Peggy Hicks (BTEC, Office of the High Commissioner for Human Rights) reminding participants that AI-related challenges affect people’s daily lives [?] and that “global standards, collaborative public-private solutions, and rights-based approaches can enable responsible AI with meaningful real-world impact” [2]. She emphasized that responsible outcomes do not happen automatically; they require “deliberation… engagement” to avoid pitfalls [4-5] and that companies must share “good practices” while governments create a “level playing field” and incentives for responsible conduct [12-14]. Hicks framed the BTEC project as a mechanism to convene stakeholders, extract best practices and feed them back into policy, noting that the work is anchored in UN tools such as the UN Guidelines and UNESCO’s AI ethics recommendations [15][23-24].


Tim Curtis (UNESCO) shifted the discussion to the foundations of trust, arguing that “trust is not something technology earns through ambition alone but really it is earned through design choices, through safeguards and accountability” [32]. He thanked the Office of the High Commissioner for Human Rights for its support [?] and explained that UNESCO’s Readiness Assessment Methodology Reports (RAMS) have been produced for more than 80 countries, providing a “clear-eyed look at how regional landscapes can evolve” and moving the debate “beyond theory” [33-35]. Curtis noted that the RAMS include an assessment of India [?] and that partner institutions such as Oxford and the Alan Turing Institute contributed [?]. To translate the UNESCO ethics recommendation into practice, UNESCO and LG AI Research are launching a massive open online course (MOOC) on “ethics-by-design” [?] that will be delivered on Coursera [?] and is “accessible to a wide global audience and provides practical, day-to-day tools” [?]. The MOOC is intended for practitioners who need concrete tools to bridge the gap between abstract standards and daily work [209-213].


Rein Tammsaar (UN-mandated Global Dialogue on AI Governance) outlined the Dialogue’s four member-state priorities: (i) safe, secure, and trustworthy AI systems; (ii) closing capacity gaps in developing economies; (iii) interoperable, cross-border governance; and (iv) anchoring AI in human-rights law, including protection of vulnerable groups [66-74]. He positioned the Dialogue as a “platform where governments and stakeholders exchange best practices” to strengthen international cooperation and reduce digital divides [65-66] and stressed that standards turn principles into action, shaping risk management, accountability and human-oversight [78-79].


Ankit Bose (NASSCOM) described the association’s 2021-initiated mission to fill the “responsible, trust, human element” gap that emerged as AI proliferated [95-98]. NASSCOM’s core objectives are to develop open assets, build capacity across government, startups and SMEs, and promote early adoption of responsible AI governance [98-104]. Bose highlighted internal silos – tech, business, legal, finance – that impede coherent action and argued that “collaboration… use-case by use-case” is needed, especially for high-impact projects [110-118]. He warned that many national and sectoral frameworks leave developers “lost in the framework” and called for “concrete, actionable guidance” [258-267].


Alex Walden (Google) detailed how the company embeds human-rights-based values into its AI lifecycle. Google has a corporate policy committing to the UN Guiding Principles on Business and Human Rights [?] and an internal set of AI principles that operationalise those values across products such as Cloud, YouTube and Search [135-137]. Governance is layered: model-level requirements and testing, application-level guardrails, executive review of risks before launch, and continuous post-launch monitoring to catch “novel or residual risks” [149-162]. Walden also described programme-level tools – regular stakeholder-engagement processes, “trusted-tester” schemes that give third parties early access, and the Impact Lab’s “Amplify Initiative” which lets communities fine-tune language models in an open-source fashion [300-310].


Hector Duroir (Microsoft) traced the evolution of Microsoft’s responsible-AI framework from its 2018 inception, when “codes, directives, regulations… were not yet there” [170-172], to the creation of the Office of Responsible AI (2019) and the Sensitive Use Case programme that triages high-risk applications and escalates them to the ITER ethics committee involving senior leadership [179-182]. Microsoft aligns its standards with the OECD AI Principles and UNESCO’s recommendation [184-185] and leverages voluntary commitments signed at AI summits – including Bletchley Park and South Korea [188-192] – to ground model testing against public-safety and national-security risks. Duroir also cited the OECD hyper-reporting framework and India’s recent voluntary commitment to multilingual safety evaluations, which “encourages companies to forge multilingual capabilities” [193-199].


Yuchil Kim (LG) echoed the need for practitioner-focused tools, noting that LG’s AI-powered data-compliance system is part of its responsible-AI toolkit [?] and that its annual AI ethics accountability report (now in its third edition released “yesterday”) provides “the best standard risk” guidance and transparent documentation of AI activities [210-213]. Kim stressed that the UNESCO MOOC will “bridge the gap” for practitioners who struggle to apply abstract standards in daily work [209-210] and that transparency, inclusive AI and multilingual considerations are central to LG’s roadmap [214-218].


Namit Agarwal (World Benchmarking Alliance) presented the results of its latest assessment of 2,000 tech firms: while roughly 40 % disclose AI principles, only just over 10 % meet global governance expectations and none publish human-rights impact assessments [226-229]. From this gap, WBA derived three investor-focused recommendations: (i) board-level AI risk responsibility and aligned executive incentives; (ii) product-level governance checks that translate ethical principles into concrete strategies; (iii) robust AI-specific human-rights impact assessments with public summaries [236-241]. Agarwal argued that “capital can definitely incentivise innovation and responsibility, but capital alone cannot do that” and called for a “race to the top” driven by clear market expectations [226-232].


Across the panel, participants repeatedly agreed that global norms and practical safeguards are essential for AI to work for all people, not only advanced economies. This consensus was voiced by Hicks, Curtis, Tammsaar, Walden, Duroir and Kim, who all linked UNESCO recommendations, UN Guiding Principles and OECD standards to concrete safeguards [2][9-11][13][32][67-74][138-140][184-185][209-213]. They also concurred that capacity-building tools such as the UNESCO RAMS assessments, the forthcoming MOOC, and NASSCOM’s ecosystem-wide training are vital to turn theory into practice [32-35][36-44][89-105][209-213]. Finally, there was broad agreement that multi-stakeholder engagement, including civil society, academia, NGOs and investors, is indispensable for inclusive, culturally aware AI, as reflected in the statements of Walden, Duroir, Tammsaar, Kim and Agarwal [300-310][276-285][73-75][209-213][322-338].


Points of disagreement:


Regulation vs. voluntary commitments – Hicks calls for “clear, enforceable rules… and alignment around the global norms” [9]; Duroir emphasizes “voluntary commitments… at AI summits” as the primary mechanism to operationalise standards [188-192].


Proliferation of frameworks – Bose says developers are “lost in the framework” and need “concrete, actionable guidance” [258-267]; Curtis maintains that UNESCO’s RAMS and the MOOC already provide a unified foundation [32-35][37-44].


Incentive design – Hicks promotes a broad “race to the top” through market rewards [13]; Agarwal insists that incentives must be tied to specific board-level governance, executive incentives and impact-assessment requirements [226-232].


Thought-provoking remarks shaped the tone of the discussion. Curtis’s framing of trust as a design problem [32] set the agenda for concrete engineering solutions. Tammsaar’s succinct articulation of the four UN-derived priorities [66-73] gave the panel a shared roadmap. Walden’s description of Google’s multilayered governance (model-level checks, executive sign-off and post-launch monitoring) provided a vivid example of operationalising ethics [149-162]. Duroir’s account of Microsoft’s Sensitive Use Case triage and ITER committee illustrated board-level oversight [179-184]. Agarwal’s data point that “only about 10 %… meet global governance expectations and none disclose human-rights impact assessments” [226-229] underscored the compliance gap. Parvati Adani’s philosophical probe, asking an AI tool whether it has ethical limits and receiving “I don’t know” [322-332], reminded the audience that AI lacks self-awareness and therefore requires human governance. Kim’s African proverb, “If we want to go fast, go alone. If we want to go far, go together,” encapsulated the collaborative spirit needed for a trustworthy ecosystem [294-296].


Concrete next steps were identified:


– UNESCO’s MOOC will be delivered on Coursera, with an open invitation for learners and partners [36-38].


– The UN Global Dialogue on AI Governance will reconvene in Geneva in July [46-50].


– Companies such as LG and Microsoft pledged to publish annual AI-ethics accountability reports (LG’s third edition is already released) [210-213][276-283].


– Microsoft will continue community-led benchmark projects like Samishka in India to develop multilingual safety tools [282-285].


– NASSCOM will expand capacity-building workshops and open-asset libraries for startups, SMEs and government agencies [98-105].


– The World Benchmarking Alliance will circulate its three-step investor engagement framework (board oversight, product-level checks, impact assessments) to catalyse market-based incentives [236-241].


– Participants agreed to share best-practice case studies with the WBA for inclusion in future benchmarking reports [?].


Unresolved issues remain. There is no consensus on how to harmonise the growing number of national and sectoral AI frameworks into a single actionable roadmap for developers. Financing mechanisms to close capacity gaps-particularly infrastructure, compute and talent in developing-country firms-were not settled. A standardised, auditable methodology for AI-specific human-rights impact assessments is still lacking. Scaling responsible-AI processes for small startups without over-burdening them, and establishing clear ownership and frequency for post-launch monitoring, also require further work. Finally, integrating multilingual and informal language contexts into safety tools beyond ad-hoc community projects remains an open challenge.


In closing, Peggy Hicks urged participants to translate the day’s insights into action, reminding them that “AI innovation will work if there’s trust and if the companies that are delivering it actually invest in delivering products that will really give us human dignity” [362]. She thanked the participants and closed the session [?]. The panel reaffirmed that responsible AI requires coordinated global norms, concrete capacity-building tools, and market incentives, and they committed to share best-practice case studies and continue dialogue at the July Global Dialogue in Geneva [?].


Session transcript: Complete transcript of the session
Peggy Hicks

These are consequential challenges that have impacts in people’s lives on a day-to-day basis. And our session is going to address how global standards, collaborative public-private solutions, and rights-based approaches can enable responsible AI with meaningful real-world impact. And we know that these things don’t just happen on their own. It takes deliberation. It takes thought. It takes engagement to make sure that the products and approaches that we’re using in the AI field avoid some of the pitfalls that may be associated with them. And the companies are going to share some of the good practices that they’re engaging in about how that works in the real world. And we know if they don’t engage in that way, that the risks are there and very much present.

And so looking at how we can put in place practical safeguards that ensure that AI works for people, not only in advanced economies or for the dominant platforms, but for the people that we’re trying to deliver these benefits for. Responsible and effective AI governance, clarity of rules for both companies and government, and alignment around the global norms will help us to get to that point. Companies, of course, have a responsibility to respect human rights and address the risk to people stemming from their products. And human rights due diligence, of course, is one of the process-based ways and a pragmatic way to weave this into corporate operations. But, of course, governments are the ones that also have a responsibility here, too, to create a level playing field, and we talk a lot about that.

We want the incentives for companies to be there so that the ones that are engaging responsibly are actually rewarded for that responsible engagement as well. Our BTEC project at OHCHR is aimed at how do we make this conversation happen. So through convenings like this one, through engaging with companies, pulling out their good practices and letting all of you hear about them and encouraging others to do the same is what that project is really about. And we are really looking at and working with, of course, how to use tools like the UN Guidelines and UNESCO’s AI recommendations on ethics, and figuring out how we weave those into the decisions and work that’s being done now.

And as I said, bringing this conversation to this summit, where a truly global and multi-stakeholder effort is happening to really look at AI innovation and deployment, has been incredibly important. So without further ado on that front, I want to hand over to my colleague and co-sponsor here, Tim, over to you.

Tim Curtis

Thanks, Peggy. And good morning, everyone. Ambassador Rein Tammsaar from Estonia, co-chair of the AI Dialogue that the United Nations is holding, of course Peggy, and dear panelists: it’s really wonderful to be here with you today to be part of this conversation on responsible practices and industry standards. As we all know, AI is moving from something we discuss in theory to something that is shaping decisions in real time, in real institutions, and of course for real people. I’d like to thank particularly the Office of the High Commissioner for inviting us to join in on this and for working with us on organising this event. It’s been a pleasure.

At UNESCO we often return to a simple idea: that trust is not something technology earns through ambition alone, but really it is earned through design choices, through safeguards and accountability. And that’s why the Recommendation on the Ethics of AI, we believe, is so important, because it does give the world a shared foundation, a first step on how AI could be built and used in ways that protect people’s rights, promote fairness and support inclusion. So we’ve been translating this global agreement and framework into local realities through what we call the RAMs, the Readiness Assessment Methodology reports, which we’ve now launched in over 80 countries, and just two days ago we presented India’s readiness assessment report.

And these assessments provide a kind of clear-eyed look at how regional landscapes can evolve, inviting us to move beyond theory and towards the responsible, human-centred deployment of AI we hear about. And so by grounding innovation in these evidence-based diagnostics, we hope to ensure that progress remains aligned with those shared values. But, of course, a recommendation only matters if it can be applied by the people who are actually creating and using AI. And so that’s the purpose of the initiative I’m going to introduce today, and I’m very happy to say that UNESCO, in partnership with LG AI Research, is developing a global massive open online course, or MOOC, as it’s more commonly known, on the ethics of artificial intelligence.

And the course will be delivered on Coursera with a very clear goal: to make AI ethics learning accessible to a wide global audience, and to make it practical for day-to-day work. And so, as I mentioned earlier, the key idea behind the MOOC is ethics by design. In simple terms, we don’t wait until something goes wrong to ask these ethical questions; we should build these questions into the process from the beginning. The course will help learners think through issues like fairness, transparency, safety, accountability and inclusion at the stage when decisions are still being made, rather than after systems have already been deployed. The course is really going to focus on practical tools, so that we can offer clear ways of thinking and working that can be used in everyday settings.

So it’ll help learners, for example, recognise common risks early, ask better questions during development, document decisions responsibly, and think through the impact of AI systems on different groups of people. We’re moving beyond a one-size-fits-all approach, and we’ve done this by collaborating with experts from over 10 countries and 5 continents, with some of the leading minds from the University of Oxford and the Alan Turing Institute. This global coalition is really vital because AI, of course, doesn’t operate in a vacuum; it’s shaped by the languages, cultural norms and institutional capacities of where it is developed and deployed. So by integrating these diverse perspectives, we’re trying to move from the theory, again, to the lived reality. Ultimately, this MOOC is a capacity-building effort with a simple purpose: to help more people around the world build and use AI in ways that are responsible, inclusive and worthy of public trust. We look forward, of course, to continued collaboration with governments, with industry, with academia and civil society as we take this forward, and we hope many of you will engage with the course when it launches, not only as learners, but also as partners in building a stronger culture of ethical innovation across the world.

Thank you very much.

Peggy Hicks

Thanks, Tim Curtis, UNESCO. We’re all looking forward to it; now we have anticipation. We’re very fortunate to have an addition to our program today with Ambassador Tammsaar, the Permanent Representative of Estonia, who is one of the co-facilitators for the Global Dialogue on AI that will be launched in July, a big responsibility. And he’s here to tell us a little bit about where it’s heading and how you all can contribute. Please, Ambassador.

Rein Tammsaar

Thank you very much. Good morning. Or is it morning? After three days here in India, I think I’ve lost track of time, whether it’s morning or evening. But thank you, UNESCO and the Office of the High Commissioner for Human Rights, for convening this really important discussion, and of course to all our hosts here in India. I also thank the partners who contributed to this work. Today I’ll speak on behalf of the two co-chairs of the United Nations Global Dialogue on AI Governance, who are from El Salvador and Estonia. The first Global Dialogue on AI Governance was mandated by all member states through a General Assembly resolution adopted in August 2025.

So this is a member-states-driven process. It belongs to every country, to all member states. And its task is very practical, while its scope is multilateral. The aim is to come together: it is a platform where governments and stakeholders exchange best practices and experiences, and this, we believe, can strengthen international cooperation on AI governance, ensure human-centric AI supports sustainable development, and reduce the digital divides that are already there. So we’ve engaged with member states and different stakeholders about their priorities, and let me bring to your attention four points from these priorities. First, they want safe, secure, and trustworthy AI systems, and trust here, of course, is an absolute keyword.

Second, they want to close capacity gaps. Many developing countries need infrastructure, skills, and compute to participate fully in the AI economy, and inclusivity and equal access are essential here. Third, they want governance approaches that can work across borders and be practical. Fragmentation raises costs and weakens trust, so interoperability is absolutely key. And fourth, and this is, I think, quite topical here, they want AI anchored in human rights and international law. This includes protecting vulnerable groups, addressing bias and discrimination, and ensuring oversight and accountability. Now, we know human rights are not optional. They are part of a mandate agreed by member states. And today’s focus on responsible practices and industry standards responds directly to these priorities.

And standards turn principles into action. They shape risk management, they clarify accountability, they guide human oversight, and they give companies and regulators tools they can apply in real systems. So let me say that the Global Dialogue will not, and I guess cannot, impose one single model. We will listen, we will identify common ground, and we will build on existing initiatives; the Recommendation on the Ethics of AI was mentioned here, and it is of course one of them. We will avoid, or try to avoid, duplication, and we will focus on practical value. So I encourage you to bring your experience into this process: share what works, share what doesn’t work, and help us identify approaches that can scale across regions and levels of capacity. In the best case, if we succeed, and failure is not an option, safety and trust will be visible in how systems are designed, deployed, and governed. They will be reflected in real safeguards and in benefits that reach more people, and this is very important for us. So I thank you and wish you a productive day and practical exchanges that move our common work forward.

And with this, I give it over to the real experts and panel. Thank you very much.

Peggy Hicks

Thank you, Ambassador. Wonderful to have you with us, and I think we’re all looking forward to having all of you join us in Geneva in July. So with that introduction by the three of us, we’re really, as the Ambassador said, going to turn it over to those who can really inform us about how this work is happening, and I hope inspire us to give support, emphasis and amplification to the work that you’re doing and to bring more into the fold around responsible business conduct. With that, I’d really like to start with you, Ankit Bose, from NASSCOM. We had a great conversation yesterday. NASSCOM represents the leading Indian tech industries, and we want to hear more about your work and what you’re doing to encourage companies and help them ensure a responsible work environment.

Thank you.

Ankit Bose

Thank you so much for having me here. It’s my pleasure to address the audience. So NASSCOM has been there for almost four decades plus, right? We have been helping the tech industry in the country to shape and change the whole agenda for the country. That’s what we have been doing, specifically on responsible AI. The mission for NASSCOM started in 2021. We started from a gap: we were seeing a lot of AI getting developed, but we found there was a missing element, the responsible, the trust, the human element. That is how the mission started. From that point in time, our main core objective has been to develop open assets.

Build capacity, build adoption, and help all the different components in the ecosystem, right from the government to the startups and the SMEs, all of them. So we have been trying to help them go up the ladder and really become aware of not only the gloomy side of AI, but also the bright side: if they adopt responsible AI governance practices right at the early stage, they can have a big upside. I think that’s what we have been doing.

Peggy Hicks

Can I ask, Ankit, how does this work? You mentioned that full range of companies that are involved, and one of the topics we spoke a bit about yesterday is the difficulties sometimes when you have big companies, we have some of them represented here, but also startups and small and medium enterprises. How do you differentiate? How do you make sure that we’re engaging across that very differentiated group of industry?

Ankit Bose

Yeah, so I think if I take it, I think there’s the big techs, right? Then there are the services companies. Then there are the middle -sized, small, and startup. I think all five of them have different sort of engagement, right? The big tech, I think, are playing at the front foot, right? The services companies have to follow their contracts, right? The bigger services companies. The medium tier companies, they are really trying to understand how they grow their AI base at the same time build, you know, that services or product using right governance principles. But again, I think the bigger support is needed from the, you know, the smaller startup, right? Because they are really, really fighting for day to day, right?

And believe me, a startup founder has to first build a business, a tech, a team, right? And also get funding, and then on top of that focus on a lot of things around governance. In that whole journey, what we have seen is that they put governance second, or on the back burner, which is something we see as a complete no-no. If you do that when you’re building a product, you might miss it when you’re scaling. I think that’s what we are seeing.

Peggy Hicks

Great. Thanks very much. I think we’re going to turn to the scale side of it now with Alex, you’re next in line. So, Alex Walden, you’ve been working on these issues within Google, and I think one of the insights that I’ve learned from you over the time we’ve known each other is really how complex it is to bring to product teams and those that are on the technologist side some of these issues of responsible business conduct and human rights, and give us the benefit of your wisdom about how that works and how we can do it better.

Alex Walden

Thanks for the question, and I love that you said that, because I do see a very important part of my role as making sure that the stakeholders we work with understand how things are working within companies, because that helps us be better and helps you be better advocates for helping us improve. But to your question, and I know I need to be fast: I think where it really starts for us is the values perspective. Obviously, we’re a company founded on values around freedom of expression, privacy, and bringing the benefit of our technology to everyone, and so that is where it begins. But ultimately, it’s the governance inside of the company that permeates throughout the 180,000 people who work at Google to ensure that we are being responsible in the way that we’re developing AI.

So as a baseline for us, responsibility, and thinking about what responsibility means, has to start with human rights, and then we can build from there. So we have a corporate policy that says we have a commitment to respect the UN Guiding Principles on Business and Human Rights. And we’ve built on that with things like our AI Principles, which reinforce a more operational way in which we can manifest those values in all of the teams working to develop the various models, or applications of the models, in, say, Google Cloud or YouTube or Search. Just to hone in a little bit on the types of standards we’re using, because I think that’s important given how much work is being done in our ecosystem.

We use the UN Guiding Principles. We use the work happening at the OECD and the work at UNESCO, and engagement with our peers in industry through the B-Tech project and the Global Network Initiative, and these are just a few. All of the guidance that comes out of those places, and the dialogue that happens there, ultimately helps inform how things work inside the company. And then just one layer down, and then I’ll stop: I think having programs and processes, like training and dedicated teams, is ultimately how you operationalize this through getting a product to market. I can say more, but I think those are the big-picture structures for what’s required for a company to do this at scale.

Peggy Hicks

So, you know, I’m not going to let you off the hook quite that easily. We know that this isn’t always easy, though, right, that there are obstacles to really convincing people it’s worth the time. I’ve been in the room where it’s been said there should be no more hand-wringing about safety, that we just need to move forward, and I’m sure there are pressures that you face as the lead for human rights within the company trying to get your message heard. Tell me a bit about how you’ve been able to surmount some of those challenges, and whether, from that different perspective, these are hurdles or supports for the company to do its mission more effectively.

Alex Walden

Well, I mean, I think in general corporations are incentivized to put products on the market that are safe and trusted by consumers. People know Google best through Google Search or Gmail, the variety of consumer-facing ways they’re engaging with our products. And so we do have an inherent market and business reason to put out products that people trust and that deliver good outcomes. And we have to have processes inside that make that real. So what we do is we have model requirements at the most granular level. Before any product goes to market, there are model requirements, and those teams are focused on ensuring that they’re validating the data, doing testing and doing evaluations.

And that’s at the model level. Then at the application layer, we have requirements for teams to be, again, doing testing, additional evaluations, setting additional guardrails, and focusing on what mitigations are going to be put in place for things like Gemini before launch. And then we have executives review these things: before anything goes to market, leadership needs to understand what the risks are, how we’re mitigating them, and have a plan in place to address that. So that is an important part of the process for us. And last, we have post-launch monitoring, because obviously we can do all the testing in the world, but once you’ve launched a product, there may be novel, new, or residual risks that arise. And so we have to have a process for continuing to monitor that, understanding it, getting feedback, and improving.

Peggy Hicks

Great. That’s super helpful, Alex, to understand that multilayered approach that needs to happen within companies, including, I think, that executive level that you mentioned. I mean, the signals from on top will actually inspire all of those other levels to do what we’re hoping they’ll do. And we have another example with us of some of these practices. I want to turn to Hector Duroir, who’s the director of responsible AI public policy at Microsoft. And we want to hear more about what you’re doing to embed responsible policy practices within Microsoft’s approach.

Hector Duroir

Thank you very much, Peggy, and thanks for having Microsoft here. So I want to start with the inception of our responsible AI approach, which was in 2018. At that stage, you didn’t have codes, directives, regulations or frameworks guiding our approach; we were nearly starting from a blank page. And we didn’t talk about foundation models or frontier models at that stage; it was all about specific AI systems and applications, such as facial recognition, for instance, which was very popular. So we forged our AI principles around priorities such as privacy, reliability, inclusion, fairness, safety and security. And the whole challenge was to translate these high-level principles into practice afterwards. It’s really on this basis that we created the Office of Responsible AI in 2019, around these principles, which then became our RAI Standard, guiding all our actions across our different programs.

One of the programs that I want to reference here is our Sensitive Uses program. It’s a team within the Office of Responsible AI that is in charge of triaging and challenging sensitive use cases coming from our different markets, on AI systems and models that could actually violate the principles I was referencing. This team analyzes these use cases and, when necessary, brings them to our Aether Committee, which is our AI ethics committee. It involves Microsoft leadership, both at the CTO level and the president level, and I think that inclusion of the top of the company is very important in this kind of internal risk-management framework. And this work has been informed during the past years by many interesting developments.

The OECD AI Principles, obviously, but also the UNESCO Recommendation on the Ethics of AI. And I think all these principled approaches, which evolve and are refined and nuanced as AI capabilities advance, are so important and are very useful signals for us in refining our own AI governance program within Microsoft.

Peggy Hicks

Hector, you’ve talked a little bit about how you look at it from an internal perspective. But we wanted to hear a bit about how you look externally: what are the drivers behind how you engage across the sector and with the government side as well?

Hector Duroir

Yeah, and I think we always navigate this very interesting interplay between best practices, international norms and regulatory standards. A very good example here is the line of voluntary commitments that have been signed across the AI summits. If you look at Bletchley Park in the UK, or the South Korea summit that happened afterwards, it really helped us, as Alex was referencing, to ground our model-testing approach, especially against public safety and national security risks. So when we talk about cybersecurity, for instance, or loss of control, or CBRN risks, that really grounded some very solid testing approaches, with concrete operational triggers and concrete high-risk domains that we’re monitoring at the model level.

So that was one. The OECD reporting framework that came out of the Hiroshima AI process is another very good tool that I was involved in and want to reference here. It was launched along the lines of the Paris AI Summit, and it’s a very good way to understand how risk-management transparency works in practice, and how real-world deployment and transparency experience can guide upstream development. It’s the kind of feedback loop it creates that’s very interesting. And because we’re in Delhi, just to reference the voluntary commitments that were signed yesterday: I think that’s another very good and positive approach that the Indian government has been taking, especially on one of the commitments, which basically encourages companies to build multilingual capabilities.

So basically, build better evaluations against safety risks, not only against English norms, but beyond English norms. And I think that speaks to our principle of inclusion. That’s so important, and I’m very happy that they initiated this work.

Peggy Hicks

I have to say, one of the contrasts I’ve been making, when I look at what’s been talked about here in Delhi as opposed to prior summits, is that issue of inclusion. And the language issue, I think, is so underrepresented in some of the conversations we have, so it’s wonderful that you’ve given that a shout-out. We’re very fortunate, Yuchil, to have you with us as well: Yuchil Kim, who is a vice president at LG AI Research. So we’d really like to hear more about how you’re engaging with these global technical and policy standards. We talked about the UN Guiding Principles on Business and Human Rights, the UNESCO Recommendation on the Ethics of AI, and, of course, the MOOC that’s being worked on.

So give us a sense of how these frameworks are being engaged with by LG.

Yuchil Kim

So the essence of the MOOC is for practitioners. Practitioners are usually struggling with the same question: how do I actually apply this in my day-to-day work? So we are focusing on bridging that gap. We provide best-practice standards covering many of the risks that Tim mentioned, and we also contribute our own experience. I previously mentioned our process, and we have also built an AI-powered data compliance system. And, as I will mention soon, we have an annual report on our AI ethics activities. So I hope the MOOC can be a good practice for everyone; it will launch in this half of the year. The last thing I want to talk about is transparency. We have a lot of activities around responsible AI and inclusive AI, and we publish an annual accountability report on AI; yesterday we released the third edition.

Here is some of the track record of that. I will share it after our session, so please refer to my documents.

Peggy Hicks

Well, yeah. Wonderful. No, I think it’s super interesting to understand both how you’ve been looking at that learning process within the company, and also how that more global approach, working with UNESCO, is going to be very helpful; I think it’s one of those areas where we all know so much more needs to happen. But we’ve heard the company perspective here, and we’re very fortunate to have with us, from the World Benchmarking Alliance, Namit Agarwal. And Namit, I think one of the things we’ve talked about is how we incentivize the race to the top amongst all of the actors in this space. And you’re going to, I hope, give us some insights, based on the work that the World Benchmarking Alliance is doing, about how capital and investment can be used to make sure that innovation is being approached in a responsible way.

Over to you, Namit.

Namit Agarwal

Thanks for having me here. I’m not representing investors, but we do work with several stakeholders, including investors, civil society, governments, and companies. We are a nonprofit, and we try to strengthen the accountability of the world’s most influential companies so that their impact on people and planet can be sustainable. We also assess the world’s most influential tech companies on whether they are advancing a trustworthy, rights-respecting, and inclusive digital future, using standards such as the UN Guiding Principles, but also others that were mentioned by my fellow panelists here. Our role is to provide comparable, credible, and standardized data that our stakeholders can use, because it’s an ecosystem approach: how can they work together in doing that?

So capital can definitely incentivize innovation and responsibility, but capital alone cannot do that. We published our latest assessments of 2,000 companies at Davos last month, and particularly on the tech side, what we found is that close to 40% of the companies have disclosures on AI principles, but just above 10% meet global expectations on the governance aspect of it, and none of the 200 tech companies that we assess disclose their reports on human rights impact assessments. And I think that clearly shows that while there is a lot of intent and some work is happening, governance and accountability are not really there, so a lot of work needs to happen. And we believe responsible innovation requires incentives for long-term risk management and clear expectations that are tied to capital.

It also requires consequences for weak governance, because it has to be consequential for companies to move in that direction.

And I think that is where investors have a very catalytic role to play. We convene a coalition of investors and civil society organizations for exactly that purpose.

Peggy Hicks

I mean, I think it’s so interesting that we work in a sector that is incredibly data-driven, and yet we don’t necessarily bring data into this conversation in the ways that we need to. There’s that idea of incentivizing the right practices and leverage within companies, but also, too many conversations focus on the tech industry as a whole and group everybody together as if they’re all engaging in the same way. The work that you’re doing really helps us understand those nuances. Could you go a bit deeper and look at some of the examples and concrete suggestions coming out of your work as to how to push that discussion forward?

Namit Agarwal

Absolutely. So I think the first thing is engagement and dialogue, and that is a very important way. We have been fortunate to have good engagement with both Google and Microsoft on this panel, but again, it’s important to build on engagement, because it’s a continuous process. It’s important for investors to engage with some of the leaders, but also with companies who are fence-sitters, to bring them along faster; the laggards will eventually catch up and come on board. For investors, and for capital and finance more broadly, to incentivize responsible innovation and responsible AI, there are three things we believe investors should definitely do. First, on AI governance and board oversight: investors should ask whether there is clear board-level responsibility for AI risk, whether executive incentives are aligned with long-term human rights risk mitigation, and whether governance applies across the full AI value chain. Second, on implementation at the product and business-model level, and we heard some examples just now: investors need to move beyond policy statements and ask companies how ethical principles are translated into product-level strategy, how high-risk use cases are identified, and whether there are internal mechanisms and controls to identify harms as they emerge.

And third is robust human rights impact assessments: asking whether companies conduct AI-specific impact assessments, whether they are publishing meaningful summaries, and whether mitigation measures are integrated into product cycles. And I think this is an area where we have seen a lot of gaps.

Peggy Hicks

Great. Thanks, Namit. I wonder if we could take that one step further and get some input from the other members of the panel on what that looks like in practice. Because, of course, this is a panel focused on the company perspective, and I think we have some of our real partners here on the civil society side. As much as they understand that that conversation needs to happen, I think they sometimes find it difficult to make sure that the way those risks are assessed really brings in the voices and experiences of people, particularly people in the different contexts and environments in which companies’ products are being rolled out.

So those issues of stakeholder engagement, access, dialogue with the civil society side, it would be great to hear a little bit more about some of the lessons that you’ve all learned there. And I see you shaking your head. Please tell us from the NASSCOM perspective how you look at it.

Ankit Bose

Well, I think from an enterprise lens, when they are trying to implement responsible AI or trustworthy AI, the biggest issue is that there are different groups internally: the tech group, the business group, the legal and risk group, the finance group. And all of them are working in silos, is what we feel, because the business wants the best for the business, the tech group wants to put in the best technology, the risk group is very conservative, and finance always has an upper limit on what they want to spend. So that’s the issue. I think what helps is if all of them build a collaboration that can be taken use case by use case.

I mean, the high-impact use cases can have more investment and more focus versus the low-risk ones, right? I think that’s the first thing. The second thing, from what we at NASSCOM are seeing, is that there are a lot of frameworks getting developed. Every country, every place you go, there’s a new framework. But the move from framework-heavy or concept-heavy to action is not happening. I think that’s a big gap. So if a technologist is trying to implement responsible governance, if a developer is trying to implement it, he will be lost in the frameworks; he doesn’t know what’s actionable, what he should do. So I think that’s one big need.

I think that’s what we are also driving. We are trying to drive a multi-organization-led approach, where we have organizations of all different sizes come together and start discussing, collaborating, implementing. I think that’s the second nugget. So those are the two points. I know time is up.

Peggy Hicks

No, that’s great. I mean, I think it shows that that collaborative effort is going to be super important, rather than a siloed approach, for so many practical reasons as well: companies can only respond to so many different frameworks, and what they need is the simple guidance and support to actually implement at this stage. Hector, do you want to offer some quick comments from the company side about how you’re facing those challenges?

Hector Duroir

Yeah, two very quick examples of how we involve civil society and academia in this process. Our work really sits at the intersection of policy, research, and engineering groups. To inform product development with our responsible AI principles, we regularly publish internal policies, and it’s an iterative process with our research teams and our product teams. As part of this process, we actually include academics who have a specific domain expertise, or think tanks and civil society organizations which have been thinking very deeply about the deployment of one AI system, one AI model, in certain contexts. So that really informs the products we build from the inception. And the second example, which was raised as the big topic and governance challenge that we face, is the importance of refining AI evaluations.

That’s the constant thing. In India, for instance, we’ve been working with some NGOs on a project named Samishka to build community-led benchmarks, which is basically a safety tool that we include afterwards in the system construction, to get data sets that are grounded in a community, with specific cultural aspects and specific contextual aspects. Because if you just translate safety tools from English to another language, you lose all the context for which that safety tool was built. So that’s another example of an area where we need more cooperation between civil society, governments, and companies: how do we build these safety tools beyond English norms, such as in India.

Peggy Hicks

That’s great, and it takes work to do that. You’ve done some of the work and know how to do some of it, and the more we can spread and diffuse it amongst other companies that could learn from it, the better. That’s part of what we’re trying to do with B-Tech, but I think there’s a lot more to be done. Yuchil, do you want to come in?

Yuchil Kim

Yes, I agree with his comment. On safety, we should work together. That’s the reason why we publish our annual report: to share our best practices and also to share the struggles we have had, which we think is very important. As my colleague mentioned, there’s an African proverb that says: if you want to go fast, go alone; if you want to go far, go together. Building a trustworthy and safe ecosystem is not a sprint. It’s a long journey, so we can go together.

Peggy Hicks

It’s a long journey with a lot of sprints happening day to day, as far as I can tell. Some of them here at the summit, but over to you, Allie.

Alex Walden

So much sprinting. Maybe just to pick up specifically on the stakeholder engagement piece. A few things. One, I think it’s important for companies to have a programmatic approach to stakeholder engagement, so that we have ways of regularly engaging with stakeholders in general, not just on a specific product question. So first a programmatic approach, and then second, something that is more ad hoc: when we need to consult specifically on a product, we need to have a process and a way to do that. The other thing is that we have programs internally, like trusted tester programs, where we work with third-party organizations to make sure they have early, pre-launch access to models or to a product in order to test it, so that we can identify potential risks or errors ahead of time and address them before we launch a product.

And then last, just to highlight something we do that is similar to others: our research team called Impact Lab, which is part of the overall human rights programmatic work at the company, engages directly with communities in doing research to inform how we are improving our products and what we’re developing. So that work is also happening through the research team specifically. They recently launched something called the Amplify Initiative, which is an open-source app, specifically on language inclusion, that allows members of the public and communities to engage in the fine-tuning work around our language models. There is a wealth of information and expertise out there that we should all be benefiting from, and because it’s open source we can also share it with others in industry.

Peggy Hicks

That’s great to hear, and I’m sure more needs to be done on that front, but the amplification effect is so crucial. Look, we could probably go on talking all day, but I see the clock is ticking down. Fortunately, rather than us trying to draw the conclusions from this, we’ve welcomed in another speaker to give us some concluding remarks and pull some of these pieces together. I’m very happy to invite Parvati Adani from Sero Amarchan Mangaldas to help us think through some of these issues. Please.

Parvati Adani

Thank you for that. I think it’s partly easy and partly tough, because there was a lot to understand, but there is a lot we can take back from this conversation. But firstly, thank you. You’ve held the conversation beautifully. And thank you to UNESCO, the United Nations, NASSCOM, and everybody in this room who brought their knowledge and their conscience to this conversation. I actually just want to talk a little bit about a conversation with a machine. As we were thinking about this topic and engaging on this issue, I wanted to share something that I feel resonates with a lot of what you’ve talked about. In preparation, I decided to ask the tools that we’re talking about over here a question that we avoid asking ourselves.

Do you, and here I’m talking to the tool, do you have ethical limits? Do you understand the difference between what you can do and what you should do? And I’m going to quote verbatim: at a conscious level, the answer is, I don’t know, and neither does anybody else. The gap is a philosophically uncomfortable position: think of me as having no home inside. I have no continuous thread of existence, and I cannot verify about myself what you have asked me. I don’t have any consequences to bear. Now, what came back, though unexpectedly thoughtful, showed us something about restraint, values, and what it appears to have internalized. It acknowledged the difference between instruction and conscience, a lot of what we’ve talked about today.

And so, when we talk about this, we said human rights are not optional. We cannot ignore the impact on people and planet. We have to create incentives for good governance, so that when a tool cannot understand this for itself, we do the job. What we have chosen in India, and the fact that we are having this conversation in this location, is not ceremonial; it is very deliberate. We have chosen innovation over restraint, and we have to think about making that the right choice: we allow innovation to happen in a safe place, without feeling the weight of regulation. And I think we have a lot to learn from all of you who have been doing this for so long: the privacy, the safety, the impact on children and vulnerable groups. The question is whether the people we are talking about are going to be the subjects of this transformation, or just its audience, or just its object. An AI system that cannot understand a language, or a woman asking legal questions in Hindi, is serving a narrow slice of what it calls a universal solution.

So any framework for safe and trusted AI that does not express and understand informality, language, and gender is not incomplete by accident; it is incomplete by design. I think the idea of an interoperable, flexible system is a forward-looking and an inclusive one. A lot of what you mentioned, Alex, about governance inside the company is wonderful. And I think the voluntary commitments that have been reflected in this summit are also fantastic. So now we come to the harder work. The ambition is real. The infrastructure exists. But we must ensure that we don’t leave with just good intentions and good ideas, but with action. Thank you.

Peggy Hicks

Thank you, Parvati Adani. It’s wonderful to hear those perspectives. We’re coming to the close of this session, so just a few parting words to all of you. I think we’ve done enough in this short conversation to really give a sense of how complex some of these issues are: the dynamics within companies and externally, and then globally across different geographies, and the challenges that are faced. But the reality is that all of us have a responsibility to engage on these issues, and we each have different roles. We’ve heard a bit about what some of the companies are doing, and a little bit about how we can challenge them and incentivize the actions that they take in this space.

There are good practices, but they’re not universally applied, and they’re not available to some companies. There are companies that may want to engage in this, and we can help them do it. NASSCOM and I have been discussing a little bit about how we simplify things and bring more people into the fold of this conversation. And, of course, we’re here in an environment where governments are looking at what they need to do to create responsible business practices and incentivize them as well. So I hope everybody walks out of the room thinking: what can I do to continue this conversation? How can I differentiate between companies that are thinking about these issues in a way that will deliver for myself, for my children, and for my future, in the ways that we want to see?

AI innovation will work if there’s trust, and if the companies delivering it actually invest in products that really deliver those values, that will inform us and give us human dignity going forward. So thank you all so much for joining us. Thank you for fitting this into your schedule today, and enjoy the rest of the summit. Thank you.

Related Resources: Knowledge base sources related to the discussion topics (35)
Factual Notes: Claims verified against the Diplo knowledge base (7)
Confirmed (high)

“Peggy Hicks reminded participants that AI‑related challenges affect people’s daily lives and that global standards, collaborative public‑private solutions, and rights‑based approaches can enable responsible AI with meaningful real‑world impact.”

The knowledge base identifies Peggy Hicks as Director of Thematic Engagement at the OHCHR and notes her emphasis on responsible AI governance requiring deliberate thought and engagement, which aligns with the reported statement [S21] and [S2].

Confirmed (high)

“She emphasized that responsible outcomes do not happen automatically; they require deliberation and engagement, and that companies must share good practices while governments create a level playing field and incentives for responsible conduct.”

S2 explicitly states that responsible AI governance needs deliberate thought and engagement to avoid pitfalls, supporting the claim about the need for deliberation and a level playing field for companies and governments [S2].

Confirmed (medium)

“Tim Curtis argued that trust is not something technology earns through ambition alone but is earned through design choices, safeguards and accountability.”

S122 frames trust as a foundational requirement that must be built through design choices, safeguards, and accountability, confirming Curtis’s point about how trust is earned [S122].

Confirmed (medium)

“UNESCO’s Readiness Assessment Methodology Reports (RAMS) have been produced for more than 80 countries, providing a clear‑eyed look at how regional landscapes can evolve.”

S129 confirms the existence of UNESCO’s readiness assessment methodology, though it does not specify the exact number of countries; the claim about RAMS being produced is therefore supported [S129].

Confirmed (high)

“UNESCO and LG AI Research are launching a massive open online course (MOOC) on ethics‑by‑design, to be delivered on Coursera, accessible to a wide global audience and providing practical, day‑to‑day tools for practitioners.”

S1, S131 and S130 all describe a UNESCO-LG AI Research MOOC on AI ethics, delivered via Coursera, aimed at a global audience and designed to give practical tools for everyday work, confirming the claim [S1] and [S131] and [S130].

Confirmed (medium)

“The MOOC is intended for practitioners who need concrete tools to bridge the gap between abstract standards and daily work.”

S131 explicitly states that the MOOC’s goal is to make AI ethics learning accessible and practical for day-to-day work, matching the reported purpose for practitioners [S131].

Additional Context (low)

“UNESCO’s measurement approaches, including the readiness assessments, aim to move the debate beyond theory toward practical implementation.”

S123 adds nuance by describing UNESCO’s measurement approaches as shifting from procedural requirements to trust-building mechanisms, providing additional context to the claim about moving beyond theory [S123].

External Sources (132)
S1
AI That Empowers Safety Growth and Social Inclusion in Action — – Peggy Hicks- Alex Walden- Rein Tammsaar
S5
Internet Human Rights: Mapping the UDHR to Cyberspace | IGF 2023 WS #85 — Peggy Hicks, Director of the Office of the UN High Commissioner for Refugees, participated in the session as a discussan…
S6
Digital Transformation for all: An Information Society that respects and protects human rights — – **Peggy Hicks** – Office of the High Commissioner on Human Rights (OHCHR) representative, panel moderator Peggy Hicks…
S7
New Technologies and the Impact on Human Rights — – **Peggy Hicks** – Director of the UN High Commission for Human Rights, human rights expertise Pablo Hinojosa: Please …
S8
AI That Empowers Safety Growth and Social Inclusion in Action — Parvati Adani from Sero Amarchan Mangaldas provided a powerful concluding perspective that reframed the technical and po…
S9
Keynote-Jeet Adani — -Moderator: Role involves introducing speakers and facilitating the discussion. Areas of expertise, specific role detail…
S10
https://dig.watch/event/india-ai-impact-summit-2026/ethical-ai_-keeping-humanity-in-the-loop-while-innovating — So it gives me great pleasure to just present today’s panellists and moderator. We’ve had Dr. Tawfiq Jilasi, who’s Assis…
S11
Ethical AI_ Keeping Humanity in the Loop While Innovating — 339 words | 73 words per minute | Duration: 276 seconds This afternoon to this UNESCO sponsored event, my name is Tim …
S12
Ethical AI_ Keeping Humanity in the Loop While Innovating — -Dr. Tawfiq Jilasi- Assistant Director General for Communication and Information (mentioned by Tim Curtis in introductio…
S13
India’s AI Future Sovereign Infrastructure and Innovation at Scale — Absolutely. I think we are trying to do that in a collaborative way with all of our contributors. Please be a collaborat…
S14
https://dig.watch/event/india-ai-impact-summit-2026/indias-ai-future-sovereign-infrastructure-and-innovation-at-scale — Absolutely. I think we are trying to do that in a collaborative way with all of our contributors. Please be a collaborat…
S17
Open Forum #34 How Do Technical Standards Shape Connectivity and Inclusion — – **Alex Walden** – Global Head of Human Rights, Google Alex Walden, Global Head of Human Rights at Google, articulated…
S18
WS #42 Combating misinformation with Election Coalitions — – Alex Walden – Global Head of Human Rights for Google 5. Government pressure: Alex Walden, Global Head of Human Rights…
S19
Day 0 Event #82 Inclusive multistakeholderism: tackling Internet shutdowns — – Alexandria Walden: Global Head of Human Rights, Google – Nikki Muscati: Audience member who asked questions (role/aff…
S21
Embedding Human Rights in AI Standards: From Principles to Practice — – **Peggy Hicks** – Director of Thematic Engagement at the Office of the UN High Commissioner for Human Rights Ernst No…
S22
What Proliferation of Artificial Intelligence Means for Information Integrity? — – **Peggy Hicks** – Director of the Thematic Engagement, Special Procedures and Rights to Development Division at the UN…
S23
Who Watches the Watchers Building Trust in AI Governance — The IVO model offers several potential advantages over traditional approaches. Independence ensures that companies are n…
S24
AI diplomacy — We are, in essence, searching for a common language to discuss AI ethics, safety, and security. We can see the early res…
S25
IGF 2023 WS #313 Generative AI systems facing UNESCO AI Ethics Recommendation — Additionally, IFAP focuses on building capacities to address the ethical concerns arising from the use of frontier techn…
S26
WSIS Action Line C10: Ethics in AI: Shaping a Human-Centred Future in the Digital Age — A key principle underlying UNESCO’s approach is the recognition that “everybody will have a very different view and appr…
S27
Main Session on Artificial Intelligence | IGF 2023 — Seth Center:Is the answer yes to that? But how? The tricky question is the how. Let me rewind just a minute to the quest…
S28
Day 0 Event #174 Human Rights Impacts of AI on Marginalized Populations — There is a pressing need to enhance safeguards against the digital targeting of vulnerable populations, particularly LGB…
S29
WS #362 Incorporating Human Rights in AI Risk Management — Alexandria Walden: All right. Thank you. Thanks for that question. Thanks to GNI for putting this session together. I th…
S30
Interdisciplinary approaches — AI-related issues are being discussed in various international spaces. In addition to the EU, OECD, and UNESCO, organisa…
S31
A Global Human Rights Approach to Responsible AI Governance | IGF 2023 WS #288 — In addition to these key points, the analysis reveals a couple of noteworthy observations. One observation is the import…
S32
Evolving AI, evolving governance: from principles to action | IGF 2023 WS #196 — AI is a general-purpose technology that holds the potential to increase productivity and build impactful solutions acros…
S33
Open Forum #75 Shaping Global AI Governance Through Multistakeholder Action — Collaboration across sectors through multistakeholder engagement is essential responsibility Multi-stakeholder particip…
S34
Accessible e-learning experience for PWDs-Best Practices | IGF 2023 WS #350 — In the context of education, the analysis emphasizes the need for inclusion to be integrated into everyday practice in e…
S35
IGF Parliamentary track – Session 2 — 6. Capacity Building and Education Shuaib Afolabi Salisu: Thank you so much. Let me start on a note of appreciation to…
S36
Open Forum #17 AI Regulation Insights From Parliaments — AI governance requires ongoing education for all stakeholders – politicians, policymakers, and the general public. This …
S37
How Trust and Safety Drive Innovation and Sustainable Growth — Craig explains that Microsoft implements responsible AI governance programs internally and sees opportunities for differ…
S38
Secure Finance Risk-Based AI Policy for the Banking Sector — The moderator emphasizes that AI governance should not be viewed through a completely different lens but should be integ…
S39
AI Meets Cybersecurity Trust Governance &amp; Global Security — And Marie made a reference to the UN cyber norms process through the Open Internet Working Group, the group of governmen…
S40
Laying the foundations for AI governance — Artemis Seaford: So the greatest obstacle, in my opinion, to translating AI governance principles into practice may actu…
S41
TradeTech for Greener Supply Chains — Governments are viewed as key actors in closing the gap between technology disruption and regulation, and there is a pos…
S42
Digital Inclusion Through a Multilingual Internet | IGF 2023 WS #297 — Incentives that can be driven by policy development, that can be driven by economic incentive creation Therefore, the f…
S43
Surveillance technology: Different levels of accountability | IGF 2023 Networking Session #186 — Investors can directly engage companies to improve their policies, practices, and governance. Investors have the potent…
S44
Digital divides &amp; Inclusion — Collaboration between government entities, private sector organizations, civil society, and academia is deemed critical …
S45
How to enhance participation and cooperation of CSOs in/with multistakeholder IG forums | IGF 2023 Open Forum #96 — The diversity of civil society and the global majority, including different languages and cultural norms, should be cons…
S46
WS #266 Empowering Civil Society: Bridging Gaps in Policy Influence — Audience member (unnamed) This is important to ensure that policies reflect diverse cultural perspectives, not just Wes…
S47
Digital Public Infrastructure, Policy Harmonisation, and Digital Cooperation – AI, Data Governance,and Innovation for Development — Adamma Isamade stresses the need for a multistakeholder approach in policymaking. She argues that policies often lack in…
S48
Setting the Rules_ Global AI Standards for Growth and Governance — I think it’s worth backing up from this thing. One of the original questions was, what are standards for? Is Chris’s min…
S49
Bridging the AI innovation gap — This comment provides a profound reframing of technical standards from bureaucratic requirements to tools of global equi…
S50
Setting the Rules_ Global AI Standards for Growth and Governance — Key areas of convergence included the importance of process-oriented standards that can adapt to evolving capabilities, …
S51
Importance of Professional standards for AI development and testing — The disagreement level is moderate but significant for practical implementation. While speakers generally agree on the n…
S52
Panel Discussion Summary: AI Governance Implementation and Capacity Building in Government — Do we have the ethics and the inclusivity that’s required? And so those are areas that I think we have seen practical ex…
S53
Embedding Human Rights in AI Standards: From Principles to Practice — 1. **Capacity Building**: Need for sustained education programs to help technical experts understand human rights princi…
S54
WS #187 Bridging Internet AI Governance From Theory to Practice — Hadia Elminiawi: Regional and international strategies and cooperations should not be seen as conflicting with national …
S55
AI That Empowers Safety Growth and Social Inclusion in Action — The discussion revealed tension between framework proliferation and the need for practical implementation guidance. Diff…
S56
Panel 3 – Legal and Regulatory Tools to Reduce Risks and Strengthen Resilience  — The level of disagreement was moderate and constructive. Speakers shared common goals of protecting submarine cable infr…
S57
Main Session | Policy Network on Artificial Intelligence — The discussion highlighted the complex and multifaceted nature of AI governance challenges. While there was broad agreem…
S58
Policies and platforms in support of learning: towards more coherence, coordination and convergence — – (a) Common standards for needs assessment and evaluation of learning programmes; – (b) Coordination and possibly int…
S59
Secure Finance Risk-Based AI Policy for the Banking Sector — This identifies a fundamental legal and philosophical challenge that current legal frameworks are unprepared to handle. …
S60
Who Watches the Watchers Building Trust in AI Governance — Summary:The speakers demonstrated strong consensus on the urgency of AI governance challenges, the inadequacy of current…
S61
Discussion Summary: US AI Governance Strategy Under the Trump Administration — Ball emphasizes the importance of addressing AI surveillance and privacy concerns through specific, context-dependent so…
S62
AI Governance Dialogue: Steering the future of AI — Legal and regulatory | Development Martin emphasizes that effective AI governance requires local ownership and contextu…
S63
[Parliamentary session 1] Digital deceit: The societal impact of online mis- and disinformation — Effective governance requires different layers from core regulatory frameworks to voluntary commitments, as some aspects…
S64
WS #438 Digital Dilemmaai Ethical Foresight Vs Regulatory Roulette — Minimum safeguards with international coordination while respecting local specificities and strategic goals Principle-b…
S65
WS #133 Better products and policies through stakeholder engagement — The discussion also addressed challenges, including time constraints, the fast pace of technology development, and poten…
S66
Voluntary commitments from leading artificial intelligence companies to manage the risks posed by AI — Companies intend these voluntary commitments to remain in effect until regulations covering substantially the same issue…
S67
AI That Empowers Safety Growth and Social Inclusion in Action — High level of consensus on core principles and challenges, with speakers from different sectors (government, companies, …
S68
Press Conference: Closing the AI Access Gap — Countries need robust data strategies that include sharing frameworks and data protection measures. These strategies are…
S69
Leaders TalkX: Local to global: preserving culture and language in a digital era — Cultural diversity | Content policy | Multilingualism Multilinguality and cultural diversity must be viewed as core pri…
S70
Inclusive AI_ Why Linguistic Diversity Matters — Arguments:Data sharing decisions should be context-specific based on whether they serve public interest versus private c…
S71
Interdisciplinary approaches — AI-related issues are being discussed in various international spaces. In addition to the EU, OECD, and UNESCO, organisa…
S72
WSIS Action Line C10: Ethics in AI: Shaping a Human-Centred Future in the Digital Age — Multi-stakeholder engagement is essential but complex, requiring diverse expertise and perspectives
S73
A Global Human Rights Approach to Responsible AI Governance | IGF 2023 WS #288 — Another key point raised in the analysis is the importance of stakeholder engagement in AI governance. Stakeholder engag…
S74
Artificial Intelligence &amp; Emerging Tech — In conclusion, the meeting underscored the importance of AI in societal development and how it can address various chall…
S75
Policymaker’s Guide to International AI Safety Coordination — In terms of what is the key to success, what is the most important lesson on looking back on what we need, trust is buil…
S76
Policymaker’s Guide to International AI Safety Coordination — In terms of what is the key to success, what is the most important lesson on looking back on what we need, trust is buil…
S77
Evolving AI, evolving governance: from principles to action | IGF 2023 WS #196 — AI is a general-purpose technology that holds the potential to increase productivity and build impactful solutions acros…
S78
Open Forum #75 Shaping Global AI Governance Through Multistakeholder Action — Collaboration across sectors through multistakeholder engagement is essential responsibility Multi-stakeholder particip…
S79
WS #187 Bridging Internet AI Governance From Theory to Practice — Multi-stakeholder governance approach is essential, but private actors’ commercial interests may limit participation in …
S80
Open Forum #82 Catalyzing Equitable AI Impact the Role of International Cooperation — Multi-stakeholder cooperation and inclusive governance frameworks are essential
S81
How to make AI governance fit for purpose? — – **Multi-stakeholder involvement** – All speakers acknowledged the need for collaboration between governments, private …
S82
Panel Discussion Summary: AI Governance Implementation and Capacity Building in Government — All speakers acknowledge that having strategies and frameworks is insufficient without proper implementation mechanisms,…
S83
AI That Empowers Safety Growth and Social Inclusion in Action — Major discussion point 2: Capacity building, education and operational tools
S84
Digital solutions for sustainability: ICT’s role in GHG reduction and biodiversity protection — Capacity building and training are critical for implementation
S86
Open Forum #34 How Do Technical Standards Shape Connectivity and Inclusion — Capacity building and translation between technical and policy communities is essential for effective multi-stakeholder …
S87
How Trust and Safety Drive Innovation and Sustainable Growth — Craig explains that Microsoft implements responsible AI governance programs internally and sees opportunities for differ…
S88
Leading tech companies commit to responsible development of AI at Seoul AI Summit — At an AI Seoul Summit 2024 meeting on Tuesday, sixteen companies leading the charge in artificial intelligence (AI) deve…
S89
Can (generative) AI be compatible with Data Protection? | IGF 2023 #24 — Giuseppe Claudio Cicu:So konnichiwa to everyone. And thank you, Mr. Thank you, Professor Bailey, for the introduction. I…
S90
Global Enterprises Show How to Scale Responsible AI — For POCs and experience and for some internal. internal use case. So let’s say if they’re doing some ask IT or other stu…
S91
Surveillance technology: Different levels of accountability | IGF 2023 Networking Session #186 — Investors can directly engage companies to improve their policies, practices, and governance. Investors have the potent…
S92
TradeTech for Greener Supply Chains — Governments are viewed as key actors in closing the gap between technology disruption and regulation, and there is a pos…
S93
https://dig.watch/event/india-ai-impact-summit-2026/regulating-open-data_-principles-challenges-and-opportunities — Thank you, Ms. Gose, for really giving that perspective. Now may I invite Mr. Cyril Shroff who is of course the convener…
S94
Decolonise Digital Rights: For a Globally Inclusive Future | IGF 2023 WS #64 — This involves taking into consideration factors such as cultural diversity, linguistic preferences, and social inclusion…
S95
Digital divides &amp; Inclusion — Collaboration between government entities, private sector organizations, civil society, and academia is deemed critical …
S96
WSIS Action Line C8: Multilingualism in the Digital Age: Inclusive Strategies for a People-Centered Information Society — Addressing the digital language divide requires coordinated efforts from all sectors of society working together. No sin…
S97
WS #302 Upgrading Digital Governance at the Local Level — The second phase demonstrated practical results, with one pilot municipality (Reba) improving its score from 30% to 39% …
S98
Opening — The overall tone was formal yet optimistic. Speakers acknowledged the serious challenges posed by rapid technological ch…
S99
Summit Opening Session — The tone throughout is consistently formal, diplomatic, and collaborative. Speakers maintain an optimistic and forward-l…
S100
Opening address of the co-chairs of the AI Governance Dialogue — The tone is consistently formal, diplomatic, and optimistic throughout. It maintains a ceremonial quality appropriate fo…
S101
The role of standards in shaping an AI-driven future — The tone is consistently formal, authoritative, and optimistic throughout. The speaker maintains a confident and promoti…
S102
Friday Opening Ceremony: Summit of the Future Action Days — The overall tone was inspirational, hopeful and energetic. Speakers aimed to motivate and empower youth attendees while …
S103
Open Forum #48 Implementation of the Global Digital Compact — The discussion maintained a constructive and collaborative tone throughout, with speakers demonstrating both urgency abo…
S104
WS #69 Beyond Tokenism Disability Inclusive Leadership in Ig — The discussion maintained a constructive and collaborative tone throughout, characterized by professional expertise and …
S105
Leaders TalkX: ICT application to unlock the full potential of digital – Part II — The discussion maintained a consistently professional, collaborative, and optimistic tone throughout. Speakers were solu…
S106
AI and Data Driving India’s Energy Transformation for Climate Solutions — The tone was collaborative and solution-oriented throughout, with speakers building on each other’s insights rather than…
S107
Session — Concluding sessions are pivotal for reflecting the outcomes of negotiations and the interests of different stakeholders,…
S108
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Hemant Taneja General Catalyst — Overall Tone: The tone is consistently optimistic, inspirational, and forward-looking throughout the speech. The speaker …
S109
AI and Human Connection: Navigating Trust and Reality in a Fragmented World — The tone began optimistically with audience engagement but became increasingly concerned and urgent as panelists reveale…
S110
Closure of the session — Meaningful stakeholder participation: The United States argues that the current modalities for stakeholder participation…
S111
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Hemant Taneja General Catalyst — The tone is consistently optimistic, inspirational, and forward-looking throughout the speech. The speaker maintains an …
S112
Internet standards and human rights | IGF 2023 WS #460 — Moderator – Sheetal Kumar: Hello, everyone. Good morning. Welcome to this session on Internet Standards and Human Rights….
S113
Day 0 Event #150 Digital Rights in Partnership Strategies for Impact — – **Peggy Hicks** – Works with the Office of the High Commissioner for Human Rights in Geneva. Peggy Hicks: Great questi…
S114
UN Human Rights Council: High level discussion on AI and human rights — And its impact on society is accelerating. And we’re still only just starting to think about what that means. So I think…
S115
Software.gov — Bogdan-Martin advocates for the inclusion of citizens and private entities in government plans, emphasizing the importan…
S116
Protecting Democracy against Bots and Plots — They argue that this is necessary to maintain peace, justice, and strong institutions. Companies are also called upon to…
S117
Closing remarks – Charting the path forward — Bouverot argues for comprehensive inclusion in AI governance discussions, extending beyond just governmental participati…
S118
Unlocking Multistakeholder Cooperation within the UN System: Global Partnerships for Open Internet — Additionally, the guidelines outline a systematic process from initial stakeholder identification and engagement, throug…
S119
WS #82 A Global South perspective on AI governance — Jenny Domino: Yeah, of course. Thank you. Maybe I’ll just quickly comment on all the questions and comments. So on …
S120
WS #395 Applying International Law Principles in the Digital Space — Francisco Brito Cruz: Thank you. I hope you are all listening to me. Hello from Sao Paulo. I’m wanting to be with all of…
S121
Closing remarks — This comment provides the conceptual foundation for the standards discussion that follows. It explains why technical sta…
S122
Shaping AI’s Story Trust Responsibility & Real-World Outcomes — This comment reframes the entire trust vs. innovation debate by rejecting the false dichotomy. It establishes that trust…
S123
Pre 7: Advancing Digital Inclusivity: UNESCO’s Measurement Approaches — This comment reframes multi-stakeholder engagement from a procedural requirement to a trust-building mechanism, introduc…
S124
WS #110 AI Innovation Responsible Development Ethical Imperatives — Godoi outlined UNESCO’s three-pronged approach: fostering opportunities through AI development, mitigating risks through…
S125
From principles to implementation – pathways forward — Gabriela Ramos:Well thank you, thank you so much, and thank you all for being here. I have to first and foremost say tha…
S126
UK Minister warns that NATO must adapt to AI threats — The UK government has announced the launch of a Laboratory for AI Security Research (LASR), an initiative to protect again…
S128
The Alan Turing Institute stresses AI’s vital role in UK national security — A recent report from the Alan Turing Institute’s Centre for Emerging Technology and Security (CETaS), commissioned by the UK government, …
S129
WS #45 Fostering EthicsByDesign w DataGovernance & Multistakeholder — 3. UNESCO’s readiness assessment methodology and ethical impact assessment framework. Rosanna Fanni emphasized UNESCO’s …
S130
Networking Session #60 Risk & impact assessment of AI on human rights & democracy — Final MOOC planned to be held worldwide by early 2026. LG AI Research is collaborating with UNESCO to develop online edu…
S131
https://dig.watch/event/india-ai-impact-summit-2026/ai-that-empowers-safety-growth-and-social-inclusion-in-action-2 — And the course will be delivered on Coursera with a very clear goal, to make AI ethics learning accessible to a wide glo…
S132
Rights and Permissions — Tertiary systems have not remained impervious to these changing demands – general and vocational tracks often inte…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Peggy Hicks
2 arguments, 169 words per minute, 2469 words, 876 seconds
Argument 1
Global norms and practical safeguards are essential to ensure AI works for all people, not just advanced economies. (Peggy Hicks)
EXPLANATION
Peggy emphasizes that AI must be governed by worldwide standards and concrete safeguards so that its benefits reach everyone, not only wealthy nations or dominant platforms. She links these safeguards to responsible AI governance and clear rules for both companies and governments.
EVIDENCE
She notes that practical safeguards are needed to make AI work for people beyond advanced economies and that alignment around global norms will help achieve this goal, citing her remarks about responsible AI governance and global norms [8-9].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Peggy’s call for worldwide standards and concrete safeguards is documented in her remarks during the AI dialogue, emphasizing that AI must benefit everyone, not only advanced economies [S7] and her focus on responsible AI governance [S21].
MAJOR DISCUSSION POINT
Establishing Global Standards and Governance for Responsible AI
AGREED WITH
Tim Curtis, Rein Tammsaar, Alex Walden, Hector Duroir, Yuchil Kim
Argument 2
Rewarding companies that engage responsibly creates a “race to the top” and aligns market incentives with ethical outcomes. (Peggy Hicks)
EXPLANATION
Peggy argues that incentives should be designed so that companies that adopt responsible AI practices are rewarded, encouraging a competitive push toward higher standards. This creates a market‑driven “race to the top” for ethical AI.
EVIDENCE
She states that incentives for companies to be engaged responsibly should reward those companies, highlighting the need for such mechanisms [13].
MAJOR DISCUSSION POINT
Incentivizing Responsible AI through Capital and Market Mechanisms
AGREED WITH
Namit Agarwal, Alex Walden, Hector Duroir
DISAGREED WITH
Namit Agarwal
Tim Curtis
2 arguments, 158 words per minute, 740 words, 280 seconds
Argument 1
UNESCO’s AI ethics recommendations provide a shared foundation that can be translated into local realities through readiness assessments. (Tim Curtis)
EXPLANATION
Tim explains that UNESCO’s AI ethics recommendations offer a common baseline for AI development, which UNESCO is adapting to national contexts via Readiness Assessment Methodology Reports (RAMS). These assessments give countries a clear view of their AI landscape and guide responsible, human‑centred deployment.
EVIDENCE
He mentions UNESCO’s recommendation on AI ethics as a shared foundation and describes the RAMS reports launched in over 80 countries, including a recent assessment for India, which provide evidence-based diagnostics of regional AI landscapes [32-35].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Tim explains that UNESCO’s AI ethics recommendations serve as a common baseline and that the RAMS reports have been launched in over 80 countries to adapt these norms locally, as highlighted in the UNESCO-sponsored event transcript [S11] and the broader UNESCO recommendation overview [S24].
MAJOR DISCUSSION POINT
Establishing Global Standards and Governance for Responsible AI
AGREED WITH
Peggy Hicks, Rein Tammsaar, Alex Walden, Hector Duroir, Yuchil Kim
DISAGREED WITH
Ankit Bose
Argument 2
A global MOOC on AI ethics will make “ethics‑by‑design” training accessible to a wide audience and support day‑to‑day implementation. (Tim Curtis)
EXPLANATION
Tim announces a massive open online course (MOOC) on AI ethics that will be delivered on Coursera, aiming to embed ethics‑by‑design into everyday AI work. The course will give learners practical tools to consider fairness, transparency, safety, accountability and inclusion early in the development cycle.
EVIDENCE
He describes the development of a global MOOC in partnership with LG AI Research, its delivery on Coursera, and its focus on ethics-by-design, practical tools, and day-to-day decision-making [37-44].
MAJOR DISCUSSION POINT
Capacity Building and Education to Embed AI Ethics
AGREED WITH
Yuchil Kim, Ankit Bose, Rein Tammsaar
Rein Tammsaar
2 arguments, 126 words per minute, 576 words, 273 seconds
Argument 1
Four priority areas: trustworthy AI, closing capacity gaps, cross‑border governance, and anchoring AI in human rights. (Rein Tammsaar)
EXPLANATION
Rein outlines the four key priorities identified by UN member states for AI governance: ensuring AI is safe and trustworthy, bridging capacity gaps in developing countries, creating interoperable cross‑border governance, and grounding AI in human‑rights law. These priorities guide the Global Dialogue on AI Governance.
EVIDENCE
He lists the four points – safe, secure, trustworthy AI; closing capacity gaps; cross-border governance and interoperability; and anchoring AI in human rights, including protection of vulnerable groups and bias mitigation [67-74].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Tammsaar outlines these four priority areas – safe and trustworthy AI, capacity-gap closure, interoperable cross-border governance, and human-rights anchoring – in the AI dialogue session summary [S2].
MAJOR DISCUSSION POINT
Establishing Global Standards and Governance for Responsible AI
AGREED WITH
Tim Curtis, Yuchil Kim, Ankit Bose
Argument 2
Human‑rights‑based AI must protect vulnerable groups and address bias, requiring continuous dialogue with civil society. (Rein Tammsaar)
EXPLANATION
Rein stresses that AI systems must be anchored in human‑rights law to safeguard vulnerable populations, mitigate bias and discrimination, and ensure accountability. Ongoing engagement with civil society is essential to monitor and enforce these protections.
EVIDENCE
He notes that anchoring AI in human rights includes protecting vulnerable groups, addressing bias and discrimination, and ensuring oversight and accountability [73-75].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The need to safeguard vulnerable populations and mitigate bias is reinforced by discussions on protecting marginalized groups such as LGBTQI+ individuals in AI systems, as noted in the human-rights impact session [S28].
MAJOR DISCUSSION POINT
Multi‑Stakeholder Engagement, Inclusion, and Localization
AGREED WITH
Alex Walden, Hector Duroir, Parvati Adani, Yuchil Kim
Alex Walden
4 arguments, 184 words per minute, 1023 words, 332 seconds
Argument 1
Adoption of the UN Guiding Principles and other international frameworks guides corporate AI policies and practices. (Alex Walden)
EXPLANATION
Alex states that Google incorporates the UN Guiding Principles on Business and Human Rights, along with OECD and UNESCO frameworks, into its AI policies. These international standards shape the company’s internal governance and operational guidelines.
EVIDENCE
He references using the UN Guiding Principles, OECD work, UNESCO recommendations, and engagement with peers through the UN B-Tech project to inform internal AI governance and policies [138-140].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Alex describes Google’s integration of the UN Guiding Principles on Business and Human Rights, alongside OECD and UNESCO frameworks, into its AI governance policies, as recorded in the session transcript [S2].
MAJOR DISCUSSION POINT
Establishing Global Standards and Governance for Responsible AI
AGREED WITH
Peggy Hicks, Tim Curtis, Rein Tammsaar, Hector Duroir, Yuchil Kim
Argument 2
Google’s multilayered approach includes values‑based policies, model‑level requirements, executive review, and post‑launch monitoring. (Alex Walden)
EXPLANATION
Alex describes Google’s internal AI governance as a tiered system: company‑wide values and AI principles, granular model‑level testing, application‑level guardrails, executive risk reviews before launch, and continuous post‑launch monitoring to catch residual risks.
EVIDENCE
He details model-level requirements, application-level testing and guardrails, executive review of risks, and post-launch monitoring processes that ensure safety and trust throughout the product lifecycle [149-162].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The multi-tiered governance model – spanning corporate values, model-level testing, application-level guardrails, executive risk reviews, and continuous post-launch monitoring – is detailed in the AI dialogue discussion of Google’s internal processes [S2] and reinforced in the broader overview of Google’s layered approach [S1] and [S29].
MAJOR DISCUSSION POINT
Corporate Implementation Practices and Internal Governance
Argument 3
Market demand for trustworthy products pushes firms to embed safety, fairness, and accountability into their offerings. (Alex Walden)
EXPLANATION
Alex points out that Google’s business model creates a market incentive to deliver safe, trusted products because consumer trust drives usage of services like Search and Gmail. This market pressure motivates the company to embed ethical safeguards.
EVIDENCE
He explains that Google’s products are trusted by consumers and that there is a business reason to put out safe, trusted products, which drives internal processes [149-152].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Google’s business model creates a market incentive for safe, trusted products, driving the company to embed ethical safeguards, as highlighted in the session summary [S1].
MAJOR DISCUSSION POINT
Incentivizing Responsible AI through Capital and Market Mechanisms
AGREED WITH
Peggy Hicks, Namit Agarwal, Hector Duroir
Argument 4
Programmatic stakeholder engagement, trusted‑tester programs, and open‑source initiatives enable broader community input. (Alex Walden)
EXPLANATION
Alex outlines Google’s approach to external engagement: a systematic program for regular stakeholder dialogue, trusted‑tester programs that give third‑parties early access to test models, and open‑source projects like the Amplify Initiative that let communities help fine‑tune language models.
EVIDENCE
He mentions a programmatic engagement approach, trusted-tester programs for early access, the Impact Lab’s community research, and the open-source Amplify Initiative for language inclusion [301-310].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Alex outlines Google’s systematic stakeholder engagement strategy, including trusted-tester programs and open-source projects like the Amplify Initiative, to broaden community participation, as captured in the dialogue transcript [S2].
MAJOR DISCUSSION POINT
Multi‑Stakeholder Engagement, Inclusion, and Localization
AGREED WITH
Hector Duroir, Parvati Adani, Rein Tammsaar, Yuchil Kim
Hector Duroir
4 arguments, 150 words per minute, 891 words, 356 seconds
Argument 1
Microsoft aligns its Responsible AI standards with OECD and UNESCO principles and uses them to shape internal programs. (Hector Duroir)
EXPLANATION
Hector explains that Microsoft’s Responsible AI (RAI) standards are built on the OECD AI Principles and UNESCO’s AI ethics recommendation, providing a principled foundation for the company’s internal AI governance and product development.
EVIDENCE
He cites the influence of the OECD AI principles and UNESCO recommendation on Microsoft’s AI governance program and the creation of the RAI standard in 2019 [184-185] and earlier reference to the RAI standard [176-177].
MAJOR DISCUSSION POINT
Establishing Global Standards and Governance for Responsible AI
AGREED WITH
Peggy Hicks, Tim Curtis, Rein Tammsaar, Alex Walden, Yuchil Kim
Argument 2
Microsoft’s Office of Responsible AI and Sensitive Use‑Case program embed principles into product development and involve board‑level oversight. (Hector Duroir)
EXPLANATION
He describes the Office of Responsible AI, created in 2019, which runs a Sensitive Use‑Case program to triage high‑risk AI applications. When necessary, cases are escalated to the Aether committee, which includes senior leadership and board members, ensuring governance at the highest level.
EVIDENCE
He details the creation of the Office of Responsible AI, the Sensitive Use-Case team’s triage work, and escalation to the Aether committee involving the CTO and board level [176-182].
MAJOR DISCUSSION POINT
Corporate Implementation Practices and Internal Governance
Argument 3
Collaboration with NGOs creates community‑led benchmarks that capture cultural and linguistic nuances beyond English‑centric tools. (Hector Duroir)
EXPLANATION
Hector highlights a partnership with NGOs in India on the Samishka project, which develops community‑led safety benchmarks that reflect local cultural and linguistic contexts, addressing the limitation of English‑only safety tools.
EVIDENCE
He explains that Microsoft works with NGOs on the Samishka project to build community-led benchmarks, providing safety tools grounded in specific cultural and contextual aspects, especially for non-English languages [282-285].
MAJOR DISCUSSION POINT
Multi‑Stakeholder Engagement, Inclusion, and Localization
AGREED WITH
Alex Walden, Parvati Adani, Rein Tammsaar, Yuchil Kim
Argument 4
Voluntary commitments at international summits help translate regulatory expectations into concrete corporate actions. (Hector Duroir)
EXPLANATION
Hector notes that voluntary commitments signed at AI summits, such as those at Bletchley Park (UK) and in South Korea, have guided Microsoft’s model testing and risk‑management practices, turning high‑level expectations into operational triggers.
EVIDENCE
He references voluntary commitments from AI summits, including Bletchley Park and the South Korea summit, which informed Microsoft’s testing approach for public safety and national security risks [188-192].
MAJOR DISCUSSION POINT
Incentivizing Responsible AI through Capital and Market Mechanisms
AGREED WITH
Peggy Hicks, Namit Agarwal, Alex Walden
DISAGREED WITH
Peggy Hicks
Yuchil Kim
3 arguments, 146 words per minute, 272 words, 111 seconds
Argument 1
LG integrates UNESCO recommendations into its AI risk standards and publishes annual accountability reports to demonstrate compliance. (Yuchil Kim)
EXPLANATION
Yuchil states that LG incorporates UNESCO’s AI ethics recommendations into its internal risk standards and issues an annual accountability report to show how it meets those standards, thereby providing transparency and compliance evidence.
EVIDENCE
He mentions that LG aligns its AI risk standards with UNESCO recommendations, has created an AI-powered data compliance system, and released the third edition of an annual accountability report on AI ethics activities [209-213].
MAJOR DISCUSSION POINT
Establishing Global Standards and Governance for Responsible AI
AGREED WITH
Peggy Hicks, Tim Curtis, Rein Tammsaar, Alex Walden, Hector Duroir
Argument 2
Providing practitioners with concrete standards, risk tools, and transparent reporting bridges the gap between theory and practice. (Yuchil Kim)
EXPLANATION
Yuchil explains that LG focuses on giving practitioners clear standards, risk‑assessment tools, and transparent reporting to turn theoretical AI ethics guidance into actionable day‑to‑day practice.
EVIDENCE
He describes the MOOC’s role in bridging theory and practice, the provision of concrete risk standards, and the publication of an annual report to increase transparency [209-213].
MAJOR DISCUSSION POINT
Capacity Building and Education to Embed AI Ethics
AGREED WITH
Tim Curtis, Ankit Bose, Rein Tammsaar
Argument 3
LG’s annual AI ethics report and internal policies provide transparent guidance for developers and product teams. (Yuchil Kim)
EXPLANATION
Yuchil notes that LG’s yearly AI ethics accountability report, together with internal policies, offers developers clear guidance on responsible AI development and helps embed ethical considerations into product pipelines.
EVIDENCE
He references the annual accountability report on AI ethics activities and internal policies that support responsible AI implementation [209-213].
MAJOR DISCUSSION POINT
Corporate Implementation Practices and Internal Governance
AGREED WITH
Alex Walden, Hector Duroir, Parvati Adani, Rein Tammsaar
Ankit Bose
2 arguments, 179 words per minute, 758 words, 253 seconds
Argument 1
NASSCOM builds capacity across the Indian tech ecosystem—government, startups, SMEs—so they can adopt responsible AI early. (Ankit Bose)
EXPLANATION
Ankit describes NASSCOM’s four‑decade effort to develop the Indian tech sector, focusing on building capacity, creating open assets, and helping governments, startups, and SMEs adopt responsible AI practices from the outset.
EVIDENCE
He outlines NASSCOM’s mission since 2021 to address gaps in responsible AI, develop open assets, build capacity across government, startups, and SMEs, and promote early adoption of responsible AI governance for upside benefits [89-105].
MAJOR DISCUSSION POINT
Capacity Building and Education to Embed AI Ethics
AGREED WITH
Tim Curtis, Yuchil Kim, Rein Tammsaar
Argument 2
Internal silos (tech, business, legal, finance) hinder responsible AI; cross‑functional collaboration on use‑case basis is needed. (Ankit Bose)
EXPLANATION
Ankit points out that different functional groups within companies often work in isolation, which impedes responsible AI implementation. He advocates for collaborative, use‑case‑driven approaches that bring together tech, business, legal, and finance teams.
EVIDENCE
He describes how internal groups (tech, business, legal, finance) operate in silos, the need for collaboration on a use-case basis, and the challenge of translating frameworks into actionable steps for developers [250-267].
MAJOR DISCUSSION POINT
Corporate Implementation Practices and Internal Governance
DISAGREED WITH
Tim Curtis
Namit Agarwal
2 arguments, 175 words per minute, 681 words, 233 seconds
Argument 1
Investors and civil‑society actors should maintain ongoing dialogue with companies to accelerate responsible practices. (Namit Agarwal)
EXPLANATION
Namit stresses that continuous engagement between investors, civil‑society groups, and companies is crucial for speeding up the adoption of responsible AI. Dialogue helps bring fence‑sitters up to speed and ensures that best practices are shared.
EVIDENCE
He emphasizes the importance of ongoing engagement, noting that investors have already interacted with Google and Microsoft, and that dialogue should also target fence-sitters to accelerate responsible innovation [236-241].
MAJOR DISCUSSION POINT
Multi‑Stakeholder Engagement, Inclusion, and Localization
Argument 2
Capital can drive responsible innovation, but it must be tied to clear board‑level AI governance, executive incentives, and impact assessments. (Namit Agarwal)
EXPLANATION
Namit argues that while capital can incentivize responsible AI, it must be linked to concrete governance structures such as board responsibility, aligned executive incentives, and robust human‑rights impact assessments to be effective.
EVIDENCE
He notes that capital alone is insufficient, describing the need for board-level AI governance, executive incentives aligned with long-term risk management, and AI-specific impact assessments as catalytic for investors [226-232].
MAJOR DISCUSSION POINT
Incentivizing Responsible AI through Capital and Market Mechanisms
AGREED WITH
Peggy Hicks, Alex Walden, Hector Duroir
DISAGREED WITH
Peggy Hicks
Parvati Adani
2 arguments, 133 words per minute, 564 words, 253 seconds
Argument 1
Highlighting AI’s inability to self‑regulate underscores the need for human‑driven ethical guidance and education. (Parvati Adani)
EXPLANATION
Parvati reflects on an experiment where she asked an AI tool about its ethical limits and received no answer, illustrating that AI cannot self‑regulate and that human oversight and education are essential.
EVIDENCE
She recounts asking the AI whether it has ethical limits, receiving a response that it does not know and that no one can verify its conscience, highlighting the philosophical gap and the need for human-driven guidance [322-332].
MAJOR DISCUSSION POINT
Capacity Building and Education to Embed AI Ethics
Argument 2
Inclusive AI must reflect diverse languages, genders, and informal contexts; otherwise frameworks remain incomplete by design. (Parvati Adani)
EXPLANATION
Parvati argues that AI frameworks that ignore multilingualism, gender diversity, and informal contexts are inherently incomplete. She calls for interoperable, flexible systems that incorporate these dimensions to achieve truly inclusive AI.
EVIDENCE
She states that any framework lacking language, gender, and informal context considerations is incomplete by design, emphasizing the need for inclusive, interoperable systems [335-338].
MAJOR DISCUSSION POINT
Multi‑Stakeholder Engagement, Inclusion, and Localization
AGREED WITH
Alex Walden, Hector Duroir, Rein Tammsaar, Yuchil Kim
Agreements
Agreement Points
Global standards and practical safeguards are essential to ensure AI works for all people, not only advanced economies.
Speakers: Peggy Hicks, Tim Curtis, Rein Tammsaar, Alex Walden, Hector Duroir, Yuchil Kim
Global norms and practical safeguards are essential to ensure AI works for all people, not just advanced economies. (Peggy Hicks) UNESCO’s AI ethics recommendations provide a shared foundation that can be translated into local realities through readiness assessments. (Tim Curtis) Four priority areas: trustworthy AI, closing capacity gaps, cross‑border governance, and anchoring AI in human rights. (Rein Tammsaar) Adoption of the UN Guiding Principles and other international frameworks guides corporate AI policies and practices. (Alex Walden) Microsoft aligns its Responsible AI standards with OECD and UNESCO principles and uses them to shape internal programs. (Hector Duroir) LG integrates UNESCO recommendations into its AI risk standards and publishes annual accountability reports to demonstrate compliance. (Yuchil Kim)
All speakers stressed that worldwide norms – such as the UNESCO recommendation on AI ethics, the OECD AI principles and the UN Guiding Principles on Business and Human Rights – must be turned into concrete, practical safeguards so that AI benefits everyone, not just advanced economies [8-9][32-35][67-74][138-140][184-185][209-213].
POLICY CONTEXT (KNOWLEDGE BASE)
This consensus mirrors the emphasis in the “Setting the Rules: Global AI Standards for Growth and Governance” reports, which argue that standards should serve inclusive global equity and adaptable, process-oriented safeguards rather than favoring advanced economies [S48][S49][S50].
Incentivising responsible AI through market mechanisms and rewards creates a “race to the top”.
Speakers: Peggy Hicks, Namit Agarwal, Alex Walden, Hector Duroir
Rewarding companies that engage responsibly creates a “race to the top” and aligns market incentives with ethical outcomes. (Peggy Hicks) Capital can drive responsible innovation, but it must be tied to clear board‑level AI governance, executive incentives, and impact assessments. (Namit Agarwal) Market demand for trustworthy products pushes firms to embed safety, fairness, and accountability into their offerings. (Alex Walden) Voluntary commitments at international summits help translate regulatory expectations into concrete corporate actions. (Hector Duroir)
Speakers agreed that aligning financial incentives – rewarding responsible firms, linking capital to board-level AI oversight, responding to market demand for trustworthy products, and using voluntary summit commitments – can spur a “race to the top” for ethical AI [13][226-232][149-152][188-192].
Capacity building and education are needed to turn AI ethics theory into day‑to‑day practice.
Speakers: Tim Curtis, Yuchil Kim, Ankit Bose, Rein Tammsaar
A global MOOC on AI ethics will make “ethics‑by‑design” training accessible to a wide audience and support day‑to‑day implementation. (Tim Curtis) Providing practitioners with concrete standards, risk tools, and transparent reporting bridges the gap between theory and practice. (Yuchil Kim) NASSCOM builds capacity across the Indian tech ecosystem—government, startups, SMEs—so they can adopt responsible AI early. (Ankit Bose) Four priority areas: trustworthy AI, closing capacity gaps, cross‑border governance, and anchoring AI in human rights. (Rein Tammsaar)
All highlighted the need for concrete capacity-building measures – UNESCO’s RAMS assessments and a global MOOC, LG’s practitioner-focused standards, NASSCOM’s ecosystem-wide training, and the broader priority of closing capacity gaps – to make AI ethics actionable on the ground [32-35][37-44][209-213][89-105][68-70].
POLICY CONTEXT (KNOWLEDGE BASE)
Capacity-building is highlighted in policy dialogues such as the AI Governance Implementation and Capacity Building panel and the Embedding Human Rights in AI Standards session, both calling for sustained education programmes to translate ethics into operational practice [S52][S53].
Multi‑stakeholder engagement and inclusion of diverse languages, cultures and civil‑society perspectives are essential for trustworthy AI.
Speakers: Alex Walden, Hector Duroir, Parvati Adani, Rein Tammsaar, Yuchil Kim
Programmatic stakeholder engagement, trusted‑tester programs, and open‑source initiatives enable broader community input. (Alex Walden) Collaboration with NGOs creates community‑led benchmarks that capture cultural and linguistic nuances beyond English‑centric tools. (Hector Duroir) Inclusive AI must reflect diverse languages, genders, and informal contexts; otherwise frameworks remain incomplete by design. (Parvati Adani) Human‑rights‑based AI must protect vulnerable groups and address bias, requiring continuous dialogue with civil society. (Rein Tammsaar) LG’s annual AI ethics report and internal policies provide transparent guidance for developers and product teams. (Yuchil Kim)
Speakers converged on the importance of systematic engagement with civil society, NGOs, academia and language-diverse communities, and on publishing transparent guidance, to ensure AI systems respect cultural, linguistic and gender diversity and protect vulnerable groups [301-310][276-285][335-338][73-75][209-213].
POLICY CONTEXT (KNOWLEDGE BASE)
Multi-stakeholder approaches are repeatedly endorsed in AI governance literature, including the “Who Watches the Watchers” summary, WSIS Action Line C10, and the Global Human Rights Approach, all stressing inclusive processes to build trust and legitimacy [S60][S72][S73][S75].
Similar Viewpoints
Both stress that global standards must be adapted to national contexts through tools such as readiness assessments to make AI beneficial for everyone [8-9][32-35].
Speakers: Peggy Hicks, Tim Curtis
Global norms and practical safeguards are essential to ensure AI works for all people, not just advanced economies. (Peggy Hicks) UNESCO’s AI ethics recommendations provide a shared foundation that can be translated into local realities through readiness assessments. (Tim Curtis)
Both companies embed internationally‑agreed principles (UN Guiding Principles, OECD, UNESCO) into their internal AI governance structures [138-140][184-185].
Speakers: Alex Walden, Hector Duroir
Adoption of the UN Guiding Principles and other international frameworks guides corporate AI policies and practices. (Alex Walden) Microsoft aligns its Responsible AI standards with OECD and UNESCO principles and uses them to shape internal programs. (Hector Duroir)
Both underline that effective governance (board‑level oversight, incentives) and ecosystem‑wide capacity building are needed to translate investment into responsible AI outcomes [226-232][89-105].
Speakers: Namit Agarwal, Ankit Bose
Capital can drive responsible innovation, but it must be tied to clear board‑level AI governance, executive incentives, and impact assessments. (Namit Agarwal) NASSCOM builds capacity across the Indian tech ecosystem—government, startups, SMEs—so they can adopt responsible AI early. (Ankit Bose)
Both stress that AI frameworks that ignore linguistic, gender and vulnerable‑group considerations are fundamentally incomplete and must involve civil‑society dialogue [335-338][73-75].
Speakers: Parvati Adani, Rein Tammsaar
Inclusive AI must reflect diverse languages, genders, and informal contexts; otherwise frameworks remain incomplete by design. (Parvati Adani) Human‑rights‑based AI must protect vulnerable groups and address bias, requiring continuous dialogue with civil society. (Rein Tammsaar)
Unexpected Consensus
Corporate leaders and civil‑society participants both highlighted language and cultural diversity as a core requirement for trustworthy AI.
Speakers: Alex Walden, Hector Duroir, Parvati Adani
Programmatic stakeholder engagement, trusted‑tester programs, and open‑source initiatives enable broader community input. (Alex Walden) Collaboration with NGOs creates community‑led benchmarks that capture cultural and linguistic nuances beyond English‑centric tools. (Hector Duroir) Inclusive AI must reflect diverse languages, genders, and informal contexts; otherwise frameworks remain incomplete by design. (Parvati Adani)
While corporate representatives usually focus on technical safeguards, they unexpectedly aligned with civil-society’s call for multilingual and culturally-aware AI, emphasizing community-led benchmarks and open-source tools as essential for inclusion [301-310][276-285][335-338].
POLICY CONTEXT (KNOWLEDGE BASE)
The importance of linguistic and cultural diversity is framed as a core principle in discussions on preserving culture and multilingualism, emphasizing AI systems must support diverse linguistic contexts to be trustworthy [S69][S70].
Overall Assessment

There is strong consensus that global norms, human‑rights‑based principles and practical safeguards are the foundation for responsible AI, that market incentives and board‑level governance can create a race to the top, that capacity‑building tools (MOOC, readiness assessments, ecosystem training) are needed to operationalise ethics, and that multi‑stakeholder, linguistically‑inclusive engagement is essential.

High consensus across UN, civil‑society and major tech firms on the need for standards, incentives, capacity building and inclusive stakeholder engagement. This convergence suggests a solid basis for coordinated policy actions, joint initiatives and the development of interoperable frameworks that can be scaled globally.

Differences
Different Viewpoints
Voluntary commitments vs formal regulatory safeguards
Speakers: Peggy Hicks, Hector Duroir
Responsible and effective AI governance requires clarity of rules for both companies and government, and alignment around the global norms will help us to get to that point. (Peggy Hicks) Voluntary commitments at international summits help translate regulatory expectations into concrete corporate actions. (Hector Duroir)
Peggy stresses the need for clear, enforceable rules and practical safeguards backed by governments to make AI work for everyone, whereas Hector highlights the role of voluntary industry commitments signed at AI summits as the primary mechanism to operationalise standards, reflecting a split between mandatory regulation and voluntary self-regulation. [8-9][188-192]
POLICY CONTEXT (KNOWLEDGE BASE)
The tension between self-regulation and statutory rules is documented in debates over professional standards, parliamentary sessions advocating layered approaches, and the voluntary commitments framework that positions corporate pledges as interim measures pending formal regulation [S51][S63][S64][S65][S66].
Proliferation of frameworks vs need for unified actionable guidance
Speakers: Ankit Bose, Tim Curtis
Internal silos (tech, business, legal, finance) hinder responsible AI; cross‑functional collaboration on use‑case basis is needed. (Ankit Bose) UNESCO’s AI ethics recommendations provide a shared foundation that can be translated into local realities through readiness assessments. (Tim Curtis)
Ankit argues that the multitude of national and sectoral AI frameworks creates confusion for developers, calling for concrete, actionable guidance, while Tim asserts that UNESCO’s global recommendations, operationalised via RAMS assessments and a MOOC, already supply a unified foundation for responsible AI, indicating a disagreement on whether existing global frameworks are sufficient or need simplification. [258-267][32-35][37-44]
POLICY CONTEXT (KNOWLEDGE BASE)
Concerns about an overabundance of AI governance frameworks versus the need for coherent, actionable guidance are articulated in the “AI That Empowers Safety Growth and Social Inclusion” discussion, which calls for differentiated yet coordinated approaches for various organisational contexts [S55].
Incentive design: race‑to‑the‑top rewards vs board‑level governance tied to capital
Speakers: Peggy Hicks, Namit Agarwal
Rewarding companies that engage responsibly creates a “race to the top” and aligns market incentives with ethical outcomes. (Peggy Hicks) Capital can drive responsible innovation, but it must be tied to clear board‑level AI governance, executive incentives, and impact assessments. (Namit Agarwal)
Both speakers agree incentives are needed, but Peggy focuses on broadly rewarding responsible companies to create a competitive “race to the top,” whereas Namit stresses that investment must be linked to specific governance structures (board responsibility, executive incentives, and human-rights impact assessments), showing a divergence on how incentives should be structured. [13][226-232]
Unexpected Differences
Philosophical limits of AI vs procedural corporate safeguards
Speakers: Parvati Adani, Alex Walden
Highlighting AI’s inability to self‑regulate underscores the need for human‑driven ethical guidance and education. (Parvati Adani) Google’s multilayered approach includes values‑based policies, model‑level requirements, executive review, and post‑launch monitoring. (Alex Walden)
Parvati brings a philosophical argument that AI lacks any intrinsic ethical conscience and therefore requires human oversight and education, whereas Alex concentrates on concrete corporate processes and tools to embed ethics, revealing an unexpected split between a fundamental philosophical stance and a pragmatic procedural approach. [322-332][149-162]
POLICY CONTEXT (KNOWLEDGE BASE)
The fundamental legal and philosophical challenges of AI, such as questions of ownership and creativity, are highlighted in the Secure Finance Risk-Based AI Policy for the Banking Sector, contrasting with more procedural corporate safeguard models [S59].
Overall Assessment

The panel displayed moderate disagreement centered on how best to translate global AI ethics standards into practice. Key tensions emerged between voluntary industry commitments versus formal regulatory safeguards, the overload of disparate frameworks versus the need for unified actionable guidance, and differing designs of incentive mechanisms (broad market rewards versus board‑level governance tied to capital). While participants shared common goals—responsible, human‑rights‑based AI and inclusive stakeholder engagement—their preferred pathways diverged, indicating that consensus on implementation strategies remains unsettled.

Moderate disagreement: participants largely agree on overarching objectives but differ on the mechanisms (regulatory vs voluntary, framework simplification, incentive architecture). This suggests that future policy work will need to reconcile these approaches to achieve coherent, scalable AI governance.

Partial Agreements
Both agree that AI must be governed responsibly and anchored in human rights, but Peggy emphasizes the need for global norms and practical safeguards, while Rein highlights specific priority areas (trust, capacity, interoperability, human‑rights anchoring) without detailing the mechanisms for safeguards. [8-9][67-74]
Speakers: Peggy Hicks, Rein Tammsaar
Responsible and effective AI governance requires clarity of rules for both companies and government. (Peggy Hicks) Four priority areas: trustworthy AI, closing capacity gaps, cross‑border governance, and anchoring AI in human rights. (Rein Tammsaar)
Both stress the importance of external collaboration and stakeholder engagement, yet Alex focuses on internal programs (trusted testers, Impact Lab, Amplify Initiative) while Hector points to partnerships with NGOs for community‑led benchmarks, showing agreement on the goal of inclusive engagement but differing on the primary partnership models. [301-310][282-285]
Speakers: Alex Walden, Hector Duroir
Programmatic stakeholder engagement, trusted‑tester programs, and open‑source initiatives enable broader community input. (Alex Walden) Collaboration with NGOs creates community‑led benchmarks that capture cultural and linguistic nuances beyond English‑centric tools. (Hector Duroir)
Takeaways
Key takeaways
Global norms, standards and practical safeguards are essential to ensure AI benefits all people, not just advanced economies.
Four priority areas identified by the UN Global Dialogue: trustworthy AI, closing capacity gaps, cross‑border governance, and anchoring AI in human rights.
UNESCO’s AI ethics recommendations and the UN Guiding Principles on Business and Human Rights are being used as common foundations for corporate policies.
Capacity‑building initiatives such as UNESCO’s global AI‑ethics MOOC and NASSCOM’s ecosystem‑wide programs aim to translate theory into day‑to‑day practice.
Large tech firms (Google, Microsoft, LG) have multilayered internal governance models that combine values‑based policies, model‑level testing, executive oversight and post‑launch monitoring.
Cross‑functional silos within companies hinder responsible AI; collaboration across tech, legal, finance and business units is needed.
Multi‑stakeholder engagement—including civil society, academia, NGOs and investors—is critical for inclusive, culturally‑aware AI and for creating community‑led benchmarks.
Incentives tied to capital (board‑level AI governance, executive incentives, impact assessments) can turn responsible AI into a “race to the top”.
Voluntary commitments made at international summits provide a pragmatic bridge between regulation and industry action.
Resolutions and action items
Launch of a UNESCO‑partnered global MOOC on AI ethics (to be delivered on Coursera) – timeline: upcoming launch, with open invitation for learners and partners.
Continuation of the BTEC convenings and the UN‑led Global Dialogue on AI Governance, with a flagship meeting scheduled for July in Geneva.
Encourage companies to publish transparent annual AI‑ethics accountability reports (e.g., LG’s third edition, Microsoft’s Sensitive Use‑Case disclosures).
Investors and civil‑society groups to adopt a three‑step engagement framework: board‑level AI oversight, product‑level governance checks, and robust human‑rights impact assessments.
Develop and pilot community‑led benchmark datasets for safety and bias testing in non‑English contexts (e.g., Microsoft’s Samishka project in India).
NASSCOM to expand capacity‑building workshops and open‑asset libraries for startups, SMEs and government agencies across India.
All panelists agreed to share best‑practice case studies with the World Benchmarking Alliance for inclusion in future benchmarking reports.
Unresolved issues
How to harmonise the growing number of national and sectoral AI frameworks into a single, actionable roadmap for developers.
Concrete mechanisms for financing the capacity gaps of developing‑country firms (infrastructure, compute, talent).
Standardised methodology for AI‑specific human‑rights impact assessments that can be audited across industries.
Scalable ways to bring small startups into formal responsible‑AI processes without over‑burdening them.
Long‑term governance of post‑launch monitoring: who owns the data, how often updates are required, and how to enforce remediation.
Ensuring that multilingual and informal language contexts are fully integrated into safety tools beyond ad‑hoc community projects.
Suggested compromises
Adopt a flexible, non‑prescriptive approach: the Global Dialogue will not impose a single model but will identify common ground and build on existing initiatives.
Use voluntary commitments at international summits as a stepping‑stone toward formal regulation, allowing companies to demonstrate progress while regulators develop standards.
Combine top‑down standards (UN, OECD, UNESCO) with bottom‑up, community‑led benchmarks to balance global consistency with local relevance.
Encourage a programmatic stakeholder‑engagement process (regular dialogue) complemented by ad‑hoc consultations for specific product launches.
Promote a “race to the top” incentive structure where companies that meet board‑level AI governance and impact‑assessment criteria receive market‑based rewards or preferential access to funding.
Thought Provoking Comments
Trust is not something technology earns through ambition alone but really it is earned through design choices, through safeguards and accountability.
Frames trust as a product of intentional design rather than a by‑product of innovation, shifting the conversation from abstract ethics to concrete engineering practices.
Set the tone for the rest of the session, prompting other speakers (e.g., Alex Walden, Hector Duroir) to describe concrete governance mechanisms and leading to the introduction of the UNESCO MOOC on ‘ethics by design’.
Speaker: Tim Curtis
We are developing a global massive open online course (MOOC) on the ethics of artificial intelligence… with a clear goal to make AI ethics learning accessible to a wide global audience and to make a practical… for day‑to‑day work.
Introduces a tangible, scalable solution that moves the discussion from policy rhetoric to capacity‑building action, addressing the gap between standards and implementation.
Created a new topic of discussion around education and skill‑building, which later resurfaced when Yuchil Kim and Alex Walden referenced training programs and the need for practical tools.
Speaker: Tim Curtis
Four priorities from member states: 1) safe, secure and trustworthy AI; 2) closing capacity gaps; 3) cross‑border governance and interoperability; 4) AI anchored in human rights and international law.
Synthesises the diverse concerns of UN member states into a clear, actionable framework, providing a shared reference point for the panel.
Guided the subsequent contributions, as speakers repeatedly mapped their own initiatives (e.g., Google’s model‑level requirements, Microsoft’s Sensitive Use Case program) onto these four pillars.
Speaker: Rein Tammsaar
Before any product goes to market, there are model‑level requirements, application‑level testing, executive review, and post‑launch monitoring.
Offers a concrete, multi‑layered process that illustrates how a large tech company operationalises responsible AI, moving the dialogue from abstract principles to real‑world workflow.
Prompted other participants (e.g., Hector Duroir, Namit Agarwal) to discuss similar governance structures and to compare accountability mechanisms across companies.
Speaker: Alex Walden
Our Sensitive Use Case program triages high‑risk AI applications, brings them to the Aether ethics committee, and involves the board at the CTO and senior‑leadership level.
Shows how Microsoft embeds ethical review into its organisational hierarchy, highlighting the importance of board‑level oversight.
Reinforced the theme of executive responsibility introduced by Alex, and later fed into Namit’s call for board‑level AI risk oversight.
Speaker: Hector Duroir
Only about 10 % of the 2,000 companies we assessed meet global governance expectations and none disclose human‑rights impact assessments.
Provides hard data that exposes a substantial compliance gap, challenging the assumption that most firms are already aligned with standards.
Shifted the tone from showcasing best practices to confronting systemic shortcomings, leading to a deeper discussion on investor leverage and concrete accountability measures.
Speaker: Namit Agarwal
Investors should first ask whether there is clear board‑level responsibility on AI risk, whether executive incentives are aligned with long‑term human‑rights risk mitigation, and whether governance applies across the full AI value chain.
Translates the earlier data gap into actionable questions for capital providers, linking finance directly to responsible AI governance.
Sparked a concrete set of recommendations that other panelists (e.g., Alex, Hector) referenced when describing their own internal oversight mechanisms.
Speaker: Namit Agarwal
When I asked the AI tool whether it has ethical limits, it replied ‘I don’t know.’ This highlighted that a system cannot understand its own responsibilities or consequences.
Introduces a philosophical and practical paradox—AI lacks self‑awareness of ethics—underscoring why human governance is indispensable.
Created a reflective pause in the discussion, reinforcing earlier points about human‑centred oversight and prompting participants to stress the need for external accountability mechanisms.
Speaker: Parvati Adani
We run programmatic stakeholder engagement, trusted‑tester programs, and the Impact Lab’s Amplify Initiative that lets communities fine‑tune language models, making the process open‑source and collaborative.
Highlights innovative, inclusive engagement models that go beyond internal checks, showing how companies can co‑create safeguards with civil society.
Extended the conversation on stakeholder involvement, linking back to Hector’s NGO collaborations and reinforcing the panel’s consensus on the necessity of multi‑stakeholder loops.
Speaker: Alex Walden
If we want to go fast, go alone. If we want to go far, go together – an African proverb reminding us that building a trustworthy AI ecosystem is a long‑term collective effort.
Summarises the collaborative ethos needed across industry, academia, and civil society, tying together the many initiatives mentioned earlier.
Served as a thematic bridge to the closing remarks, reinforcing the session’s call for sustained, joint action rather than isolated pilots.
Speaker: Yuchil Kim
Overall Assessment

The discussion was driven forward by a handful of pivotal remarks that moved the conversation from high‑level rhetoric to concrete, actionable pathways. Tim Curtis’s framing of trust as a design problem and the announcement of a global AI‑ethics MOOC introduced a practical solution that anchored later talks. Rein Tammsaar’s concise articulation of the four UN‑derived priorities gave the panel a shared roadmap, which each speaker then mapped their own initiatives onto. Alex Walden and Hector Duroir provided vivid, multi‑layered governance models that illustrated how large tech firms can operationalise those priorities. Namit Agarwal’s data‑driven critique exposed a systemic compliance gap and his investor‑focused recommendations turned the critique into a set of concrete levers for change. Parvati Adani’s philosophical probe of an AI’s self‑awareness reminded participants why human oversight remains essential. Together, these comments created turning points that shifted the tone from descriptive to prescriptive, deepened the analysis of accountability mechanisms, and reinforced the central message that responsible AI requires coordinated, multi‑stakeholder effort across standards, education, corporate governance, and capital markets.

Follow-up Questions
How does this work? How do you differentiate engagement across big companies, services firms, SMEs, and startups?
Understanding tailored engagement strategies is crucial for ensuring responsible AI practices across diverse organization sizes.
Speaker: Peggy Hicks (to Ankit Bose)
How have you been able to surmount challenges in getting human‑rights messages heard within the company?
Insights into internal advocacy help identify mechanisms to overcome resistance and embed ethics at scale.
Speaker: Peggy Hicks (to Alex Walden)
What are the drivers for external engagement across the sector and with governments?
Clarifying external collaboration incentives informs how companies can align with policy and civil‑society expectations.
Speaker: Peggy Hicks (to Hector Duroir)
What concrete suggestions from your work can push the discussion forward on incentivising responsible AI?
Seeking actionable steps for investors and other stakeholders to create market‑based incentives for good governance.
Speaker: Peggy Hicks (to Namit Agarwal)
From the NASSCOM perspective, how do you look at stakeholder engagement and internal silos?
Addressing internal coordination and the gap between frameworks and actionable steps is key for effective implementation.
Speaker: Peggy Hicks (to Ankit Bose)
Can you share quick comments on how your company is facing those challenges?
Request for concrete examples of how Microsoft tackles internal and external responsible‑AI challenges.
Speaker: Peggy Hicks (to Hector Duroir)
Could you share your perspective on stakeholder engagement and programmatic approaches?
Understanding Google’s methods (trusted tester programs, Impact Lab, open‑source initiatives) can guide best practices.
Speaker: Peggy Hicks (to Alex Walden)
How effective is UNESCO’s AI ethics MOOC in building global practitioner capacity?
Evaluating reach, uptake, and impact of the MOOC is needed to ensure it translates ethics‑by‑design into practice.
Speaker: Tim Curtis (implied)
How can multilingual safety evaluation tools be developed beyond English norms?
Ensuring AI safety across languages is essential for inclusion and accurate risk assessment in diverse contexts.
Speaker: Hector Duroir
How can the gap between high‑level AI frameworks and actionable implementation for technologists be bridged?
Practitioners need concrete guidance to move from policy to day‑to‑day development practices.
Speaker: Ankit Bose
What is the impact of board‑level AI governance and executive incentives on responsible AI outcomes?
Research is needed to link governance structures with measurable improvements in AI safety and rights‑respect.
Speaker: Namit Agarwal
How can AI‑specific human rights impact assessments be standardized and published meaningfully?
Current disclosures are scarce; developing robust assessment methodologies is critical for accountability.
Speaker: Namit Agarwal
What role can civil‑society‑led benchmarks (e.g., Samishka) play in creating community‑grounded safety tools?
Studying collaborative benchmark creation can improve contextual relevance of safety evaluations.
Speaker: Hector Duroir
What are the philosophical and ethical limits of AI systems regarding self‑awareness and conscience?
Exploring AI’s inability to understand its own ethical limits raises fundamental questions for governance.
Speaker: Parvati Adani
How can data be leveraged to create incentives that align capital with responsible AI governance?
Identifying data‑driven mechanisms can help tie investment decisions to AI risk management.
Speaker: Peggy Hicks (implied)
How effective are trusted tester programs and open‑source initiatives like Amplify in improving product safety and language inclusion?
Assessing these programs’ impact can inform broader adoption of collaborative safety testing.
Speaker: Alex Walden

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

AI 2.0 The Future of Learning in India

Session at a glance: Summary, keypoints, and speakers overview

Summary

The session opened with Speaker 1 outlining the joint CPRG-Future of Society initiative and announcing a new report on AI use in school education, following earlier reports on higher-education AI adoption [1-9].


Pranav Kothari presented survey results showing that roughly half of private-school students in Delhi regularly use generative-AI tools such as ChatGPT or Gemini, mainly for information search and writing assistance [25-27].


While students perceive AI as helpful for both school and entrance-exam preparation, they also report frequent hallucinations and lower accuracy for logical or numerical tasks, leading many to still prefer YouTube or ICT resources over AI [39-48].


Across the panel, speakers agreed that AI is viewed as a supplementary aid rather than a replacement for teachers, with a strong preference for human interaction and a warning that AI should augment rather than shortcut creativity [56-57][78-84].


Professor KK Aggarwal emphasized that AI adoption is outpacing earlier IT adoption and stressed the need to ensure AI supports creativity without diminishing thinking skills [78-84].


Speaker 2 highlighted AI as a 360-degree paradigm shift that will fossilise institutions that fail to adapt, arguing that higher-education excellence will determine national competitiveness in the coming decades [97-104][108-115].


Pankaj Sir described a technology-driven curriculum overhaul that revised 72 programmes without budgetary spend, positioning teachers as mentors and AI supervisors rather than masters, and warned of bias, hallucinations, and unequal access as key risks [170-176][178-184][204-207].


Patil Sir illustrated the rapid diffusion of AI tools (e.g., 7 crore Indian users of ChatGPT within months) and identified infrastructure gaps, low AI literacy among 1 crore teachers, and regional disparities as major implementation challenges [214-221][224-236][238-244].


He also noted that AI curricula are being introduced from third grade to demystify the technology, and that AI labs and centres of excellence at IITs are being leveraged to bridge the divide [236-244][250-257].


Aditi Nanda from Intel underscored industry-academia collaborations that provide AI-enabled devices capable of offline, multilingual tutoring, thereby addressing language barriers and reducing reliance on cloud connectivity [300-311][332-339][340-347].


Suresh Yadav called for a shift from a consumption-based to a creation-based education system, urging institutions to become problem-solving hubs and to integrate primary, secondary and higher education through technology [386-401].


Pankaj Sir further proposed an AI-oriented regulator that would conduct 70-80 % of assessments automatically and stressed the importance of embedding Indian knowledge and languages in AI systems to preserve cultural heritage [404-412][418-424].


Across the discussion, participants concurred that reimagining institutions requires coordinated policy, ethical AI use, and scalable digital infrastructure to realise the vision of a “Viksit Bharat 2047” where AI is an integral, not optional, component of learning [208-210][463-466].


Keypoints


Major discussion points


Key findings on AI use in Indian school-level education – AI tools are widely adopted (≈ 50 % of private-school students in Delhi) for tasks such as information search, writing assistance and concept learning, but students report accuracy problems, hallucinations and limited usefulness for calculations; they still prefer human teachers and view AI as a supplement rather than a replacement [24-28][31-34][39-48][50-57].


Widespread anxiety about AI’s impact on future skills and jobs – Participants note public fear that current skills may become obsolete and stress the need to understand emerging transformations, future-skill requirements and new job profiles before the next report on “Future of Jobs” is released [10-14][78-84].


Institutional response: re-imagining curricula, governance and equity – University and teacher-education leaders describe structural shifts (AI-driven curriculum design, AI-assisted assessment, moving from compliance to AI leadership) and stress the need to address digital divide, resource gaps, and ethical supervision of AI tools[156-176][180-190][192-199][200-208][214-224][230-239].


Industry and startup contributions to AI-enabled learning – Intel and ecosystem partners highlight localized AI solutions (offline translation, AI-PC devices, AI-powered tutoring bots) and collaborative programs that bring AI content in multiple Indian languages, aiming to improve learning outcomes, especially in tier-2/3 and rural areas [290-304][311-322][329-338][339-345].


Vision for the future of higher education and regulation – Proposals include AI-driven assessment and accreditation, AI-oriented regulatory bodies, embedding Indian knowledge and languages in AI systems, and shifting from degree-centric to problem-solving, creativity-focused education models [404-424][387-395][376-383].


Overall purpose / goal


The session was convened to launch the CPRG “AI in School Education” report, present its empirical findings, and use those insights as a springboard for a broader dialogue on how AI is reshaping K-12 and higher education in India. The participants aimed to identify challenges, propose policy and institutional reforms, and mobilise government, academia and industry to collaboratively re-imagine education systems for an AI-driven future.


Tone of the discussion


The conversation begins with a formal, informational tone focused on report launch and data presentation. As the dialogue progresses, concerns about AI-induced disruption surface, creating a cautious and sometimes uneasy atmosphere. Mid-session, the tone shifts to constructive and optimistic, with panelists sharing visionary ideas, highlighting successes, and emphasizing collaboration across sectors. The closing remarks adopt an aspirational, forward-looking tone, emphasizing opportunity, ethical stewardship, and national ambition. Overall, the tone moves from reporting to reflective concern, then to collaborative optimism.


Speakers

Speaker 1 – Moderator/host of the session (role not specified in the transcript) [S4][S5][S6]


Speaker 2 – Senior commentator/moderator (name not provided), addressed as “Ramananji” in the transcript [S12][S13][S14]


Speaker 3 – Aditi Nanda, Director, Education and Industry, Intel (introduced by Speaker 1)


Professor KK Aggarwal – President, South Asian University; former Vice-Chancellor who developed Indraprastha University; expertise in IT and higher-education development [S15][S16]


Pranav Kothari – Researcher/presenter from the Centre of Policy Research and Governance (CPRG); expertise in AI adoption in education [S20][S21]


Pankaj Sir – Chairperson, National Council of Teacher Education (NCTE); former Head and Dean, University of Delhi; expertise in curriculum development and teacher education [S7][S8]


Patil Sir – Andrao B. Patil, Administration Secretary, School Education (also involved in higher-education initiatives); expertise in education administration and policy [S17][S18][S19]


Suresh Sir – Suresh Yadav, Executive Director, Commonwealth Secretariat; expertise in education policy and international cooperation [S1][S2][S3]


Additional speakers:


Ramanan (referred to as “Dr. Namanan” or “Ramananji”) – senior official addressed by Speaker 2; specific role not mentioned in the transcript.


Full session reportComprehensive analysis and detailed insights

1. Opening & context (Speaker 1) – Speaker 1 welcomed participants and introduced the joint CPRG-Future of Society initiative, noting the earlier AI-in-higher-education report and announcing the launch of a new AI-in-school-education study, together with a forthcoming “Future of Jobs” report scheduled for the following month[1-9].


2. Survey findings (Pranav Kothari) – The Delhi-based survey covered private-school students; about 50 % reported using generative-AI tools (e.g., ChatGPT, Gemini) multiple times a week, a frequency slightly lower than among college students but still “relatively high”[24-27]. The dominant uses were quick information searches and writing assistance, while structured tasks such as calculations (especially in science streams) showed low accuracy and limited AI support[31-33][46-48]. Students found AI helpful for both school-exam and entrance-exam preparation[39-41], yet they frequently encountered “hallucinations” (incorrect or fabricated answers) and expressed doubts about AI reliability for logical or numerical subjects[45-48]. When compared with other learning resources, a clear majority still preferred YouTube videos or ICT-based tools over AI platforms[50-52], reinforcing the view that AI is presently a supplementary aid rather than a replacement for traditional teaching[56-58].


3. Panelist perspectives


Prof KK Aggarwal, President, South Asian University – Described the founding of Indraprastha University, the earlier IT wave, and emphasized that AI adoption is occurring even faster. He warned that AI must “supplement our creativity, not give us a shortcut” and responded to the future-of-higher-education prompt with the “tile” anecdote, illustrating the need for human-centred design[78-84][176-180].


Ramananji (Speaker 2) – Characterised AI as a “360 degree paradigm shift” that will “fossilise” institutions that fail to adapt, linked AI progress to national competitiveness, and cited the government’s effort to recreate Nalanda as a positive example[97-104][118-124][129-136]. He added that “it’s a technology war, an AI war; the nation that dominates AI will dominate the world for the next century”[97-101] and framed AI as “a tool for humanity” in the context of a 2050-2100 education vision[97-101].


Prof Pankaj Arora – Presented a technology-driven curriculum overhaul in which 72 programmes were revised via a dashboard system without additional budget[169-173]. He advocated for teachers to become mentors and AI supervisors, highlighted the “AI spine” metaphor, and proposed that an AI-oriented regulator could automate 70-80 % of assessments while retaining human oversight[176-180][404-408]. He also mentioned the NPST and NMM up-skilling programmes and warned of bias, hallucinations, and the digital divide as risks[182-184][204-208].


Mr Andrao B. Patil, Secretary, Higher Education – Noted the rapid diffusion of ChatGPT (≈5 crore users in ~40 days), describing it as a “quantum jump” compared with the 75-year spread of the telephone[214-218]. He highlighted infrastructure gaps: only ~4 lakh of 15 lakh schools have ICT labs and roughly 1 crore teachers (predominantly women) lack AI literacy[214-222]. He reported pilot AI curricula from Grade 3, the establishment of AI labs in villages, and the AI Centre of Excellence hosted by IIT Madras in partnership with Sarvam[236-244][250-257]. Patil warned that “AI will become a bane if misused or unethically used”[180-183] and stressed the need for AI literacy across both STEM and non-STEM subjects[224-231][236-239].


Ms Aditi Nanda (Intel) – Outlined industry-academia-government collaborations delivering AI-enabled devices that work offline and provide multilingual translation (e.g., Bhojpuri → English, Tamil → 11 Indian languages)[304-311]. She showcased on-device 24/7 tutoring bots that offer non-judgemental support, especially for introverted learners, and emphasized that local inference reduces hallucinations and eliminates dependence on cloud connectivity[345-356]. She also referenced the partnership with CBSE, the Unnati programme, and quoted Arthur C. Clarke on technology’s transformative power[332-339][340-347].


4. Q & A highlights – The discussion moved through several prompts (e.g., “tiles story”, future-of-higher-education scenarios). Aggarwal’s tile anecdote illustrated the importance of human-centred design; Ramananji reinforced the geopolitical urgency of AI; Arora and Patil debated the extent of AI automation in assessment; Nanda demonstrated practical offline solutions; and Suresh Yadav offered three concrete suggestions: (i) shift from a consumption to a creation nation, (ii) re-orient institutions from degree-centric to problem-solving models, and (iii) integrate primary, secondary and higher-education ecosystems[387-401]. Across the panel, speakers agreed that AI should augment, not replace, human instruction and that robust teacher mentorship and ethical supervision are essential[56-58][78-84][176-180][204-208].


5. Actionable items


– Launch and disseminate the AI-in-school-education report and the upcoming “Future of Jobs” study.


– Embed an AI curriculum from Grade 3 onward to teach concepts, benefits and risks.


– Roll out teacher-upskilling programmes (NPST, NMM) nationwide.


– Institutionalise dashboard-driven curriculum revision processes.


– Develop an AI-oriented regulatory body that can automate 70-80 % of assessments while ensuring human oversight[404-408].


– Expand AI Centres of Excellence in partnership with IITs, industry and government.


– Deploy offline, multilingual AI devices to reach Tier-2/3 and rural learners[332-339].


– Formulate ethical guidelines to prevent misuse, over-reliance and mental stress[236-244][250-257].


6. Unresolved challenges


Digital divide – Only 4 lakh of 15 lakh schools have adequate ICT infrastructure; ~1 crore teachers lack AI literacy[214-222].


Assessment standards – Need clear standards for AI-based evaluation, bias mitigation and fairness[180-183][204-208].


Financing – Sustainable funding models for under-resourced institutions remain unclear.


Governance – Policies governing AI-generated content, data privacy and the balance between automation and human interaction require further development[180-183][236-239].


7. Closing – The session progressed from presenting survey data on AI use in Delhi schools to articulating a nation-wide vision of re-imagining Indian education. Consensus emerged that AI must be integrated responsibly as a complementary, ethically supervised tool, with teachers remaining central as mentors and AI supervisors. Coordinated multi-stakeholder policy frameworks are needed to balance rapid adoption, equity, creativity and national ambition, aiming toward the aspirational “Viksit Bharat 2047” where AI is an inseparable component of learning[208-210][463-466].


Session transcriptComplete transcript of the session
Speaker 1

Thank you everyone for joining this session. Before we start, I would like to tell you about the joint initiative of CPRG and Future of Society. The Centre of Policy Research and Governance is a policy think tank that continuously researches policy and governance issues in different fields. Two years ago, the Emerging Technology Centre was established for international cooperation, the development of technology, and the study of the relationship between technology and society; under this centre, we developed the Future of Society project. Under Future of Society, we are continuously working across various sectors, producing reports and conducting extensive stakeholder consultations. In this light, just a year ago, we published a report on the usage of AI in higher education.

Now we are going to release one more report, on the usage of AI in school education. Next month, we are again going to launch a report, Future of Jobs. There is a lot of fear, and this fear is not just outside; it is also in people’s minds: whether their acquired skills will survive the next 5 or 10 years as emerging technologies arrive. Along with this, there is also a fear that, with the kind of tools being developed, human skills or the human mind will become irrelevant. Keeping all these things in mind, we are going to launch a report on what kind of transformation is happening, what future skills and future jobs are coming, and how they are going to be transformed.

But that is next month. The report we are launching now is AI in School Education, and to launch it, I invite all my guests and Mr. Pranav to the stage. Thank you.

Pranav Kothari

Now we have a short presentation with some salient findings from our study. AI in School Education is a survey report that we conducted late last year as part of our ongoing internal work on mapping AI usage among students in India across various sectors. Over the past year, CPRG has released two reports on AI adoption in education. Last year, we released a report on AI adoption in higher education; this was the first-ever survey-based report in India mapping everyday AI use among college students. Today, we are launching our new report on AI adoption in school education. Both studies were conducted in Delhi, where we went to students and interviewed them to understand what they are using AI for, how often they are using it, and what their challenges and opinions on AI usage are.

So firstly, if we compare our broad findings, we find that AI use among school students remains relatively high, though marginally lower than what we found among college students within the same city, because both studies were conducted in Delhi. Yet, nearly 50 % of students, and of course these are students from private schools in Delhi, that was our limited sample, use AI-based tools, whether generative AI platforms or other AI tools, multiple times a week. What are the patterns of AI or edtech use by academic stream? What we are finding is that AI use, especially of generative AI platforms such as ChatGPT and Gemini, remains relatively high.

What this is also leading to is some sort of a challenge to traditional methods of learning, and to edtech platforms that have become extremely prominent and widely used over the past few years. Then, what are students using AI for? Apart from asking how often students are using AI, we also tried to delve into what they are using it for. What we find in our study is that AI use is essentially concentrated on searching for academic information while studying, and on writing assistance. This of course varies across streams, because some students may be more engaged in practice and question solving, and AI use depends on that. However, what we find is that among science students, for instance, while there is high AI usage for learning concepts, there is very limited usage for structured tasks like calculations or solving questions, because that is where various AI platforms still have relatively low accuracy.

Now, what is the perceived helpfulness of AI for school exams and entrance exams? There is relatively high perceived helpfulness of AI platforms for studying for both school exams and entrance exams. At the same time, students in the science stream, who are more likely to prepare for entrance exams, are still more dependent on offline classes or edtech platforms.

Yet, given the level of perceived AI helpfulness we are seeing, there is an emerging challenge to edtech platforms from the free usage of generative AI platforms. Next, AI support in learning and performance: how do students rate AI-based platforms or tools in terms of their actual impact? What we find is that, apart from learning complex topics and improving their time management, a substantial proportion of students actually attribute improvements in their academic performance to the use of AI platforms. At the same time, students report issues with accuracy and challenges in AI use. One of the major challenges is that a significant proportion of students regularly encounter AI hallucination, or are able to identify that they are getting incorrect information.

Then secondly, as I mentioned, when it comes to accuracy for logical or numerical subjects, there is relatively lower reported accuracy. Again, this is something that various platforms are still working on in terms of improving their performance. Next, apart from understanding overall AI use, we also tried to compare AI platforms and their performance with other tools. What we did was ask students, number one, are AI platforms better than YouTube or ICT-based learning? There we find that there is still overwhelming support for YouTube videos or ICT-based learning tools. Secondly, there is the whole question of adaptive learning and AI addressing individual needs.

Here, there is an overwhelming evaluation by students that while AI tools might be helpful, they are not necessarily providing solutions specific to their needs. This, of course, might be because of the nature of the AI tools students are using, which in most cases are free models of generative AI platforms, as opposed to specific AI tools that are actually able to undertake adaptive learning. And then finally, we asked about AI versus human interaction. On the idea of AI tutors or AI-based learning tools replacing in-person teaching, there again, there is overwhelming support for the idea that students still prefer

traditional human-interaction-based learning. So what we are finding in our study is that while AI use is definitely increasing significantly among students, it is still considered a supplementary tool as opposed to a replacement or substitute for traditional teaching. So these were some of the findings; we have more detailed findings in our report. At the end, I would just like to thank our team that worked on this report. I would like to thank Nitin, Mehta and Ms. Suchitra Tripathi for their guidance and oversight of this research, and I would like to thank our team members Gauri, Shreya, Anupriya, Rashi, Mika and Shugal for their active involvement and participation in the study.

Thank you so much.

Speaker 1

Thank you, Pranav ji, for the presentation. As panelists today we have Professor KK Aggarwal sir, President, South Asian University; Professor Pankaj Arora sir, Chairperson, National Council of Teacher Education; Suresh Yadav sir, Executive Director, Commonwealth Secretariat; Andrao B. Patil sir, Administration Secretary, Higher Education; and Aditi Nanda, Director, Education and Industry, Intel. Aggarwal sir, you have seen the transformation during the IT movement, and if I align it correctly, at that time you developed Indraprastha University, perhaps because IT itself was then in the process of developing and a new institution was needed. When you were developing an institution then, you must have been thinking about how IT was going to challenge the traditional or conservative approach of institutions. Now you are the President of South Asian University, one of the iconic institutions in India, and again you are facing a new challenge, this time from AI. So in your lifetime you have seen two movements, first IT and now AI, and both times you have been developing new institutions. How do you find AI different from the earlier IT movement?

Professor KK Aggarwal

Thank you for the question. Yes, in a way, when I was asked to develop the very first university of Delhi, Indraprastha University, it was a challenge because it was the first such university in the country, and you are very right, the IT movement was also in the offing. It probably happened by coincidence that the vice-chancellor appointed at that time, which is me, belonged to the discipline of IT. This was probably never a calculation, but it happened, and it happened for the good of the country and the university, I believe, because you could get a two-in-one kind of person to develop it. So we made sure that right from the beginning, IT was integrated. That was the time when, if you remember, I saw the students in Delhi.

Incidentally, I think this was the first university in Delhi for students after Delhi University, which was an affiliating university. So I was seeing students go to the Delhi University colleges, not satisfied with their employment prospects, and in the evening they would go to a tech company and do a course there. Now that was very disturbing to me: why should students feel unsatisfied at the end of formal school or formal college, and then have to do that? So my first thing was: let’s combine the two. Our curriculum itself should integrate both. If students have jobs in the IT sector, why should we not recognize this and make sure that every subject is more IT-oriented, and so on and so forth.

Now when I am here, the challenge obviously, as you say, is AI. AI is fortunately being adopted by the youngsters even faster, which was expected. IT was also adopted by them faster than by the elders, and AI is being adopted much faster still. The only thing one has to see, as I said, is that in the whole process of using AI, let’s make sure it supplements our creativity and does not give us a shortcut to creativity, thereby reducing our thinking powers. That is the challenge we have to face in academics. Short of that, it is a good opportunity for all of us.

Speaker 1

While working with President Mukherjee, you introduced many technological tools and innovations, not only in the finance ministry but, as an advisor to the President, many educational innovations as well. And I think that was before 2014 and 2015. After COVID-19, educational institutions have changed and are changing very fast. How would you analyse and assess this kind of change, and what would you suggest to educational institutions and their heads to address the challenges posed by AI and other emerging technologies?

Speaker 2

Thank you very much, and first of all a big congratulations on this fantastic report on AI in school education, and also on your previous reports on AI; I think they are very good documentation for understanding where we stand as a society, as a country and as institutions in the emerging landscape. COVID, Ramananji, drastically changed the way the world looks at various ways of doing things. Going to the office was normal; now, not going to the office is normal. So there is a fundamental shift. It is very difficult to get people back to the office, and the argument is: if I can do my job better while sitting at home, why do you want me to come to the office?

So these are the fundamental shifts we have witnessed post-COVID. And then if you look at artificial intelligence, it is a paradigm shift; it is not only a 180-degree shift, it is a 360-degree shift. We don’t know which direction we are going. Any organization, any society, any institution which is not alive and kicking to this new emerging reality will be fossilized. Remember, in the 1800s we were controlling almost one-third of the GDP of the world. And it was not the country that was leading; it was the institutions. It was the institutions of that time that were producing the skills that could produce the goods, services and material that dominated the world. So it was the role of the institutions.

Of course, the government has now tried to recreate Nalanda, which is coming out very well. So the point I am trying to emphasize is that the role of educational institutions is of paramount importance. No country can dominate the world unless its institutions dominate the world. If you look today, the US is dominating the world not because of military power, but because of its higher education system. If you look at China, the Chinese universities are coming out on top; the volume of research in computer science, AI, machine learning and computer vision is dwarfing the research being done in the United States now. So that is the level of the shift.

So when I am talking about your topic, reimagining the education system in India, I am not talking of today; I am talking of the India of 2050, the India of 2100. And one thing I keep saying: a lot of people say India is a $5 trillion economy. They are very happy that we are the third largest in PPP terms, fourth largest in nominal terms. But I am not happy, because for India, as of now, a country of 1.5 billion people, if you look at the European standard of GDP per capita, we should be more than $70 trillion; if you look at American standards, we should be more than $150 trillion, more than the size of the world economy. So that is the level at which we have to think about what kind of institutions we need and what kind of infrastructure we need.

Is it the undergrad degree, the master’s degree, the PhD? I got all the degrees. I studied in India, at IIT and the Indian School of Business; I studied in the US, the UK, Germany, Sweden, everywhere, just to educate myself on how things are different, what the fundamental differences are. So that is something we have to realize: this is not the time for doing reforms in the higher education system; it is about reimagining it. You see, where we reimagined India in terms of Digital India, we are getting the dividend. We are a country on an entirely different level, generating billions of transactions on the digital UPI system, which was unheard of.

So similarly, we need a higher education system, a general education system, which can give an exponential bump to India’s story. And that is not going to be the normal system; it is going to be something very, very different, and it is going to be based on the foundation of these technologies. We have been talking about how this is the first time in the history of India, though it has been tried several times in the past, that the north and the south are being linked. Language barriers always exist; it is very difficult to do. But AI dismantles the barrier. In my village we set up an AI lab, we set up an AI shop, and my message to the villagers was: you can speak in your Bhojpuri to the US, to Russia, to Japan. So for the first time, a fundamental shift in connectivity is happening around the world. And India, being a young nation, a country of young people, with almost 44 million students in the higher education ecosystem, almost running parallel to China, has the power and potential to change; the moment we are able to use this technology, I am sure we will realize that potential. So in terms of potential, I say India is the number one economy, not third or fourth. That is the mindset, because I have to reach my potential, and I will reach it only when I know what it is. So there is a huge responsibility on the Indians of the present generation, not only for themselves but for the Indians of 2050 and 2100. And if we are not able to capitalize on this AI boom, we will be left behind.

If you see the geopolitics around the world, we say it is a new war and all, but it is a technology war, an AI war. Countries are understanding that those who dominate AI will dominate the world for the next century. So we have to embrace it; we have no option as a nation. And the education system, which is one of the biggest in the world, will have a very catalytic role in realizing that dream of the India of 2100. Thank you and over to you, Ramananji.

Speaker 1

Pankaj sir, as Head and Dean you changed the curriculum of the University of Delhi, introduced many skill-based courses during your time, and made it outcome-oriented. But the AI challenge is new. Now, as Chairperson of NCTE, you also see a lot of diversity among institutions, from Jhabua to Delhi; it is a multi-layer system. As Chairperson of NCTE, how will you ensure that all institutions can respond in the same manner to the challenge of AI? Because there is a lot of diversity in India, including diversity in access to resources.

Because AI also needs a lot of resources, not only in financial terms but in terms of technology, electricity and other things. So how do you see it, and how will you ensure this?

Pankaj Sir

We must say that this structural and epistemic shift is not merely technological. It is a fundamental change in how knowledge is produced, assessed and evaluated in the day-to-day life of a student. If we look at teacher education, yes, during my headship we brought in new programs; we revised all the curricula of B.Ed., M.Ed. and ITEP. During those changes, our focus was to meet the expectations of young learners in the 21st century; the young learner is into technology throughout. When I was doing my college, those were the days computers came to the world, and we were very scared of them. We were told that unemployment would increase because one computer would work in place of four or five people.

So as young students, we protested against this technology. But today, the reality is different: the computer is giving us multiple new avenues of employment in our daily life. On curriculum revision, two things I would like to mention here. One, the curriculum revision exercise at the University of Delhi took place in 2019, and this entire exercise was techno-based. We did it through a dashboard system, without human intervention, without formal meetings and a budget of lakhs of rupees for meetings, refreshments, TA, DA and everything. So not a penny was spent when 72 programs were revised for the LOCF curriculum framework. And when we took up this exercise again, I followed the same model: a techno-oriented, technology-supported revision took place. In a record period of two months, we revised almost all the courses in education at the University of Delhi.

In a record period of two months, we revised almost all the courses in education at University of Delhi. now if we look at role of a teacher what type of teacher we need to meet future generation in my family I have teachers who are dealing with class 3 students class 7 students and senior secondary classes as well as university teaching they all are saying AI is posing a threat to cognitive development of the learner yes it is posing a threat but at the same time we must realize that AI is not going to replace teachers teachers are always there and here I say they both complement each other no challenge no competition between two they complement because a teacher after the use of AI based technology or video or some other context a teacher is the person who can create sensitivity sensitivity in the class related to the topic as well as allow diverse opinions on the same topic So AI can assist.

AI cannot be a master; it is an assistant. If we use it for ethical reasoning, creativity, collaboration and adaptability, I see teachers increasingly functioning as mentors and learning designers, not learning followers, and as ethical guides and facilitators of inquiry, in the classroom as well as in writing textbooks and developing curricula. AI-based output demands AI supervision. By AI supervision, I mean AI cannot be left free to design any curriculum; we need to supervise it. We all know the difference between governance and leadership. Governance, I would say, means compliance management: whatever is coming to you, you are implementing it, in any organization, whether it is a college, a university or any other organization.

And if you are an academic leader, then you make a change within that compliance. Compliance will take place, because governance is essential, but at the same time you bring change according to the needs of your institution, your students, your financial resources, and so on. Similarly, in education we must not become AI followers; we should become AI leaders for our time. Yesterday, the Honourable Prime Minister said we have tremendous potential to become AI leaders for the world. Along those lines, as NCTE Chairperson, we have brought in two new programs: NPST, the National Professional Standards for Teachers, and NMM, the National Mentoring Mission. Both are designed on a digital platform, in a digital world, and AI is helping us analyse people’s queries, their questions and their anxieties, and helping them identify the right mentor.

And mentor-mentee is always a guru-shishya context, which is very meaningful and useful. I will close this remark by saying that we are now moving away from treating technology as a one-off workshop; rather, we should shift towards a multi-semester AI spine. AI is the spine of the entire education system nowadays, and our new program, ITEP, has multiple contexts of AI-based technology. We must transition from product-only evaluation to process-rich evidence of learning; that is more meaningful. In 2012, CBSE brought in continuous comprehensive evaluation; now, AI is helping us go for process-rich evidence of learning. The risk landscape is there: bias and hallucinations are there, but uneven access to technology is also a challenge that should be taken into consideration.

My last closing remark is: AI plus education can take us towards Viksit Bharat 2047. AI is not a choice; it is a part of our life, providing us multiple new methods of research and new modes of industrial internship. But education, which provides culture, language and a humanistic approach, and AI both need to work hand in hand for a better future, for Viksit Bharat 2047. Thank you.

Speaker 1

Patil sir, as Administration Secretary, School Education, you embedded technology, and through Nipun and other platforms the government’s focus on learning outcomes improved greatly. Now you are in higher education, which is a very diverse sector; at the same time, in contrast to school education, which is directly administered, the controlling power over higher education institutions is more limited, as they are more autonomous. So what is your vision now for transforming higher education institutions in the age of AI? AI is a challenge that keeps coming continuously, not only for students but for administrators as well. How are you planning to address those issues?

Patil Sir

Thank you, sir. Thank you so much for giving me the opportunity. I see a lot of students here, so can somebody tell me how much time the telephone took to reach five crore subscribers or users? Any guesses? Thirty years, good guess. Anybody else? Fifty years, okay, good. Some more? Yes, somebody sitting to the right of the table: seventy-five years. Yes, it took 75 years for the telephone to reach five crore people. Radio took 38 years to reach five crore people. And ChatGPT, any guesses? Gemini took around 60 days to reach five crore people, whereas ChatGPT took 40 days. So this is, I think, a quantum jump, a huge jump, and with it comes a big challenge for educationists in both school and higher education.

I can read out some figures for your benefit. In the world there are around 749 crore mobile users, whereas India has 120 crore. Six hundred crore people use the internet; in India it is 100 crore. Five hundred and eighty crore people use Google worldwide, whereas in India it is 80 crore. And ChatGPT worldwide is 80 crore; this is last month's data, not this month's. Around 7 crore people use ChatGPT in India and 1 crore use Gemini, so by now maybe 10 crore people here will be using ChatGPT and Gemini. Now, the challenges that are coming up, I will come to; I am not pessimistic at all. But look at the education ecosystem, as Suresh sir and the other speakers have also told you.

It is very important to see this cohort: around 25 crore children are in school education and 4.6 crore in higher education, so around 30 crore in all. There are 15 lakh schools in India, and right now only around 4 lakh schools have computers, ICT labs, tablets and other infrastructure. So it is a huge challenge to take the AI revolution to the last mile. We are aware of this; as I told you, I worked in school education and am now in higher education, so we have an integrated approach and we are working on it, but we need your help. Second, in school education there are around 1 crore teachers right now, and most of them are women, which is a really good change. But how many are AI-savvy or AI-literate? We are working on that, and the NCTE Chairman has already spoken on it.

Pankaj sir has spoken on that too. Now, coming to the digital divide: compare Delhi schools with remote-area schools, in tribal or rural areas. Madam here is from Bangalore; I went there last week, and there is huge development. The way the cities are catching up is humongous progress. But in rural areas and other places it is a big challenge. Central schools like KVS and NVS are doing really well in catching up with AI and using AI technologies. Even CBSE is coming out with an AI curriculum. The report also notes that Andhra, Assam, Tamil Nadu and a few other states are using AI curricula and AI tools in their education systems.

Other states are yet to catch up, so there is a bit of a divide, and it will take time for India to close it. But all of us now agree that AI is not going anywhere; AI has to be used and AI is useful. At the same time, AI is not enough. We should treat AI as a machine, not as a human being, which is very important; if you start treating AI as a human being, it will become a problem and a huge mental stress for students and other users. We are aware of this, which is why school education took the very wise decision to introduce an AI curriculum from the third grade. It is not to teach AI itself; it is to teach what AI is, what its uses are, and whether a given use is good or bad, so that children know about it. The coming young generation must learn AI because it is very useful.

Yesterday, as Pankaj sir mentioned, the Prime Minister said India has to become a hub of AI. Yesterday we had a full day of meetings with Spanish universities, and today we are meeting them again; many such meetings are going on and MoUs are being signed. You may know that IIT Madras has developed a tool with which Dr. Kamakoti spoke in Tamil and it was translated into 11 Indian languages; as Suresh sir was also saying, when you speak in Bhojpuri it can be translated into other languages. So there is huge potential. Shiksha Lokam showed me that in Bihar the villagers, the women, talk about dropouts in their local language: why did I drop out, why is my daughter dropping out, what are the issues. And AI summarizes it in English and other languages.

They just talk; there is no typing at all. It gets summarized and classified, and as administrators we can take decisions. So AI is a boon if we use it properly, and AI will become a bane if it is misused or used unethically. As for the challenges in AI you asked me about, sir: yes, there are many. What we are doing right now is updating the curriculum and working on educational governance. Many IITs have brought AI schools onto their campuses; they have MoUs with Google, Microsoft and various others, and the Wadhwani Foundation has also started an AI school in one of the IITs.

A lot of investment is going on. We have already started an AI CoE in education, and IIT Madras is hosting it; a lot of work is going on, and Sarvam is also helping us in those initiatives. But yes, there is disparity, and we need to sort out those issues. AI is not only for STEM; we have understood that and are implementing it that way. Everybody has to understand what AI is and how we can take it forward. As Suresh sir said about the economy — we both worked together previously in the Ministry of Education and the Ministry of Finance, where I got his guidance — we are now talking about reimagining education. Whatever you imagine, whatever your vision is, that is what you will achieve, so we should not limit our vision. With a population of 140 crore and growing, a really big vision is required, but at the same time the necessary skills are required. One report suggests that for each additional year of schooling …

labour output increases by 24%. And in India we have certain issues here: if you compare the output the labour force gives in the US, in South Africa and in India, we really need to think about it. So years of schooling are very, very important. We also face the challenge of dropouts; luckily, with Vidya Samiksha Kendra and other tools we are tracing dropouts and bringing them back into the mainstream. Around 5 crore children have dropped out, and various state governments are working to bring that number down. A few countries in the European Union may have a total population of 5 crore.

So the challenges in India are much greater. But, as Madam was asking me, what will be the impact of AI? I think it will be huge; in the next two years we will see the way India is going to change. Let me give one last example. When I was working in the banking department, people said there was something called payment through mobiles. When I discussed it with the CMDs of the banks (they were CMDs then; now it is MDs), they told me: no, it is not going to work here. Yet South Africa started it there; Airtel itself started it there. And in 2016, when demonetisation came, we saw the huge impact.

And now with UPI we can see the way it is happening: around 50% of the world's digital transactions are happening from India. There is huge change, and I think in another two years we will see huge change in AI adoption and use. But one caution: AI has to be used as a tool, it has to be used ethically, and it has to be used for the work, for humanity. That is what I can say. Thank you so much. We are getting prepared for that, sir; the IITs are far ahead, the IIMs are far ahead, whereas central universities are catching up with AI, and we are trying to help them, sir. Thank you.

Thank you, sir.

Speaker 3

Thank you, Dr. Namanan, and thank you for having me here. It has been very interesting, and it has been a pleasure to listen to all the other panelists; I got to learn quite a lot. And congratulations on the report. You raised a very pertinent point: industry also needs to work with different players, not just the government but also academia, to create change. I have a very interesting job: I work with the ecosystem and industry, and in that I get to work with different startups, get to know different ISVs, and really see the innovation that is happening. Some of these innovations are interesting to see because they are cutting edge.

They come from India, for India, and then they go to the world. Like Patil sir was just saying about digital payments — and I think you were mentioning M-Pesa in that context — we have taken UPI and other things to the world. It is a very proud moment, but it starts with an idea, with something that needs to be nurtured by everyone. And that is what this AI summit is: a great moment for all of us. We have put ourselves on the world map; we have shown the world that we can do great things, and that this is where technology innovation is happening.

From an Intel perspective, we work closely not just with higher ed but also with K-12, and of late we have been working with startups on solutions which impact students at large. I was talking to somebody the other day — and I think someone on stage was talking about Bhojpuri getting translated — and I asked: why are learning outcomes in Indian Tier 2, Tier 3 and rural areas not as good? The response came: the problem is not that the child doesn't understand maths or physics; the problem is that the child doesn't understand English, because our teaching medium is not in the child's own language.

What we are doing today to make sure content reaches everybody in a language they understand is going to be a game changer. And that is coming from AI, and AI is coming from a combination of people — folks like all of us in this room coming together and saying, okay, let's make something that will have an impact on the population at large. I was talking to someone just before this, and he said: in India it is not that people don't want to buy technology, or that they are afraid of technology. The problem is that many of us as parents will say: don't give the child a laptop, the child will get spoiled.

But why are we not seeing the value? Why are we not seeing a laptop as a creation device, something more than a consumption device — where is the value creation in that? Can we have AI courses starting from Class 3 and going up to higher ed? A colleague of mine has in fact worked very closely with CBSE to create a curriculum which has gone into schools; Intel worked on it and helped put it together. We have a program called Unnati for higher ed, and under that umbrella we are now bringing in AI-for-future-workforce courses, such as AI in manufacturing, which we have put out at Gujarat Technological University; recently we had somebody come in from there.

This girl was the first generation of her family to go to college. She went through this program, which also included an internship, so she interned with a startup, an industry in Surat doing textile manufacturing, and she created a project on defect detection using AI. A kid from a rural area, going to college for the first time as a first-generation student, so confident about what she had created because it was being used in industry and she could see the impact. Those are the stories that make you feel like you want to work in this. The rewards are huge.

I think that is what is needed, and Intel is obviously doing a great job of bringing these things together through all the programs we have, whether it is Unnati, AI for Future Workforce, or the work we do in the K-12 space. We have an ISV startup that we work with which is helping teachers become AI-enabled, and it all runs locally; the content doesn't even need to go into the cloud. We have solutions running on the AI PC, which is what Intel is now bringing to the market. And I would invite you all to visit our booth at the AI Summit, of course, because that is what has brought us all here.

We will show you some really cool use cases and demos where voice-to-voice translation happens on the device, so you don't even need to connect to the internet or the cloud; everything happens on the device, and the content is there. I think I heard hallucination mentioned as one problem, which the report also identified. What if the content sits locally on the device itself, scoped only to, say, Class 9 science? Then when a child asks a question — maybe they just want to know how to get into NEET and JEE — the answer comes from there, and it comes in a language the child understands.

What if that happens? It exists today; we have worked on it. Think of it as a 24/7 tutor. And one more thing — I don't know how many of you will relate to this, but I used to. When the teacher was teaching, everything was clear; but when you went home and read the same concept, what happened? How did it disappear? When this happens, and if you are an introverted child, whom do you go and ask? How do you create that safe space for asking? You can have tuition teachers, you can have personal tutors, but what if there is a bot that does not judge the child and says: hey, come here, I'll teach you in the language you understand.

Ask me. And as a parent you know that this is all happening on the PC, all safeguarded, or at least with less chance of hallucination. That is what we are working towards. Since these are all esteemed panelists, I should finish with a quote. Arthur C. Clarke said — and I am paraphrasing — that technology done right is like magic. If we bring that magic of technology plus AI to all the kids in India, I think we have done our job. That is what we are doing.

Speaker 1

Thank you, Aditi. I think we have a few more minutes, so we can have a quick round of interventions on one issue: when we try to reimagine institutions, what are the two things we want to see in the future of higher education? Sir, if I may ask you first.

Professor KK Aggarwal

Finally a girl raised her hand. I said, okay, at least somebody; yes, come on, we will work it out together. She said, sir, everything is fine, but first tell us: what is a tile? You see, in that African area tiles were never used; they had round rooms with round floors, and square or rectangular tiles were not in their dictionary. And on that basis we declared the whole class failed in mathematics. That is what we are doing today with these simple tests. So we have to find out the ground-level situation and then build on it to test the students' ingenuity. Lastly, we should not teach subjects.

We have to teach the students. And therefore, for each student, what can we do? Again I say AI is an opportunity, a great opportunity. We are talking in this summit about reimagining higher education, and my request, with all the persuasion I can muster, is: let the youth assert themselves and say, we need these subjects to be taught for our degree. Technology enables us to do that, and we will have to do it. That is my call on this.

Speaker 1

Thank you, sir. Suresh sir, in the same manner: when you reimagine institutions — and you are part of a global body — what are two or three things you want to see in a futuristic educational institution?

Suresh Sir

Thank you very much. Quickly, three points. First, if you look back ten years, when social media came to India, there was a debate over whether we wanted to be a download nation or an upload nation. There was a lot of emphasis on creating content and uploading it to the internet and the media, so that creativity flourished. Now the conversation has moved: do we want to be a consumption nation or a producing, creative nation? This time the opportunity is phenomenal, so we need a system where people create, not consume. That is the fundamental shift we need. Second, we treat university degrees — undergrad, masters, PhDs — as the qualification for a job, whereas in some countries a high-school diploma is good enough for jobs in government and the private sector. Do we want students only to study, get marks and get distinctions, or do we want them to be a problem-solving young society? We have to shift from degree-awarding institutions to problem-solving institutions.

India has millions and trillions of problems in each and every corner. Pick up one problem, solve it, get your degree and go; you don't need to pass all the examinations. That is the fundamental shift India needs if, going back to what I said in the beginning, we want to be a nation where skill and capability drive the economy, not the other way around. That is the second point. The third: the school education system, the higher education system and the primary education system work in silos. We have to interconnect the entire system, and technology allows us to do it. In the U.S., higher education and the high-school system are very well connected as parts of one ecosystem.

The moment we do that, we will have a thriving education system, pushing India into a very high growth trajectory and realizing the dream I talked about: a number one nation, not by 2050 or 2070, but very soon. Thank you.

Speaker 1

Thank you, sir. Pankaj sir, as chairperson of NCTE, when you reimagine a teacher-education institution, what are the two or three features that come to mind for the future of a teacher-education center?

Pankaj Sir

Yes. As the regulator for teacher education, under the coming Viksit Bharat Adhishthan it has been proposed to move to an AI-oriented regulator. That regulator is not supposed to have a lot of humans working for it; 70 to 80 percent of assessment will be done through AI, which is a very good thing. So AI is going to play an important role, not only as a regulator but also as a developer of norms and standards for the nation, for academic programs and for teachers. I think the responsibility to promote research ethics among young people is very critical at the moment. Somebody writes a letter to his wife by asking AI to write the letter for him.

That is ridiculous: AI cannot put emotion or a personalized flavor into it. So research ethics matter. When research is being done at any class level, we also need to think of assessment devices; evaluation and assessment are lagging behind. We are developing content through AI, but we are not doing assessment through AI. This year CBSE is trying to assess Class 12 answer scripts through technology, though those will only be scanned documents that teachers check from their own remote locations. Still, it is the beginning of bringing technology into assessment. My last point is Indian knowledge and Indian languages. We must start working very hard on this, because if we really want to pass on Indian tradition to the next generation, AI can become an important tool for that.

If we take AI beyond Western knowledge, if we promote AI in Indian knowledge, Indian contexts and Indian languages, then we will really serve the next generation. And as the Prime Minister said, we have two AIs: Aspirational India and Artificial Intelligence. We must put both to optimum use. Thank you.

Speaker 1

Thank you, sir. Patil sir, from the ministry's perspective, how do you visualize future universities, and what kind of change do you want to bring to the higher education institutions we want to build for the future?

Patil Sir

Again, as Sir has said, it should be integrated. On school and higher education: a few universities have agreed to reach out to 100 schools each. In Pune there is one university, COEP, where every day one school will come, visit, see their libraries and laboratories, and meet their teachers; the teachers will also go to the schools and interact. Many university teachers do not know what today's schools are like. Between the school of my time and today's school there is a huge change, really huge, and that has to be seen; it should be integrated. One more point: NEP says there is innate talent among the students.

Students should understand that, work on their skills and contribute meaningfully to the economy, which is very important. Once India's 140 crore population starts contributing to the economy — above the income-tax level, I mean a minimum of 5 or 6 lakhs — it is going to be a huge change here. Third point: brick-and-mortar schools and universities are giving way; we are already seeing this huge change. But at the same time, teachers cannot be removed; teachers, mentors and facilitators have to be there. We have even requested companies — including Intel, in our last meeting — to act as mentors. You should also tell kids: enough is enough.

After an hour of playing games or using these things, stop there — that is really required. Ethical use is very, very important. Yes, we need to create a platform where everyone can come together; that is what the AI CoE in education with IIT Madras is doing, where schools, higher education institutions and private players are all coming together. I recently saw one startup at IIT Delhi. Like those hotel chains that own no hotel rooms at all, this startup has no classrooms, no infrastructure at all, but they teach medical education, with permission from the regulator; basically paramedics are running it.

Lots of youngsters are there, friends. Their annual turnover is 200 crore in just the last two years, and they say in another year it will reach 400 crore. So I think there is huge opportunity for all of us; we should work on it. Thank you so much.

Speaker 1

Thank you, sir. Aditi, your comment on the future of institutions.

Speaker 3

I think everybody has done a great job of articulating that.

Speaker 1

Thank you everyone for joining us, and thank you to our eminent panel for shedding light on reimagining institutions. Once we start thinking about what future institutions should look like, they will start to take shape. Thank you everyone.

Related Resources — Knowledge base sources related to the discussion topics (11)
Factual Notes — Claims verified against the Diplo knowledge base (6)
Confirmed (high)

“About 50 % of surveyed private‑school students reported using generative‑AI tools (e.g., ChatGPT, Gemini) multiple times a week, a frequency slightly lower than among college students but still relatively high.”

The knowledge base reports that a Delhi survey of school students showed AI adoption at nearly 50 %, and that this level was slightly lower than the usage found among college students in the same city [S8].

Confirmed (medium)

“Structured tasks such as calculations—especially in science streams—showed low accuracy and limited AI support.”

A related source notes that accuracy for logical or numerical subjects is reported to be relatively lower, confirming concerns about AI performance on calculation-heavy tasks [S82].

Confirmed (high)

“Students frequently encountered “hallucinations” (incorrect or fabricated answers) and expressed doubts about AI reliability for logical or numerical subjects.”

The knowledge base discusses AI hallucinations and limits of trust, and also highlights lower reported accuracy for logical/numerical subjects, corroborating the students’ doubts [S80] and [S82].

Confirmed (medium)

“Prof KK Aggarwal, President of South Asian University, was a former Vice‑Chancellor who developed Indraprastha University during the IT wave.”

The source identifies Prof KK Aggarwal as the former Vice-Chancellor who developed Indraprastha University during the earlier IT movement [S1].

Confirmed (high)

“ChatGPT reached roughly 5 crore users in about 40 days, illustrating rapid diffusion.”

Data in the knowledge base shows that ChatGPT achieved 5 crore users within 40 days, confirming the rapid adoption claim [S3].

Additional Context (medium)

“AI adoption is occurring even faster than previous technology waves.”

The same source that documents ChatGPT’s 5 crore users in 40 days also compares this speed to earlier technologies (telephone, radio), providing context for the statement that AI adoption is unusually rapid [S3].

External Sources (85)
S1
AI 2.0 The Future of Learning in India — – Speaker 2- Patil Sir- Suresh Sir
S2
AI 2.0 Reimagining Indian education system — Thank you Pranavji for the presentation. Today as a panelist now we have Professor KK Agarwal sir, President South Asian…
S3
AI 2.0 Reimagining Indian education system — Thank you Pranavji for the presentation. Today as a panelist now we have Professor KK Agarwal sir, President South Asian…
S4
Keynote-Martin Schroeter — -Speaker 1: Role/Title: Not specified, Area of expertise: Not specified (appears to be an event moderator or host introd…
S5
Responsible AI for Children Safe Playful and Empowering Learning — -Speaker 1: Role/title not specified – appears to be a student or child participant in educational videos/demonstrations…
S6
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Vijay Shekar Sharma Paytm — -Speaker 1: Role/Title: Not mentioned, Area of expertise: Not mentioned (appears to be an event host or moderator introd…
S7
AI 2.0 The Future of Learning in India — -Pankaj Sir: Chairperson of National Council of Teacher Education (NCTE), former head and dean at University of Delhi, e…
S8
AI 2.0 The Future of Learning in India — thank you sir thank you so much for giving me the opportunity I would like to ask few of the I’m seeing a lot of student…
S9
Building the Workforce_ AI for Viksit Bharat 2047 — -Speaker 1- Role/Title: Not specified, Area of expertise: Not specified -Speaker 3- Role/Title: Not specified, Area of …
S10
S12
Keynote_ 2030 – The Rise of an AI Storytelling Civilization _ India AI Impact Summit — -Speaker 2: Role appears to be event moderator or host. Area of expertise and specific title not mentioned.
S13
AI Impact Summit 2026: Global Ministerial Discussions on Inclusive AI Development — -Speaker 1- Role/title not specified (appears to be a moderator/participant) -Speaker 2- Role/title not specified (appe…
S14
Policy Network on Artificial Intelligence | IGF 2023 — Moderator 2, Affiliation 2 Speaker 1, Affiliation 1 Speaker 2, Affiliation 2
S15
AI 2.0 The Future of Learning in India — -Professor KK Aggarwal: President of South Asian University, former Vice-Chancellor who developed Indraprastha Universit…
S16
AI 2.0 Reimagining Indian education system — -Professor K. K. Aggarwal- President of South Asian University, former developer of Indraprastha University, expertise i…
S17
AI 2.0 Reimagining Indian education system — Thank you Pranavji for the presentation. Today as a panelist now we have Professor KK Agarwal sir, President South Asian…
S18
AI 2.0 The Future of Learning in India — – Patil Sir- Pankaj Sir- Professor KK Aggarwal – Patil Sir- Pankaj Sir- Speaker 1 – Patil Sir- Suresh Sir- Speaker 1 …
S19
https://app.faicon.ai/ai-impact-summit-2026/ai-20-reimagining-indian-education-system — Thank you Pranavji for the presentation. Today as a panelist now we have Professor KK Agarwal sir, President South Asian…
S20
AI 2.0 The Future of Learning in India — – Pranav Kothari- Patil Sir – Pranav Kothari- Professor KK Aggarwal – Pranav Kothari- Speaker 2
S21
AI 2.0 The Future of Learning in India — Speakers:Pranav Kothari, Pankaj Sir, Professor KK Aggarwal Speakers:Pranav Kothari, Patil Sir Speakers:Pranav Kothari,…
S22
Closing Ceremony — This argument positions artificial intelligence as a transformative force rather than merely a technological tool. It su…
S23
Global AI Governance: Reimagining IGF’s Role & Impact — Mario Nobile: Thank you. I agree with Ivana Bartoletti, and I’ll try to answer also to friends from Nigeria. I think tec…
S24
Education meets AI — Collaborating with Stanford Research to incorporate learning science into adaptive learning systems for students He str…
S25
Artificial intelligence (AI) – UN Security Council — The global focus on Artificial Intelligence (AI) capacity-building efforts has been a significant topic of discussion am…
S26
WSIS Action Lines C4 and C7:E-employment: Emerging technologies in the world of work: Addressing challenges through digital skills — The discussion centred on the urgent need to address rapidly evolving skill requirements in an era of technological tran…
S27
HIGH LEVEL LEADERS SESSION I — Need for discussion on available technologies and their latest advancements
S28
Bridging the Digital Skills Gap: Strategies for Reskilling and Upskilling in a Changing World — Cosmas Luckyson Zavazava This comment set the foundational tone for the entire discussion, establishing that the real c…
S29
The future of work: preparing for automation and the gig economy — Concerns about the future of work also come from ongoing technological advancements in automation and AI. Some worry tha…
S30
AI Transformation in Practice_ Insights from India’s Consulting Leaders — The audience member from a GCC background highlights the difficulty of taking graduates with current educational backgro…
S31
WSIS Action Line Facilitators Meeting: 20-Year Progress Report — Development | Human rights | Online education UNESCO is providing policy guidance on AI in education, focusing on frame…
S32
AI (and) education: Convergences between Chinese and European pedagogical practices — This comment was insightful because it challenged one of the most fundamental structural assumptions of higher education…
S33
State of play of major global AI Governance processes — In accordance with this, extensive research and algorithmic advancements have been integrated into public policy-making …
S34
DC-IoT Progressing Global Good Practice for the Internet of Things | IGF 2023 — Maarten Botterman:Yes, thank you for that, Wout. What we see is the rapid developments make it more and more difficult a…
S35
Searching for Standards: The Global Competition to Govern AI | IGF 2023 — Audience:Thank you so much, Dr. Ali Mahmood. I’m from Pakistan. I’m heading a provincial government entity that is invol…
S36
AI and Global Power Dynamics: A Comprehensive Analysis of Economic Transformation and Geopolitical Implications — in the world in terms of policy and regulation. When Vision 2030 was launched by His Royal Highness the Crown Prince, we…
S37
The Gig Economy: Positioning Higher Education at the Center of the Future of Work (USAID Higher Education Learning Network) — Higher education has the potential to play a central role in shaping the future of work. It can address the crisis of un…
S38
AI 2.0 The Future of Learning in India — India has millions and trillions of problems in each and every corner. You pick up one problem, solve it. You get your d…
S39
Future-Ready Education: Enhancing Accessibility & Building | IGF 2023 — By doing so, an inclusive, equitable, and technologically proficient education system can be fostered, preparing student…
S40
AI 2.0 The Future of Learning in India — Traditional higher education focused on academic credentials should transform into practical problem-solving education. …
S41
YouthLead: Inclusive digital future for all — Yurii Romashko: Today, there are thousands of young leaders from all of the world here. Not everybody was able to join t…
S42
Digital Inclusion Through a Multilingual Internet | IGF 2023 WS #297 — An important aspect in addressing language barriers and promoting inclusivity is empowering communities to have control …
S43
Digital divides & Inclusion — Collaboration, education, and addressing cultural barriers are all crucial components in enabling this change and ensuri…
S44
Comprehensive Discussion Report: AI’s Existential Challenge to Human Identity and Society — The discussion’s foundation rested on Harari’s crucial distinction between AI as an agent rather than a mere tool. He ar…
S45
Fireside Chat The Future of AI & STEM Education in India — Arguments: AI should be seen as a tool and opportunity to scale up and advance certain aspects of education, but it canno…
S46
Enhancing rather than replacing humanity with AI — AI democratises expertise that was previously limited by resources. People in underserved areas have access to sophistic…
S47
AI, smart cities, and the surveillance trade-off — Barcelona offers a contrasting approach that centres human agency in AI deployment. Under the leadership of Chief Technol…
S48
AI cheating scandal at University sparks concern — Hannah, a university student, admits to using AI to complete an essay when overwhelmed by deadlines and personal illness. …
S49
NSPRA warns AI must complement, not replace, human voices in education — A new report from the National School Public Relations Association (NSPRA) and ThoughtExchange highlights the growing rol…
S50
Musk’s Grok AI struggles with news accuracy — Grok, Elon Musk’s AI model available on the X platform, encountered significant issues in accuracy following the attempte…
S51
How AI is reshaping US intelligence operations — The US intelligence community is fully embracing generative AI, marking a significant shift towards transparency in its ado…
S52
Diplomatic policy analysis — Overreliance on technology: While machine learning and analytics are powerful tools, they are not infallible. Overdepende…
S53
Artificial intelligence (AI) – UN Security Council — The discussions across various sessions highlighted several risks associated with the over-reliance on AI-powered conten…
S54
Driving Social Good with AI_ Evaluation and Open Source at Scale — Summary: While all speakers acknowledge the need for human involvement, they differ on the extent to which automation can…
S55
AI Meets Agriculture Building Food Security and Climate Resilien — Disagreement level: Low to moderate disagreement level with significant implications for AI governance in agriculture. Th…
S56
AI (and) education: Convergences between Chinese and European pedagogical practices — 1. **Universities and teachers remain essential** but must transform from knowledge transmitters to coaches and facilita…
S57
The National Education Association approves AI policy to guide educators — The US National Education Association (NEA) Representative Assembly (RA) delegates have approved the NEA’s first policy st…
S58
What policy levers can bridge the AI divide? — ## Forward-Looking Perspectives ## Infrastructure as Foundation ## Key Challenges and Opportunities ## Regulatory App…
S59
What is it about AI that we need to regulate? — Based on discussions across multiple IGF 2025 sessions, several fundamental assumptions about digital inclusion need cha…
S60
Open Forum #30 Harnessing GenAI to transform Education for All — Mohamed Shareef argues that generative AI is creating a new front in the digital divide, as developed countries invest h…
S61
WS #110 AI Innovation Responsible Development Ethical Imperatives — Gong addresses the need for inclusive development policies that ensure technology access for developing nations and prev…
S62
AI 2.0 The Future of Learning in India — This argument presents findings from a survey conducted in Delhi showing significant AI adoption among school students. …
S63
AI 2.0 Reimagining Indian education system — Evidence: Specific numbers: 15 lakh total schools in India, only 4 lakh have computers, ICT labs and tablets; 25 crore ch…
S64
AI 2.0 Reimagining Indian education system — And we’ll show you some of the really cool use cases and demos where voice-to-voice gets translated on the device. So …
S65
The future of work: preparing for automation and the gig economy — Concerns about the future of work also come from ongoing technological advancements in automation and AI. Some worry tha…
S66
AI 2.0 The Future of Learning in India — Now, we have just launched going to release one more report, usage of AI in school education. In next month, we are goin…
S67
[Briefing #51] Internet governance forecast for 2019 — A clearer understanding of AI can help policymakers and businesses take action more quickly on the future of work strate…
S68
Town Hall: How to Trust Technology — Additionally, Thompson raises concerns about potential job losses, particularly in the digital space, due to emerging te…
S69
Defying Cognitive Atrophy in the Age of AI: A World Economic Forum Stakeholder Dialogue — And then there is the resourcing, the possible divide in education. There could be the highly resourced private schools …
S70
Human Rights-Centered Global Governance of Quantum Technologies: Implications for AI, Digital Rights, and the Digital Divide — This comment identifies a fundamental gap in current governance approaches and proposes concrete structural reforms. It …
S71
AI (and) education: Convergences between Chinese and European pedagogical practices — This comment was insightful because it challenged one of the most fundamental structural assumptions of higher education…
S72
Fireside Chat The Future of AI & STEM Education in India — Welcome to the panel, sir. Let me now invite Dr. Raj Kumar, Founding Vice-Chancellor at O.P. Jindal University. Dr. Ra…
S73
Upskilling for the AI era: Education’s next revolution — The tone is consistently optimistic, motivational, and action-oriented throughout. The speaker maintains an enthusiastic…
S74
AI in education: Leveraging technology for human potential — ## Institutional Transformation and Partnerships ## Speaker Background and Personal Mission ## Vision for the Future …
S75
Opening remarks — Hartmut Glaser:of Science and Technology. was focused on artificial intelligence. President Lula da Silva asked us to di…
S76
AI Governance Dialogue: Steering the future of AI — ## Speaker and Context Doreen Bogdan Martin: Thank you. And we now have a chance together to reflect on AI governance w…
S77
The cognitive cost of AI: Balancing assistance and awareness — The rapid integration of AI tools like ChatGPT into daily life has transformed how we write, think, and communicate. AI …
S78
Anthropic report shows AI is reshaping work instead of replacing jobs — A new report by Anthropic suggests fears that AI will replace jobs remain overstated, with current use showing AI supporti…
S79
Study finds AI summaries can flatten understanding compared with reading sources — AI summaries can speed learning, but an extensive study finds they often blunt depth and recall. More than 10,000 particip…
S80
When language models fabricate truth: AI hallucinations and the limits of trust — AI has come far from rule-based systems and chatbots with preset answers.Large language models (LLMs), powered by vast a…
S81
How nonprofits are using AI-based innovations to scale their impact — It’s, I think it’s somewhere between the pilot and the rollout. So we, around 15 teachers I think have had 57 or 75, 57 …
S82
https://dig.watch/event/india-ai-impact-summit-2026/ai-2-0-the-future-of-learning-in-india — Then secondly, as I mentioned, when it comes to accuracy for logical or numerical subjects, there is relatively lower re…
S83
AI and the future of work: Global forum highlights risks, promise, and urgent choices — At the 20th Internet Governance Forum held in Lillestrøm, Norway, global leaders, industry experts, and creatives gathere…
S84
Bjorn Ulvaeus says AI is ‘an extension of your mind’ — ABBA legend Bjorn Ulvaeus is working on a new musical with the help of AI, describing the technology as ‘an extension of …
S85
High-Level Track Facilitators Summary and Certificates — This anecdote powerfully illustrates a critical gap between technological advancement and educational preparedness. It t…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
P
Pranav Kothari
1 argument · 160 words per minute · 1084 words · 404 seconds
Argument 1
Current state of AI adoption in school education (survey findings) – High usage among private‑school students (≈50%) (Pranav Kothari) – Primary purposes: information search and writing assistance; limited for calculations and structured problem‑solving (Pranav Kothari) – Students perceive AI as helpful for exam preparation, yet report accuracy problems and hallucinations (Pranav Kothari) – AI is viewed as a supplementary tool, not a replacement for teachers (Pranav Kothari)
EXPLANATION
The CPRG survey of Delhi private‑school students found that roughly half of them use generative AI tools several times a week, mainly for looking up academic information and getting writing help. Students rate AI as useful for preparing for school and entrance exams, but they also report frequent inaccuracies and hallucinations, especially in logical or numerical tasks. Across the board, respondents consider AI a complementary aid rather than a substitute for traditional teacher‑led instruction.
EVIDENCE
The study reports that “nearly 50 % of students, and these are … private schools in Delhi, … use AI based tools” and that usage includes generative platforms such as ChatGPT and Gemini [24-27]. It notes that AI is primarily used for “searching for new academic information while studying or writing assistance” and that science students show limited use for calculations because of low accuracy [32]. Perceived helpfulness for both school and entrance exams is high, yet students experience “AI hallucination” and lower accuracy for logical or numerical subjects, indicating reliability concerns [39-42][45-48]. Finally, the respondents expressed an “overwhelming support for traditional human interaction based learning” and view AI as a supplementary tool rather than a replacement [56-58].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The Delhi private-school survey reports ~50% student usage, mainly for information search and writing help, with limited use for calculations and noted hallucinations, and students see AI as a supplement rather than a replacement [S8], [S2], [S3].
MAJOR DISCUSSION POINT
AI adoption in schools
P
Professor KK Aggarwal
1 argument · 150 words per minute · 573 words · 227 seconds
Argument 1
AI as a transformative force compared with the earlier IT wave – AI adoption is faster than previous technologies; must augment creativity rather than shortcut thinking (Professor KK Aggarwal) – AI represents a 360‑degree paradigm shift; institutions that do not adapt will become obsolete (Speaker 2) – AI can break language barriers, enabling inclusive communication across diverse regions (Speaker 2)
EXPLANATION
Professor Aggarwal argues that AI is being adopted by younger generations at a speed that surpasses the earlier IT wave and should be used to enhance creativity rather than replace thinking. He and other panelists stress that AI constitutes a 360‑degree shift that will render institutions that fail to adapt obsolete. Additionally, AI’s multilingual capabilities can dissolve language barriers, allowing communication across India’s diverse linguistic landscape.
EVIDENCE
Aggarwal observes that “AI is being adopted by the youngsters even faster” than the IT wave and cautions that AI should “supplement our creativity” and not become a shortcut that diminishes thinking powers [78-84]. Speaker 2 describes AI as a “360 degree shift” and warns that any organization or institution that does not engage with this new reality “will be fossilized” [97-101]. He further notes that AI can “dismantle the barrier” of language, giving an example of an AI lab that translates Bhojpuri to multiple languages, thereby enabling inclusive communication [136-138].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Panelists note AI is being adopted faster than the earlier IT wave and should supplement, not replace, creativity [S3]; AI is framed as a transformative force across society and the economy [S22]; multilingual AI translation capabilities are highlighted as breaking language barriers [S3].
MAJOR DISCUSSION POINT
AI as a transformative force
P
Patil Sir
1 argument · 151 words per minute · 2136 words · 847 seconds
Argument 1
Challenges of AI integration in Indian education – Severe digital divide: urban schools have ICT labs, many rural schools lack basic infrastructure (Patil Sir) – Bias, hallucinations, and low accuracy demand human supervision and ethical safeguards (Pankaj Sir) – Large proportion of teachers lack AI literacy; need targeted up‑skilling (Patil Sir) – Treating AI as a human entity creates mental stress; AI must be used as a tool, not a substitute (Patil Sir)
EXPLANATION
Patil Sir highlights the stark digital divide in India, where a minority of schools possess ICT infrastructure while many rural institutions lack basic connectivity. He points out that AI systems suffer from bias, hallucinations, and accuracy issues, requiring human oversight and ethical guidelines. Moreover, most teachers are not AI‑literate, necessitating focused up‑skilling, and he warns against anthropomorphising AI, which can cause mental stress among learners.
EVIDENCE
Patil cites national statistics on India’s mobile and internet user base and notes that only about 4-5 lakh of the country’s roughly 15 lakh schools have computers or ICT labs, indicating a massive infrastructure gap, especially between urban and rural schools [214-222]. He also notes that only a fraction of the 1 crore teachers are AI-savvy, emphasizing the need for capacity building [221-222]. Regarding AI reliability, Pankaj Sir mentions “bias, hallucinations” and the necessity for “AI supervision” to ensure ethical use [205-208][180-183]. Patil stresses that AI must be treated as a machine, not a human, to avoid mental stress, and cites the introduction of an AI curriculum in third grade to teach children about AI’s nature and risks [234-239].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Reports highlight stark infrastructure gaps between urban and rural schools and uneven technology access across regions [S3]; the digital divide and need for capacity building for teachers are emphasized in the discussion of AI integration challenges [S2].
MAJOR DISCUSSION POINT
AI integration challenges
P
Pankaj Sir
1 argument · 132 words per minute · 1189 words · 536 seconds
Argument 1
Reimagining curriculum, assessment, and teacher roles – Curriculum revisions can be driven by technology dashboards, enabling rapid, low‑cost updates (Pankaj Sir) – Teachers should evolve into mentors, learning designers, and AI supervisors rather than content deliverers (Pankaj Sir) – AI can support adaptive learning but cannot fully replace human interaction in the classroom (Professor KK Aggarwal) – AI‑based assessment and standards development are emerging; future regulators may rely on AI for 70‑80 % of evaluations (Pankaj Sir)
EXPLANATION
Pankaj Sir describes how curriculum redesign can be automated through technology dashboards, allowing large‑scale updates without traditional meetings or expenses. He envisions teachers transitioning to mentor and AI‑supervisor roles, while acknowledging that AI cannot replace human interaction in learning. He also foresees regulators using AI for the majority of assessment and standards work, projecting 70‑80 % AI‑driven evaluation.
EVIDENCE
He explains that the curriculum revision exercise used a “techno-based” dashboard system, revising 72 programs at zero cost, and later applied the same model to the University of Delhi, completing revisions in two months without formal meetings or budgetary spend [169-173]. He further states that teachers will become “mentors, learning designers, and AI supervisors” rather than mere content deliverers, emphasizing AI as an assistant that requires supervision [176-180]. Aggarwal’s comment that students still prefer traditional human interaction reinforces the view that AI is supplementary, not a replacement [56-58]. Finally, Pankaj predicts that for future regulators “70 to 80 percent assessment will be done through AI”, indicating a shift toward AI-driven evaluation [404-408].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
A technology-enabled dashboard was used to overhaul 72 programmes without additional cost, demonstrating rapid curriculum revision [S1]; teachers are described as moving toward mentor and learning-designer roles, with AI as an assistant, in multiple sources [S2], [S8]; emerging AI-driven assessment is noted as a future trend in the broader AI-education discourse.
MAJOR DISCUSSION POINT
Curriculum and teacher role redesign
S
Speaker 2
1 argument · 162 words per minute · 1035 words · 382 seconds
Argument 1
Vision for the future of Indian higher education and economy – Building world‑class institutions is essential to unlock India’s potential GDP (≈$70‑$150 trillion) and global leadership (Speaker 2) – Shift from degree‑centric to problem‑solving, creation‑nation model; education should produce innovators, not just credential holders (Suresh Sir) – Integrate school and higher education ecosystems; foster industry‑university partnerships and AI labs to bridge gaps (Patil Sir) – Industry (e.g., Intel) can provide localized AI solutions, language translation, and skill‑based curricula to reach Tier‑2/3 and rural learners (Speaker 3)
EXPLANATION
Speaker 2 argues that India must develop world‑class higher‑education institutions to realize a GDP potential of $70‑$150 trillion, positioning the country as a global leader. Suresh Sir adds that the system should move from degree‑focused to problem‑solving, creativity‑driven education. Patil Sir stresses the need for integration between school and higher‑education ecosystems and industry partnerships, while Speaker 3 highlights how companies like Intel can deliver localized AI tools, multilingual translation, and skill‑based curricula to reach underserved regions.
EVIDENCE
Speaker 2 notes that the United States dominates globally due to its higher-education system and that Chinese universities now lead AI research, arguing that building world-class institutions is crucial for India’s projected $70-$150 trillion GDP potential [107-115]. Suresh Sir outlines three points: moving from a “download nation” to a “creation nation”, shifting from degree-centric to problem-solving education, and connecting primary, secondary, and higher-education systems [387-401]. Patil Sir describes integration initiatives such as universities visiting 100 schools, AI labs, and industry-university MOUs, emphasizing the need for coordinated ecosystems [426-444]. Speaker 3 (Intel) provides examples of AI-enabled language translation, offline AI PCs, and skill-based curricula like the Unnati program, demonstrating how industry can supply localized solutions for Tier-2/3 and rural learners [292-335].
MAJOR DISCUSSION POINT
Future higher‑education and economic vision
S
Suresh Sir
3 arguments · 154 words per minute · 395 words · 153 seconds
Argument 1
Transform education from a consumption‑focused to a creation‑focused model
EXPLANATION
Suresh Sir argues that India must move beyond being a digital consumer nation and develop systems that enable people to create content and innovate. He stresses that education should cultivate creativity rather than merely facilitating consumption of existing digital resources.
EVIDENCE
He explains that a decade ago the debate was whether India would be a download nation, but now the conversation has shifted to becoming a creation nation, emphasizing the need for systems that enable people to produce content rather than just consume it [387-389].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The call to move from a download/consumption nation to a creative, production-oriented nation is articulated in the AI education reports, emphasizing the need for systems that enable creation rather than mere consumption [S8], [S1].
MAJOR DISCUSSION POINT
Reorienting education towards creativity and production
Argument 2
Shift from degree‑centric to problem‑solving institutions
EXPLANATION
He proposes that academic credentials should be tied to solving real‑world problems, suggesting a model where students tackle a specific challenge, earn a degree, and move on, instead of focusing solely on passing examinations.
EVIDENCE
Suresh Sir states that the system should shift from degree-awarding institutions to problem-solving institutions, where a student picks a problem, solves it, receives a degree, and proceeds [389-394].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The recommendation to reorient institutions toward problem-solving outcomes, where degrees are awarded for real-world challenge completion, is echoed in the AI-driven education discourse [S8], [S1].
MAJOR DISCUSSION POINT
Redefining the purpose of higher education
DISAGREED WITH
Speaker 2
Argument 3
Integrate primary, secondary and higher education through technology
EXPLANATION
He highlights that the current education system operates in silos and calls for technology‑enabled interconnection of all levels, similar to the integrated model observed in the United States, to create a thriving ecosystem.
EVIDENCE
He notes that the 12th education system, higher education, and primary education work in silos and that technology should be used to interconnect the entire system, enabling a more cohesive educational landscape [397-400].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The need for a technology-enabled, seamless ecosystem linking school and higher-education levels is highlighted as a priority for reform in the AI education literature [S8], [S2].
MAJOR DISCUSSION POINT
Creating a unified, technology‑driven education ecosystem
S
Speaker 1
3 arguments · 68 words per minute · 1284 words · 1122 seconds
Argument 1
Joint policy research initiatives are essential for guiding AI integration in education
EXPLANATION
Speaker 1 introduces a collaborative effort between CPRG and the Future of Society, underscoring the importance of think‑tanks and stakeholder consultations to shape policies for emerging technologies in education.
EVIDENCE
He describes CPRG as a policy think tank researching governance issues, mentions the Emerging Technology Centre, and explains that the Future of Society project produces reports and conducts extensive stakeholder consultations on AI in education [2-7].
MAJOR DISCUSSION POINT
Role of collaborative policy research in AI‑enabled education
Argument 2
Urgent need to address skill obsolescence due to AI and emerging technologies
EXPLANATION
He points out widespread anxiety that current skills may become irrelevant within the next five to ten years because of rapid AI adoption, calling for proactive identification of future skills and jobs.
EVIDENCE
He notes the fear that acquired skills may not survive the next 5-10 years and that tools could render human skills irrelevant, emphasizing the need to understand transformation, future skills, and jobs [10-13].
MAJOR DISCUSSION POINT
Anticipating and preparing for AI‑driven skill shifts
Argument 3
Post‑COVID rapid changes demand institutional assessment and adaptation to AI challenges
EXPLANATION
Speaker 1 asks panelists how institutions should analyze the fast‑changing educational landscape after COVID‑19 and develop strategies to meet AI‑related challenges, highlighting the need for agile institutional responses.
EVIDENCE
He references the post-COVID transformation of educational institutions and asks how to assess and suggest actions for institutions to address AI and emerging technology challenges [88-89].
MAJOR DISCUSSION POINT
Institutional agility in the post‑COVID AI era
S
Speaker 3
3 arguments · 168 words per minute · 1256 words · 447 seconds
Argument 1
Industry‑academia‑government collaboration is vital to scale AI‑enabled education solutions
EXPLANATION
Speaker 3 stresses that effective AI integration in education requires coordinated efforts among industry, government, and academia, citing Intel’s partnerships with startups, higher education, and K‑12 to develop impactful AI tools.
EVIDENCE
She explains that Intel works closely with higher ed and K-12, collaborates with startups to create solutions for students, and mentions programs such as Unnati, AI PCs, and localized AI applications that bridge gaps for Tier-2/3 and rural learners [292-335].
MAJOR DISCUSSION POINT
Collaborative ecosystem for AI in education
Argument 2
AI can dismantle language barriers through multilingual, offline translation, fostering inclusive learning
EXPLANATION
She highlights AI’s capability to translate regional languages like Bhojpuri into multiple languages on devices without internet connectivity, thereby enabling learners across linguistic diversity to access content.
EVIDENCE
She describes AI translating Bhojpuri to other languages, voice-to-voice translation on devices that operate without cloud connectivity, and localized content delivery that overcomes language obstacles [304-335].
MAJOR DISCUSSION POINT
AI‑driven multilingual accessibility
Argument 3
AI‑driven 24/7 tutoring offers safe, non‑judgmental support for introverted learners
EXPLANATION
She proposes AI bots that act as continuous tutors, providing personalized assistance in the learner’s language without judgment, addressing gaps when human teachers are unavailable and reducing reliance on potentially biased human interaction.
EVIDENCE
She narrates a scenario where a bot teaches a child in their language, creates a safe space, functions as a 24-7 tutor, and mentions that local device deployment can lower hallucination risks [350-356].
MAJOR DISCUSSION POINT
AI as a supportive, ethical tutoring tool
DISAGREED WITH
Patil Sir
Agreements
Agreement Points
AI is widely used in schools but is viewed as a supplementary tool; concerns about accuracy, hallucinations and the need for human oversight are common.
Speakers: Pranav Kothari, Professor KK Aggarwal, Pankaj Sir, Patil Sir
– Current state of AI adoption in school education (survey findings): high usage among private‑school students (~50%); primary purposes are information search and writing assistance, with limited use for calculations and structured problem‑solving (Pranav Kothari)
– AI as a transformative force compared with the earlier IT wave: adoption is faster than previous technologies and must augment creativity rather than shortcut thinking (Professor KK Aggarwal)
– Reimagining curriculum, assessment, and teacher roles: teachers should evolve into mentors, learning designers, and AI supervisors rather than content deliverers (Pankaj Sir)
– Challenges of AI integration in Indian education: treating AI as a human entity creates mental stress; AI must be used as a tool, not a substitute (Patil Sir)
The CPRG survey shows roughly half of Delhi private-school students use generative AI several times a week for information search and writing help, but report hallucinations and low accuracy for logical tasks; panelists repeatedly stress that AI should complement, not replace, teachers and must be supervised (Pranav [24-27][32][39-42][45-48][56-58]; Aggarwal [78-84]; Pankaj [176-180][190-194]; Patil [234-239]).
POLICY CONTEXT (KNOWLEDGE BASE)
Guidance from the National School Public Relations Association stresses that AI must complement, not replace, human voices in education, echoing broader policy recommendations to keep human oversight central [S49]; similar cautions appear in the NEA AI policy and industry commentaries that frame AI as a tool, not a substitute for teachers [S45][S46].
AI represents a rapid, paradigm‑shifting force that is being adopted faster than the earlier IT wave, and institutions that fail to adapt risk obsolescence.
Speakers: Professor KK Aggarwal, Speaker 2, Patil Sir
– AI as a transformative force compared with the earlier IT wave: adoption is faster than previous technologies and must augment creativity rather than shortcut thinking (Professor KK Aggarwal)
– Vision for the future of Indian higher education and economy: AI is a 360‑degree paradigm shift, and institutions that do not adapt will become fossilized (Speaker 2)
– AI can break language barriers, enabling inclusive communication across diverse regions (Speaker 2)
Aggarwal notes AI is being adopted by youngsters even faster than the IT wave and should supplement creativity; Speaker 2 describes AI as a 360-degree shift that will fossilise non-adapting organisations; Patil also highlights AI’s ability to dismantle language barriers, underscoring the urgency of adaptation (Aggarwal [78-84]; Speaker 2 [97-101]; Patil [136-138]).
POLICY CONTEXT (KNOWLEDGE BASE)
Global AI governance reports note the unprecedented speed of AI diffusion, outpacing earlier IT adoption cycles and prompting calls for proactive policy to avoid institutional lag [S33][S34].
Closing the digital divide and overcoming language barriers are essential for inclusive AI‑enabled education.
Speakers: Patil Sir, Speaker 2, Speaker 3
– Challenges of AI integration in Indian education: severe digital divide, with ICT labs concentrated in urban schools while many rural schools lack basic infrastructure (Patil Sir)
– Vision for the future of Indian higher education and economy: the digital divide is highlighted as a fundamental shift (Speaker 2)
– Industry‑academia‑government collaboration is vital to scale AI‑enabled education solutions (Speaker 3)
– AI can dismantle language barriers through multilingual, offline translation, fostering inclusive learning (Speaker 3)
Patil points to the massive infrastructure gap (only about 4 lakh of India’s roughly 15 lakh schools have computers or ICT labs) and the shortage of AI-literate teachers; Speaker 2 stresses that post-COVID shifts expose a fundamental digital divide; Intel’s representative describes multilingual AI tools that work offline to reach Tier-2/3 and rural learners (Patil [214-222][224-231][236-239]; Speaker 2 [94-98][101-102]; Speaker 3 [304-311][332-336]).
POLICY CONTEXT (KNOWLEDGE BASE)
IGF sessions on digital inclusion highlight multilingual internet access and the need to bridge connectivity gaps as preconditions for equitable AI-driven learning [S42][S43]; recent discussions on AI’s role in widening the digital divide reinforce this priority [S60].
Curriculum redesign and teacher roles should be driven by technology, turning teachers into mentors and AI supervisors while keeping human interaction central.
Speakers: Pankaj Sir, Professor KK Aggarwal, Patil Sir
Reimagining curriculum, assessment, and teacher roles – Curriculum revisions can be driven by technology dashboards, enabling rapid, low‑cost updates (Pankaj Sir)
AI is a transformative force … must supplement creativity (Professor KK Aggarwal)
AI cannot replace teachers; it should assist and be supervised (Patil Sir)
Pankaj describes a zero-cost, dashboard-based overhaul of 72 programmes and envisions teachers as mentors and AI supervisors; Aggarwal reiterates AI’s supplementary nature; Patil stresses that AI is an assistant requiring human supervision (Pankaj [169-173][176-180][190-194][196-200]; Aggarwal [56-58]; Patil [176-180][184-188]).
POLICY CONTEXT (KNOWLEDGE BASE)
Policy briefs from the AI-education convergence study and the NEA’s AI policy call for teachers to shift from knowledge transmitters to coaches and AI facilitators, preserving human interaction while leveraging technology [S56][S57].
Future higher‑education should be world‑class, problem‑solving oriented, and tightly linked with industry and school ecosystems.
Speakers: Speaker 2, Suresh Sir, Patil Sir, Speaker 3
Vision for the future of Indian higher education and economy – Building world‑class institutions is essential to unlock India’s potential GDP … (Speaker 2)
Transform education from a consumption‑focused to a creation‑focused model (Suresh Sir)
Integrate school and higher education ecosystems; foster industry‑university partnerships (Patil Sir)
Industry‑academia‑government collaboration is vital to scale AI‑enabled education solutions (Speaker 3)
Speaker 2 links world-class universities to a $70-$150 trillion GDP vision; Suresh calls for a shift from degree-centric to problem-solving education; Patil stresses integration of schools with universities and industry MOUs; Intel’s representative cites concrete industry-driven curricula and AI labs that bridge gaps (Speaker 2 [107-115][118-124][129-136]; Suresh [387-401]; Patil [426-444]; Speaker 3 [292-335]).
POLICY CONTEXT (KNOWLEDGE BASE)
USAID’s Higher Education Learning Network emphasizes aligning curricula with industry needs to boost competitiveness, while Indian AI-2.0 proposals advocate problem-solving degrees that address real-world challenges [S37][S40].
Ethical safeguards, supervision, and avoiding anthropomorphising AI are necessary to prevent bias, hallucinations, and mental stress.
Speakers: Patil Sir, Pankaj Sir, Professor KK Aggarwal, Speaker 3
Challenges of AI integration in Indian education – Bias, hallucinations, and low accuracy demand human supervision and ethical safeguards (Patil Sir)
AI‑based assessment and standards development are emerging; future regulators may rely on AI for 70‑80 % of evaluations (Pankaj Sir)
Only thing which one has to see is … AI should supplement our creativity, not give us a shortcut … (Professor KK Aggarwal)
AI‑driven 24/7 tutoring offers safe, non‑judgmental support; offline devices reduce hallucination risk (Speaker 3)
Patil warns that AI must be treated as a machine to avoid mental stress and calls for supervision; Pankaj notes bias and hallucinations and the need for AI-supervised assessment; Aggarwal stresses AI should not shortcut creativity; Intel’s speaker highlights offline AI PCs that limit hallucinations and provide safe tutoring (Patil [234-239]; Pankaj [205-208][180-183]; Aggarwal [82-84]; Speaker 3 [340-356]).
POLICY CONTEXT (KNOWLEDGE BASE)
Harari’s distinction between AI as an agent versus a tool underpins ethical frameworks calling for safeguards against bias and over-reliance, echoed in security council discussions on AI risks and social-good AI reports stressing human oversight [S44][S52][S54].
Similar Viewpoints
Both emphasize that AI should complement, not replace, traditional teaching and that human oversight remains essential (Pranav [56-58]; Pankaj [176-180][190-194]).
Speakers: Pranav Kothari, Pankaj Sir
Current state of AI adoption in school education … AI is viewed as a supplementary tool, not a replacement for teachers (Pranav Kothari)
Reimagining curriculum, assessment, and teacher roles … AI cannot replace teachers; it should assist (Pankaj Sir)
Both portray AI as a rapid, paradigm‑shifting technology that will render non‑adapting institutions obsolete (Aggarwal [78-84]; Speaker 2 [97-101]).
Speakers: Professor KK Aggarwal, Speaker 2
AI as a transformative force compared with the earlier IT wave … faster adoption (Professor KK Aggarwal)
Vision for the future of Indian higher education … AI is a 360‑degree paradigm shift; institutions that do not adapt will be fossilized (Speaker 2)
Both stress that multilingual AI tools are key to bridging linguistic and geographic divides in education (Patil [236-239]; Speaker 3 [304-311][332-336]).
Speakers: Patil Sir, Speaker 3
Challenges of AI integration … digital divide, language barriers (Patil Sir)
Industry‑academia‑government collaboration … AI can dismantle language barriers through multilingual, offline translation (Speaker 3)
Both advocate moving from a consumption‑oriented education system to one that prioritises creation, problem‑solving and real‑world impact (Suresh [387-389]; Speaker 2 [118-124][387-389]).
Speakers: Suresh Sir, Speaker 2
Transform education from a consumption‑focused to a creation‑focused model (Suresh Sir)
Vision for the future of Indian higher education … shift from degree‑centric to problem‑solving, creation‑nation (Speaker 2)
Both highlight technology‑enabled curriculum redesign and the role of teachers as AI supervisors rather than content deliverers (Pankaj [169-173][176-180]; Patil [176-180][184-188]).
Speakers: Patil Sir, Pankaj Sir
Reimagining curriculum … technology‑driven dashboard revisions (Pankaj Sir)
AI cannot replace teachers; it should assist and be supervised (Patil Sir)
Unexpected Consensus
Both academic leaders and industry representatives agree on the importance of offline, device‑local AI solutions to reduce hallucinations and protect privacy.
Speakers: Professor KK Aggarwal, Pankaj Sir, Speaker 3
AI should supplement creativity and not become a shortcut (Professor KK Aggarwal)
Bias, hallucinations … future regulators may rely heavily on AI, implying the need for controlled deployment (Pankaj Sir)
AI‑driven 24/7 tutoring … offline devices reduce hallucination risk and ensure safe tutoring (Speaker 3)
While academic speakers emphasized supervision and ethical use, Intel’s speaker unexpectedly aligned by promoting offline AI PCs that keep data on-device, thereby addressing the same concerns about hallucinations and privacy (Aggarwal [82-84]; Pankaj [205-208]; Speaker 3 [340-356]).
POLICY CONTEXT (KNOWLEDGE BASE)
Policy deliberations on bridging the AI divide identify offline, edge-computing deployments as a strategy to limit data exposure and improve model reliability, aligning with privacy-focused governance recommendations [S58].
Overall Assessment

There is strong, cross‑sectoral consensus that AI is already permeating Indian schools, that it should be used as a complementary tool with robust human supervision, and that rapid, technology‑driven curriculum redesign and teacher‑role transformation are needed. All participants stress the urgency of closing the digital and language divide, embedding ethical safeguards, and re‑imagining higher education toward problem‑solving, world‑class institutions with industry partnerships.

High consensus – the majority of speakers, from academia, government, and industry, echo the same core ideas, suggesting a solid foundation for coordinated policy and implementation efforts.

Differences
Different Viewpoints
Extent of AI‑driven assessment and regulatory automation
Speakers: Pankaj Sir, Patil Sir
AI‑based assessment and standards development are emerging; future regulators may rely on AI for 70‑80 % of evaluations (Pankaj Sir)
Bias, hallucinations, and low accuracy demand human supervision and ethical safeguards (Patil Sir)
Treating AI as a human entity, rather than a tool, creates mental stress (Patil Sir)
Pankaj Sir envisions a future where regulators use AI for the majority (70-80 %) of assessment tasks, treating AI as an assistant that can evaluate at scale [404-408]. Patil Sir counters that AI systems suffer from bias, hallucinations and low accuracy, requiring human oversight and ethical safeguards, and stresses that AI must remain a tool rather than a decision-maker [180-183][234-239]. This creates a clear disagreement on the degree of autonomy to grant AI in assessment processes.
POLICY CONTEXT (KNOWLEDGE BASE)
Debates in AI governance forums highlight tension between scaling assessment automation and maintaining human judgment, with reports urging balanced oversight to avoid over-automation pitfalls [S52][S54].
How to address the digital divide for AI rollout in education
Speakers: Patil Sir, Professor KK Aggarwal
Severe digital divide: urban schools have ICT labs, while many rural schools lack basic infrastructure (Patil Sir)
AI can break language barriers, enabling inclusive communication across diverse regions (Professor KK Aggarwal)
Patil Sir highlights that only a small fraction of Indian schools possess computers or ICT labs, pointing to a massive infrastructure gap that must be solved before AI can be widely deployed [214-222]. Professor KK Aggarwal argues that AI’s multilingual capabilities can dismantle language barriers and act as a catalyst for inclusive education, implying that technology itself can bridge gaps without first solving the infrastructure deficit [136-138][97-101]. The two positions differ on whether infrastructure must precede AI adoption or whether AI can leap-frog the divide.
POLICY CONTEXT (KNOWLEDGE BASE)
IGF workshops and policy papers stress that infrastructure, multilingual support, and inclusive development policies are essential to prevent AI from deepening existing educational inequities [S60][S61][S42][S43].
Primary goal of higher‑education reform – economic competitiveness vs creativity/problem‑solving
Speakers: Speaker 2, Suresh Sir
Building world‑class institutions is essential to unlock India’s potential GDP ($70‑$150 trillion) and global leadership (Speaker 2)
Shift from degree‑centric to problem‑solving institutions (Suresh Sir)
Speaker 2 frames higher-education transformation as a lever to achieve massive GDP growth and global dominance, emphasizing the need for world-class research institutions [89-115]. Suresh Sir stresses that reform should prioritize fostering creativity and problem-solving, moving away from degree-centric models toward institutions that award credentials for solving real-world challenges [387-401]. Both seek change but diverge on the ultimate metric of success – economic scale versus creative capacity.
POLICY CONTEXT (KNOWLEDGE BASE)
Strategic documents from the Gig Economy initiative prioritize economic competitiveness and workforce alignment, whereas Indian AI-2.0 proposals and UNESCO-style recommendations champion creativity and problem-solving as core outcomes [S37][S40].
Conceptualisation of AI as a tutor versus AI as a mere tool
Speakers: Patil Sir, Speaker 3
Treating AI as a human entity, rather than a tool, creates mental stress (Patil Sir)
AI‑driven 24/7 tutoring offers safe, non‑judgmental support for introverted learners (Speaker 3)
Patil Sir warns that anthropomorphising AI creates mental stress for learners and insists AI should stay a machine, not a substitute for teachers [234-239]. Speaker 3 promotes AI-driven 24-hour tutoring that acts like a personal tutor, providing continuous, judgment-free assistance especially for introverted children [350-356]. This reflects a clash over the appropriate role and perception of AI in learning environments.
POLICY CONTEXT (KNOWLEDGE BASE)
Scholarly discussions differentiate AI as an autonomous tutor (agent) from AI as a supportive tool, with policy frameworks urging caution against anthropomorphising and over-reliance [S44][S45][S56].
Unexpected Differences
High‑level AI automation in assessment versus need for human oversight
Speakers: Pankaj Sir, Patil Sir
AI‑based assessment and standards development are emerging; future regulators may rely on AI for 70‑80 % of evaluations (Pankaj Sir)
Bias, hallucinations, and low accuracy demand human supervision and ethical safeguards (Patil Sir)
Treating AI as a human entity, rather than a tool, creates mental stress (Patil Sir)
Most of the panel consistently described AI as a supplementary aid, yet Pankaj’s push for 70-80 % AI-driven assessment contrasts sharply with Patil’s insistence on human supervision and ethical safeguards, revealing an unexpected split on the permissible level of AI autonomy in high-stakes evaluation contexts [404-408][180-183][234-239].
POLICY CONTEXT (KNOWLEDGE BASE)
Security council analyses and social-good AI evaluations warn that excessive automation in assessment can erode fairness, recommending human oversight as a safeguard [S52][S54].
AI as a primary tutor versus AI strictly as a tool
Speakers: Patil Sir, Speaker 3
Treating AI as a human entity, rather than a tool, creates mental stress (Patil Sir)
AI‑driven 24/7 tutoring offers safe, non‑judgmental support for introverted learners (Speaker 3)
While the majority of speakers warned against anthropomorphising AI, Speaker 3’s promotion of AI-driven 24-hour tutoring as a personal, judgment-free mentor was unexpected and conflicted with Patil’s stance that AI must remain a mere tool to avoid psychological harm [234-239][350-356].
POLICY CONTEXT (KNOWLEDGE BASE)
The same body of literature that debates tutor versus tool roles underscores the risk of treating AI as a primary educator, advocating policies that retain AI as an augmentative resource rather than a replacement [S44][S45][S56].
Overall Assessment

The panel shows consensus that AI must be part of India’s education future, but disagreements arise around the depth of AI autonomy (assessment vs human oversight), the sequencing of infrastructure versus technology deployment, the ultimate purpose of higher‑education reform (GDP growth vs creativity/problem‑solving), and the pedagogical framing of AI as a tool versus a tutor.

Disagreement level: moderate to high. The divergences touch on policy design, resource allocation, and ethical framing, implying that without reconciling these viewpoints, implementation may face friction between rapid AI deployment and the safeguards needed for equity, quality, and learner wellbeing.

Partial Agreements
All four speakers agree that AI should be integrated into the education system, but they propose different pathways: Pranav provides data‑driven evidence of current usage and calls for policy support; Pankaj advocates rapid, technology‑enabled curriculum overhaul using dashboards; Patil stresses the need to first close the digital and skills divide before AI can be effective; Aggarwal focuses on ensuring AI augments creativity rather than replaces thinking. The shared goal of AI integration is clear, yet the routes to achieve it diverge [24-27][39-42][56-58][169-173][214-222][78-84].
Speakers: Pranav Kothari, Pankaj Sir, Patil Sir, Professor KK Aggarwal
Current state of AI adoption in school education (survey findings) – High usage among private‑school students (~50%) – Primary purposes: information search and writing assistance; limited use for calculations and structured problem‑solving – Students perceive AI as helpful for exam preparation, yet report accuracy problems and hallucinations – AI is viewed as a supplementary tool, not a replacement for teachers (Pranav Kothari)
Curriculum revisions can be driven by technology dashboards, enabling rapid, low‑cost updates (Pankaj Sir)
Challenges of AI integration in Indian education – Severe digital divide: urban schools have ICT labs, many rural schools lack basic infrastructure – Bias, hallucinations, and low accuracy demand human supervision and ethical safeguards – A large proportion of teachers lack AI literacy and need targeted up‑skilling – Treating AI as a human entity creates mental stress; AI must be used as a tool, not a substitute (Patil Sir)
AI as a transformative force compared with the earlier IT wave – AI adoption is faster than previous technologies and must augment creativity rather than shortcut thinking (Professor KK Aggarwal)
All three agree that multi‑stakeholder collaboration is essential for AI‑driven education transformation. Speaker 3 highlights Intel’s partnerships with startups, higher‑ed and K‑12 institutions; Patil mentions MOUs with Google, Microsoft and AI labs in IITs; Speaker 2 calls for strategic partnerships to build world‑class institutions. While the consensus on collaboration exists, they differ on the primary actors and mechanisms—private‑sector driven programs (Speaker 3), government‑led MOUs and capacity building (Patil), and broad strategic alliances for economic competitiveness (Speaker 2) [292-335][226-236][89-115].
Speakers: Speaker 3, Patil Sir, Speaker 2
Industry‑academia‑government collaboration is vital to scale AI‑enabled education solutions (Speaker 3)
Challenges of AI integration in Indian education – Severe digital divide … – Bias, hallucinations … – Large proportion of teachers lack AI literacy … – Treating AI as a tool … (Patil Sir)
Vision for the future of Indian higher education and economy – Building world‑class institutions is essential to unlock India’s potential GDP ($70‑$150 trillion) and global leadership (Speaker 2)
Takeaways
Key takeaways
AI usage among private‑school students in Delhi is high (≈50%), primarily for information search and writing assistance, but less reliable for calculations and structured problem‑solving.
Students view AI as helpful for exam preparation, yet report accuracy problems, hallucinations, and limited trust in AI‑generated answers.
Across the panel, AI is seen as a faster, 360‑degree paradigm shift compared with the earlier IT wave; it can break language barriers and democratise access if deployed responsibly.
The digital divide remains stark: urban schools often have ICT labs, while many rural schools lack basic infrastructure, limiting equitable AI adoption.
Teachers are largely untrained in AI; the consensus is that AI should augment, not replace, teachers, who should evolve into mentors, learning designers, and AI supervisors.
Curriculum and assessment can be rapidly updated using technology dashboards and AI‑driven tools, but human oversight is essential to address bias, hallucinations, and ethical concerns.
Future Indian higher education must shift from degree‑centric models to problem‑solving, creation‑focused institutions that can compete globally and drive the nation’s economic potential.
Industry partnerships (e.g., Intel, IITs, startups) are crucial for delivering localized AI solutions, language translation, and skill‑based curricula to Tier‑2/3 and rural learners.
Resolutions and action items
Launch of the AI‑in‑school‑education report (already done) and upcoming reports on AI in higher education and the Future of Jobs.
Integrate an AI curriculum starting from Grade 3 to teach AI concepts, benefits, and risks.
Develop AI‑enabled teacher‑upskilling programmes (e.g., NPST, NMM) and AI supervision frameworks.
Implement technology‑driven curriculum revision processes (dashboard‑based) for rapid, low‑cost updates across universities.
Establish AI‑centric regulatory mechanisms (an AI‑oriented regulator; 70‑80% AI‑based assessment) for teacher‑education standards.
Create AI Centres of Excellence (COE) in education, with collaborations between IITs, industry partners, and government bodies.
Deploy localized AI tools (on‑device translation, offline tutoring) to address language barriers and limited connectivity.
Scale partnerships with schools (e.g., COEP’s outreach to 100 schools) to foster school‑higher‑education integration.
Continue developing ethical guidelines to prevent misuse of AI as a human substitute and to manage hallucinations and bias.
Unresolved issues
How to bridge the digital infrastructure gap for the estimated 4‑5 crore schools lacking computers/ICT labs.
Effective strategies for large‑scale AI literacy upskilling of the ~1 crore teachers, especially in remote areas.
Concrete standards and mechanisms for AI‑based assessment to ensure fairness, reliability, and mitigation of bias.
Long‑term funding models for sustained AI integration in under‑resourced schools and universities.
Specific policies to regulate AI‑generated content in education and prevent over‑reliance or mental stress among students.
Details on how AI will be incorporated into accreditation, degree validation, and international benchmarking.
Suggested compromises
Position AI as a supplementary tool rather than a full replacement for teachers, maintaining human interaction for empathy and ethical guidance.
Combine AI‑driven adaptive learning with teacher‑led mentorship, ensuring AI outputs are supervised and validated by educators.
Adopt a multi‑semester “AI spine” approach, integrating AI throughout curricula while retaining traditional pedagogical elements.
Use free generative AI models for broad access but acknowledge their limitations; supplement with purpose‑built, domain‑specific AI tools where needed.
Balance rapid AI adoption with safeguards to protect creativity, ensuring AI augments rather than shortcuts critical thinking.
Thought Provoking Comments
AI should supplement our creativity, not give us a shortcut that reduces our thinking powers.
Highlights the nuanced risk of AI becoming a crutch, emphasizing the need to preserve critical thinking while leveraging technology.
Shifted the discussion from merely describing AI usage to a deeper debate on pedagogical philosophy, prompting subsequent speakers (e.g., Pankaj Sir) to elaborate on the teacher’s evolving role as a mentor rather than a content deliverer.
Speaker: Professor KK Aggarwal
We must re‑imagine India’s education system for 2050‑2100; AI is a 360° paradigm shift and the nation that dominates AI will dominate the world for the next century.
Places AI within a grand geopolitical and economic narrative, moving the conversation from school‑level observations to national strategic imperatives.
Created a macro‑level turning point, leading others (Patil, Suresh, Pankaj) to discuss infrastructure, digital divide, and the need for systemic reforms rather than incremental tweaks.
Speaker: Speaker 2 (Ramananji)
Curriculum revision can be done techno‑based, without human meetings or large budgets; AI should be an assistant, not a master, and teachers become mentors and learning designers.
Introduces a concrete, scalable model for integrating AI into curriculum design and assessment, while preserving the human element.
Prompted a deeper exploration of governance vs. leadership in education (Pankaj later) and reinforced the earlier point about AI as a supplement, influencing Patil’s remarks on the AI spine and Viksit Bharat 2047.
Speaker: Pankaj Sir
The adoption curve for ChatGPT in India (40‑60 days to reach 5 crore users) shows a quantum jump compared with telephone (75 years) or radio (38 years).
Quantifies the speed of AI diffusion, underscoring the urgency of addressing infrastructure and equity challenges.
Served as a factual turning point that shifted the tone from abstract possibilities to concrete logistical concerns, leading to discussion of rural‑area gaps and the need for AI‑savvy teachers.
Speaker: Patil Sir
AI can break language barriers: a device that translates Bhojpuri to English, Tamil to 11 Indian languages, and can run locally without internet, providing a 24‑7 tutor.
Offers a tangible solution to a long‑standing problem (language of instruction), linking technology to inclusivity and reducing hallucination risk.
Redirected the conversation toward practical implementations, inspiring participants to mention local AI labs, AI curriculum in third grade, and the importance of content being available offline.
Speaker: Speaker 3 (Intel representative)
We need to move from a consumption nation to a creative, problem‑solving nation; universities should become problem‑solving institutions where solving a real societal issue earns a degree.
Proposes a radical redesign of the purpose of higher education, aligning academic outcomes with societal challenges.
Catalyzed further dialogue on interdisciplinary integration and the need for AI‑enabled assessment, influencing Patil’s vision of integrated school‑university ecosystems.
Speaker: Suresh Yadav
An AI‑oriented regulator could perform 70‑80 % of assessments; AI must also be used to develop Indian‑language content and preserve Indian knowledge systems.
Extends the earlier governance discussion to policy implementation, emphasizing AI’s role in standard‑setting and cultural preservation.
Steered the final part of the panel toward concrete policy recommendations, reinforcing earlier calls for AI‑driven assessment and prompting agreement from other panelists.
Speaker: Pankaj Sir (later segment)
Overall Assessment

The discussion evolved from a descriptive overview of AI usage in schools to a strategic, nation‑wide re‑imagining of education, driven by a handful of pivotal remarks. Professor Aggarwal’s caution about AI as a creative supplement set a philosophical baseline, which was expanded by Ramananji’s geopolitical framing of AI as a national imperative. Patil’s rapid‑adoption statistics injected urgency, while Pankaj’s techno‑based curriculum model and governance insights provided a practical roadmap. The Intel representative’s language‑translation example grounded the conversation in inclusive, on‑the‑ground technology, and Suresh’s call for a shift from consumption to creation reframed the purpose of higher education. Together, these comments redirected the panel from isolated observations to a cohesive vision of AI‑enabled, equity‑focused, and culturally rooted educational transformation.

Follow-up Questions
Investigate frequency and mitigation strategies for AI hallucinations experienced by students
Hallucinations can spread misinformation and undermine learning, so understanding their prevalence and how to reduce them is critical.
Speakers: Pranav Kothari, Aditi Nanda
Assess the accuracy of generative AI tools for logical and numerical tasks such as calculations
Low accuracy in STEM subjects limits AI usefulness and may affect student performance.
Speaker: Pranav Kothari
Compare the effectiveness of AI‑based learning tools versus traditional resources like YouTube and ICT‑based learning
Understanding relative benefits helps educators decide how to integrate AI alongside existing resources.
Speaker: Pranav Kothari
Explore adaptive‑learning capabilities of AI and its ability to personalize instruction for individual student needs
Personalization could improve outcomes, but current free models appear insufficient.
Speaker: Pranav Kothari
Examine why students still prefer human interaction over AI tutors and the implications for teaching models
Human interaction remains valued; AI should complement, not replace, teachers.
Speaker: Pranav Kothari
Research how higher education should be redesigned for 2050‑2100, including degree structures, infrastructure, and technology integration
Future economic and societal demands require a fundamentally new higher‑education system.
Speaker: Speaker 2 (unnamed senior participant)
Study AI’s role in breaking language barriers and enabling multilingual education across India
AI‑driven translation can democratize access to education for speakers of diverse Indian languages.
Speakers: Speaker 2, Patil Sir
Investigate the digital divide and uneven access to AI/technology in rural, tribal and under‑resourced schools
Equitable AI benefits require addressing infrastructure gaps.
Speakers: Pankaj Sir, Patil Sir
Assess AI literacy among teachers and develop effective training programmes
Teachers need AI competence to guide students and supervise AI tools.
Speaker: Patil Sir
Evaluate the impact of introducing an AI curriculum at early grades (e.g., third grade) on student understanding and attitudes
Early exposure may shape future AI competence and responsible use.
Speaker: Patil Sir
Quantify AI’s impact on labour productivity and economic output (e.g., reported 24% increase)
Measuring macro‑economic effects informs policy and investment decisions.
Speaker: Patil Sir
Develop ethical frameworks and guidelines for AI use in education to prevent misuse and over‑reliance
Ensures responsible deployment and protects students from harmful outcomes.
Speakers: Patil Sir, Pankaj Sir, Aditi Nanda
Explore AI‑driven assessment methods and the need for AI supervision in curriculum design
AI can streamline evaluation but requires oversight to maintain quality and fairness.
Speaker: Pankaj Sir
Promote development of AI models trained on Indian knowledge systems and regional languages
Reduces bias, enhances relevance, and preserves cultural heritage.
Speaker: Pankaj Sir
Design AI‑oriented regulatory bodies (e.g., Viksit Bharat Adhishthan) for teacher‑education assessment and standard‑setting
Automation can increase efficiency and consistency in teacher evaluation.
Speaker: Pankaj Sir
Shift from product‑only evaluation to process‑rich evidence of learning using AI analytics
Provides deeper insight into learning trajectories rather than single test scores.
Speaker: Pankaj Sir
Implement AI‑based mentorship matching platforms (NPST, NMM) to improve mentor‑mentee alignment
AI can analyse queries and anxiety to pair students with suitable mentors.
Speaker: Pankaj Sir
Use AI for dropout detection and intervention in schools to reduce high dropout rates
Early identification enables targeted support and improves retention.
Speaker: Patil Sir
Study integration models between school and higher‑education institutions facilitated by AI platforms
Improves continuity, resource sharing, and pathways from K‑12 to university.
Speaker: Patil Sir
Develop AI‑focused curricula spanning K‑12 to higher education, including industry‑linked projects (e.g., AI in manufacturing)
Aligns education with future workforce needs and provides practical experience.
Speaker: Aditi Nanda
Investigate on‑device AI inference to reduce reliance on cloud services and limit hallucinations
Improves privacy, reliability, and accessibility, especially in low‑connectivity areas.
Speaker: Aditi Nanda
Examine AI’s impact on creativity and critical thinking to avoid shortcuts that diminish cognitive development
Ensures AI augments rather than replaces human thought processes.
Speaker: Prof. KK Aggarwal
Research the evolving role of teachers as mentors, learning designers, and ethical guides in AI‑augmented classrooms
Defines future teacher competencies in an AI‑rich environment.
Speaker: Pankaj Sir
Analyze AI’s effect on employment patterns and overall economic growth in India
Understanding AI’s macro‑economic impact guides national strategy and education planning.
Speaker: Speaker 2 (unnamed senior participant)

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.